Repository: daleroberts/bv
Branch: master
Commit: 479241ef5cc9
Files: 3
Total size: 15.0 KB

Directory structure:
gitextract_l4p76iwy/

├── .gitignore
├── README.md
└── bv

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# IPython Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# dotenv
.env

# virtualenv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject

.DS_Store


================================================
FILE: README.md
================================================

**bv** is a small tool to quickly view high-resolution multi-band imagery
directly in your [iTerm 2](https://www.iterm2.com) terminal. It was designed for
visualising very large images located on a remote machine over a low-bandwidth
connection. It subsamples and compresses the image, then sends it over the wire as a
base64-encoded PNG (hence the name "bv") that iTerm 2 inlines in your terminal.
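The inline display relies on iTerm 2's proprietary OSC `1337;File=` escape sequence, which carries the base64-encoded file. Here is a minimal sketch of the sequence that bv's `imgcat` function emits (non-tmux case, auto sizing; the payload below is placeholder bytes, not a real PNG):

```python
from base64 import b64encode

def iterm2_inline(data: bytes) -> bytes:
    """Wrap raw image bytes in iTerm 2's OSC 1337 inline-image sequence."""
    osc, st = b'\033]', b'\a'  # OSC introducer and string terminator
    header = b'1337;File=;size=%d;inline=1;width=auto;height=auto:' % len(data)
    return osc + header + b64encode(data) + st

payload = iterm2_inline(b'fake-png-bytes')
```

Writing `payload` to a raw terminal stream (as `imgcat` does via `sys.stdout.buffer`) is what makes iTerm 2 render the image in place.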

<img src="https://github.com/daleroberts/bv/raw/master/docs/trump.png" width="800">

Now, go and compare the above to [old-school rendering](https://camo.githubusercontent.com/a6c791a0b4d97315d00b6592f918fe744abe00e6/687474703a2f2f692e696d6775722e636f6d2f556e666e704d722e706e67)
or my other tool [tv](https://github.com/daleroberts/tv). Welcome to 2017!

# Some Examples

Here are a number of examples that show how this tool can be used.

## Big image over small connection

Display a 3.5 billion pixel single-band image (3.3GB) using only 467KB over an SSH connection.

<img src="https://github.com/daleroberts/bv/raw/master/docs/bigimg.png" width="800">

## Different band combinations

Display a six-band image (7.2GB) using only 1.1MB over an SSH connection. Here,
we put bands 5-4-3 into the RGB channels using `-b 5 -b 4 -b 3` (ordering
matters) and set the width of the output image to 600 pixels using `-w 600`.

<img src="https://github.com/daleroberts/bv/raw/master/docs/bands.png" width="800">

You can also specify a single band to display (e.g., `-b 1`).

## Subset images

You can subset images using `gdal_translate` syntax, i.e. `-srcwin xoff yoff
xsize ysize`. For example, displaying only a small 1000x1000 area of the same large image above.

<img src="https://github.com/daleroberts/bv/raw/master/docs/subset.png" width="800">
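The window is a plain pixel-space crop: x/y offsets plus a width and height. A minimal pure-Python sketch of the semantics, with a nested list standing in for a raster band:

```python
def srcwin(raster, xoff, yoff, xsize, ysize):
    """Crop a 2D raster (list of rows) the way -srcwin xoff yoff xsize ysize does."""
    return [row[xoff:xoff + xsize] for row in raster[yoff:yoff + ysize]]

# A 4x4 toy raster; take a 2x2 window starting at column 1, row 2.
grid = [[r * 10 + c for c in range(4)] for r in range(4)]
window = srcwin(grid, xoff=1, yoff=2, xsize=2, ysize=2)
```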

This allows you to quickly identify regions of your image and then paste the same options 
into `gdal_translate` to complete your desired workflow. For example:
```
remote$ gdal_translate tasmania-2014.tif -b 5 -b 4 -b 3 -srcwin 12000 11000 1000 1000 -of PNG -ot UInt16 -scale 0 4000 ~/out.png
Input file size is 20000, 16000
0...10...20...30...40...50...60...70...80...90...100 - done.
remote$
```

## Machine learning multi-class outputs with different color maps

Sometimes you might have a single-band image that only contains classes
(integers). Different color maps can be applied to these single-band images
using the `-cm` option and any choice from [matplotlib's
colormaps](http://matplotlib.org/examples/color/colormaps_reference.html).
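Internally, bv normalises the band to [0, 1] and passes it through the matplotlib colormap (`cm((bnd - dmin) / (dmax - dmin))` in the script). The sketch below mimics that step with a made-up black-to-red gradient (`toy_cmap`) in place of a real matplotlib colormap, so it stays dependency-free:

```python
def normalize(band):
    """Scale values to [0, 1], as bv does before applying a colormap."""
    lo, hi = min(band), max(band)
    return [(v - lo) / (hi - lo) for v in band]

def toy_cmap(t):
    """Stand-in for a matplotlib colormap: linear black-to-red RGB."""
    return (round(255 * t), 0, 0)

classes = [0, 1, 2, 4]  # toy single-band class labels
rgb = [toy_cmap(t) for t in normalize(classes)]
```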

<img src="https://github.com/daleroberts/bv/raw/master/docs/colors.png" width="800">

## URLs

The **bv** tool can read from URLs (see the Trump image above). It can also
parse URLs on `stdin`, which allows you to [do
things](https://github.com/developmentseed/landsat-util) like this to quickly
display available Landsat images roughly over Dubai.

```
remote$ landsat search --lat 25 --lon 55 --latest 3 | bv -urls -
```

## Standard Input

Filenames can be read from `stdin`. For example:
```
ls -1 *.tif | bv -w 100 -
```

## Compression

The level of compression can be changed using the `-zlevel` option (0-9).
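PNG compression is DEFLATE (zlib) underneath, so `-zlevel` makes the usual speed-for-size trade: higher levels spend more CPU for a smaller transfer. A stdlib illustration of the effect, on arbitrary repetitive sample data:

```python
import zlib

data = b'bv ' * 2000          # repetitive sample data compresses well
fast = zlib.compress(data, 1)  # like -zlevel 1: quick, larger
small = zlib.compress(data, 9)  # like -zlevel 9: slower, smallest
```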

## Stacking images

If your bands are located in separate images then you can stack them and display them
in the RGB channels using
```
bv -stack RED.tif GREEN.tif BLUE.tif
```

There is also the `-revstack` option to do it in reverse order.

## Subsampling algorithm

The subsampling algorithm can be changed using the `-r` option (same syntax as GDAL). The available algorithms are:
- Nearest (`-r nearest`, the default)
- Average (`-r average`)
- Cubic Spline (`-r cubicspline`)
- Cubic (`-r cubic`)
- Mode (`-r mode`)
- Lanczos (`-r lanczos`)
- Bilinear (`-r bilinear`)

## Alpha channel

For single-band images, you can specify one or more pixel values to make
transparent via the alpha channel using `-alpha`. This is sometimes useful for
machine learning outputs where you do not want certain classes displayed. The
option can be repeated with different values (e.g. `-alpha 0 -alpha 5`).
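Under the hood, the alpha band is a mask that is opaque only where a pixel matches none of the requested values (the script computes `np.logical_and.reduce([bnd != n for n in alpha])`). A pure-Python sketch of the same rule:

```python
def alpha_mask(pixels, hidden):
    """1 (opaque) where a pixel matches none of the hidden values, else 0."""
    return [int(all(v != n for n in hidden)) for v in pixels]

mask = alpha_mask([0, 3, 7, 0, 5], hidden=[0, 5])
```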

## PDF, EPS, and PNG

The **bv** tool will display PDF, EPS, and PNG files inline without any
changes to those files. If you want to disable this behaviour, pass the
`-nop` option (an accepted abbreviation of `-nopassthrough`) so that GDAL
subsamples and re-encodes them instead.

## TMUX Support

<img src="https://github.com/daleroberts/bv/raw/master/docs/tmux.png" width="800">

# Configuration

You can save your default configuration by setting an alias in your `~/.profile` file. For example, I do:
```
alias bv='bv -w 800'
```

# Installation

It is just a single-file script, so all you need to do is put it in your
`PATH`. Dependencies are Python 3, GDAL 2, Numpy, Matplotlib, and iTerm 2. I've
found that the best way to install these dependencies is:
```bash
# Python 3
brew install python3

# Numpy and matplotlib
pip3 install numpy matplotlib

# GDAL 2
brew install gdal --HEAD --without-python
pip3 install gdal
```


================================================
FILE: bv
================================================
#!/usr/bin/env python3
"""
bv: Quickly view hyperspectral imagery, satellite imagery, and 
machine learning image outputs directly in your iTerm2 terminal.

Dale Roberts <dale.o.roberts@gmail.com>

http://www.github.com/daleroberts/bv
"""
import numpy as np
import shutil
try:
    from osgeo import gdal
except ImportError:  # older GDAL installs expose a top-level module
    import gdal
import sys
import os
import re

from urllib.request import urlopen, URLError
from os.path import splitext
from base64 import b64encode
from uuid import uuid4

gdal.UseExceptions()

SAMPLING = {'nearest': gdal.GRIORA_NearestNeighbour,
            'bilinear': gdal.GRIORA_Bilinear,
            'cubic': gdal.GRIORA_Cubic,
            'cubicspline': gdal.GRIORA_CubicSpline,
            'lanczos': gdal.GRIORA_Lanczos,
            'average': gdal.GRIORA_Average,
            'mode': gdal.GRIORA_Mode}
RE_URL = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
TMUX = os.getenv('TERM','').startswith('screen')

def sizefmt(num, suffix='B'):
    for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
        if abs(num) < 1024.0:
            return '%3.1f%s%s' % (num, unit, suffix)
        num /= 1024.0
    return '%.1f%s%s' % (num, 'Yi', suffix)


def typescale(data, dtype=np.uint8, scale=None):
    """Linearly rescale data to the full range of dtype, clipping outliers."""
    typeinfo = np.iinfo(dtype)
    low, high = typeinfo.min, typeinfo.max
    if scale:
        cmin, cmax = scale
    else:
        cmin, cmax = np.min(data), np.max(data)
    cscale = cmax - cmin
    scale = float(high - low) / cscale
    typedata = (data * 1.0 - cmin) * scale + 0.4999  # +0.4999 rounds on truncation
    with np.errstate(all='ignore'):
        typedata[typedata < low] = low
        typedata[typedata > high] = high
    return typedata.astype(dtype) + dtype(low)  # np.cast was removed in NumPy 2


def imgcat(data, lines=-1):
    if TMUX:
        if lines == -1:
            lines = 10
        osc = b'\033Ptmux;\033\033]'
        st = b'\a\033\\'
    else:
        osc = b'\033]'
        st = b'\a'
    csi = b'\033['
    buf = bytes()
    if lines > 0:
        buf += lines*b'\n' + csi + b'?25l' + csi + b'%dF' % lines + osc
        dims = b'width=auto;height=%d;preserveAspectRatio=1' % lines
    else:
        buf += osc
        dims = b'width=auto;height=auto'
    buf += b'1337;File=;size=%d;inline=1;' % len(data) + dims + b':'
    buf += b64encode(data) + st
    if lines > 0:
        buf += csi + b'%dE' % lines + csi + b'?25h'
    sys.stdout.buffer.write(buf)
    sys.stdout.flush()
    print()


def show(rbs, xoff, yoff, ow, oh, w=500, h=500, r='average', zlevel=1,
         cm='bone', alpha=None, scale=None, quiet=None, lines=-1):
    memdriver = gdal.GetDriverByName('MEM')
    if len(rbs) == 1:
        if alpha is None:
            md = memdriver.Create('', w, h, 3, gdal.GDT_UInt16)
        else:
            md = memdriver.Create('', w, h, 4, gdal.GDT_UInt16)
        bnd = rbs[0].ReadAsArray(xoff, yoff, ow, oh, buf_xsize=w,
                                 buf_ysize=h, resample_alg=SAMPLING[r])
        try:
            import matplotlib.cm as cms
            cm = getattr(cms, cm)
        except AttributeError:
            print('incorrect colormap, defaulting to "bone"')
            cm = getattr(cms, 'bone')
        dmin, dmax = bnd.min(), bnd.max()
        bnds = cm((bnd - dmin) / (dmax - dmin))
        for i in range(3):
            obnd = md.GetRasterBand(i + 1)
            obnd.WriteArray(typescale(bnds[:, :, i], np.uint16), 0, 0)
        if alpha is not None:
            obnd = md.GetRasterBand(4)
            mask = np.logical_and.reduce([bnd != n for n in alpha])
            obnd.WriteArray((65535 * mask).astype(np.uint16), 0, 0)
            obnd.SetColorInterpretation(gdal.GCI_AlphaBand)
    else:
        if len(rbs) == 4 or alpha is not None:  # RGBA
            md = memdriver.Create('', w, h, 4, gdal.GDT_UInt16)
        else:  # RGB
            md = memdriver.Create('', w, h, 3, gdal.GDT_UInt16)
            rbs = rbs[:3]
        for i, b in enumerate(rbs):
            bnd = b.ReadAsArray(xoff, yoff, ow, oh, buf_xsize=w,
                                buf_ysize=h, resample_alg=SAMPLING[r])
            obnd = md.GetRasterBand(i + 1)
            obnd.WriteArray(typescale(bnd, np.uint16, scale), 0, 0)
            if i == 3:  # alpha
                obnd.SetColorInterpretation(gdal.GCI_AlphaBand)
        if alpha is not None:
            obnd = md.GetRasterBand(4)
            mask = np.logical_and.reduce([bnd != n for n in alpha])
            obnd.WriteArray((65535 * mask).astype(np.uint16), 0, 0)
            obnd.SetColorInterpretation(gdal.GCI_AlphaBand)

    if zlevel is None:
        zlevel = 'ZLEVEL=1'
    else:
        zlevel = 'ZLEVEL={}'.format(zlevel)

    mmapfn = "/vsimem/" + uuid4().hex
    driver = gdal.GetDriverByName('PNG')
    fd = driver.CreateCopy(mmapfn, md, 0, [zlevel])

    size = gdal.VSIStatL(mmapfn, gdal.VSI_STAT_SIZE_FLAG).size
    fd = gdal.VSIFOpenL(mmapfn, 'rb')
    data = gdal.VSIFReadL(1, size, fd)
    gdal.VSIFCloseL(fd)

    imgcat(data, lines)

    gdal.Unlink(mmapfn)

    return size


def show_stacked(imgs, *args, **kwargs):
    b = kwargs.pop('b')

    fds = [gdal.Open(fd) for fd in imgs[:3]]
    rbs = [fd.GetRasterBand(1) for fd in fds]

    quiet = kwargs.pop('quiet')
    srcwin = kwargs.pop('srcwin')
    if srcwin is not None:
        xoff, yoff, ow, oh = srcwin
    else:
        xoff, yoff, ow, oh = 0, 0, fds[0].RasterXSize, fds[0].RasterYSize

    kwargs['h'] = int(oh / ow * kwargs['w'])

    size = show(rbs, xoff, yoff, ow, oh, **kwargs)

    fd = fds[0]
    geo = fd.GetGeoTransform()
    if not quiet:
        desc = '{}x{} pixels / {} bands.  [tfr: {}]'
        print(desc.format(fd.RasterYSize, fd.RasterXSize,
                          fd.RasterCount, sizefmt(size)))


def show_fd(fd, *args, **kwargs):
    b = kwargs.pop('b')
    rc = fd.RasterCount

    if rc == 1:
        rbs = [fd.GetRasterBand(1)]
    else:
        if b is None:
            if rc == 4:
                b = range(1, 5)
            else:
                b = range(1, 4)
        rbs = [fd.GetRasterBand(i) for i in b]

    srcwin = kwargs.pop('srcwin')
    if srcwin is not None:
        xoff, yoff, ow, oh = srcwin
    else:
        xoff, yoff, ow, oh = 0, 0, fd.RasterXSize, fd.RasterYSize

    kwargs['h'] = int(oh / ow * kwargs['w'])

    return show(rbs, xoff, yoff, ow, oh, **kwargs)


def show_fn(fn, *args, **kwargs):
    try:
        quiet = kwargs.pop('quiet')
        fd = gdal.Open(fn)
        size = show_fd(fd, *args, **kwargs)
        geo = fd.GetGeoTransform()
        if not quiet:
            desc = '{}x{} pixels / {} bands.  [tfr: {}]'
            print(desc.format(fd.RasterYSize, fd.RasterXSize,
                              fd.RasterCount, sizefmt(size)))
    except RuntimeError as e:
        print('Error:', e)
        sys.exit(1)
    except TypeError:
        print('Error: bad data. incorrect srcwin?')
        sys.exit(1)


def show_url(url, *args, **kwargs):
    mmapfn = None
    try:
        urlfd = urlopen(url, timeout=15)
        mmapfn = "/vsimem/" + uuid4().hex
        gdal.FileFromMemBuffer(mmapfn, urlfd.read())
        return show_fd(gdal.Open(mmapfn), *args, **kwargs)
    except URLError as e:
        print(e)
    finally:
        if mmapfn is not None:  # urlopen may fail before mmapfn is set
            gdal.Unlink(mmapfn)


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('-w', type=int, default=800)
    parser.add_argument('-b', action='append', type=int)
    parser.add_argument('-r', choices=SAMPLING.keys(), default='nearest')
    parser.add_argument('-cm', default="bone")
    parser.add_argument('-zlevel', type=int)
    parser.add_argument('-scale', nargs=2, type=float, metavar=('minval', 'maxval'))
    parser.add_argument('-alpha', action='append', type=int)
    parser.add_argument('-quiet', action='store_true')
    parser.add_argument('-stack', action='store_true')
    parser.add_argument('-revstack', action='store_true')
    parser.add_argument('-urls', action='store_true')
    parser.add_argument('-nofn', action='store_true')
    parser.add_argument('-nopassthrough', action='store_true')
    parser.add_argument('-lines', type=int, default=-1)
    parser.add_argument('-srcwin', nargs=4,
                        metavar=('xoff', 'yoff', 'xsize', 'ysize'),
                        type=int)
    parser.add_argument('img', nargs='+')
    kwargs = vars(parser.parse_args())

    imgs = kwargs.pop('img')
    urls = kwargs.pop('urls')
    nofn = kwargs.pop('nofn') or (imgs[0] != '-' and len(imgs) == 1)
    stack = kwargs.pop('stack')
    revstack = kwargs.pop('revstack')
    nop = kwargs.pop('nopassthrough')

    if TMUX:
        # dirty hack to make tmux integration work
        kwargs['w'] = min(kwargs['w'], 370)

    try:
        if not sys.stdin.isatty() or imgs[0] == '-':
            imgs = [line.strip() for line in sys.stdin.readlines()]

        if stack:
            show_stacked(imgs, **kwargs)
            sys.exit(0)

        if revstack:
            show_stacked(list(reversed(imgs)), **kwargs)
            sys.exit(0)

        for img in imgs:
            if urls:
                for url in re.findall(RE_URL, img):
                    if not nofn:
                        print(url)
                    show_url(url, **kwargs)
            else:
                if not nofn:
                    print(img)
                if not nop and splitext(img)[1][1:].lower() in ['png', 'pdf', 'eps']:
                    with open(img, 'rb') as fd:
                        data = fd.read()
                        imgcat(data, kwargs.pop('lines', -1))
                else:
                    show_fn(img, **kwargs)

    except KeyboardInterrupt:
        pass

    finally:
        print()