Full Code of ZFTurbo/volumentations for AI

master 1d0565a0534d cached
21 files
127.5 KB
32.1k tokens
228 symbols
1 requests
Download .txt
Repository: ZFTurbo/volumentations
Branch: master
Commit: 1d0565a0534d
Files: 21
Total size: 127.5 KB

Directory structure:
gitextract_swy1yiy_/

├── .github/
│   └── workflows/
│       └── python-package.yml
├── .gitignore
├── EXAMPLES.md
├── LICENSE
├── README.md
├── images/
│   └── examples.py
├── pyproject.toml
├── setup.py
├── test/
│   └── test_basic.py
├── tst_volumentations_speed.py
├── tst_volumentations_type_1.py
├── tst_volumentations_type_2.py
└── volumentations/
    ├── __init__.py
    ├── __version__.py
    ├── augmentations/
    │   ├── __init__.py
    │   ├── functional.py
    │   └── transforms.py
    ├── core/
    │   ├── __init__.py
    │   ├── composition.py
    │   └── transforms_interface.py
    └── random_utils.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/python-package.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Python package

on:
  push:
    branches: [ "master", "develop" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9", "3.10", "3.11"]

    steps:
    - uses: actions/checkout@v3
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v3
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        python -m pip install poetry
        poetry install --with dev
    - name: Lint with flake8
      run: |
        poetry run flake8 ./volumentations --count --select=E9,F63,F7,F82 --show-source --statistics
    - name: Test with pytest
      run: |
        poetry run pytest


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

================================================
FILE: EXAMPLES.md
================================================
# Function
$$ f(x, y, z) = \frac{\sin(xyz)}{xyz} $$
# No Augmentation
![](images/original.png)
![](images/original_flat.png)
# Downscale
![](images/Downscale.png)
![](images/Downscale_flat.png)
# ElasticTransform
![](images/ElasticTransform.png)
![](images/ElasticTransform_flat.png)
# GlassBlur
![](images/GlassBlur.png)
![](images/GlassBlur_flat.png)
# GridDistortion
![](images/GridDistortion.png)
![](images/GridDistortion_flat.png)
# GridDropout
![](images/GridDropout.png)
![](images/GridDropout_flat.png)
# RandomGamma
![](images/RandomGamma.png)
![](images/RandomGamma_flat.png)
# RandomScale2
![](images/RandomScale2.png)
![](images/RandomScale2_flat.png)
# RotatePseudo2D
![](images/RotatePseudo2D.png)
![](images/RotatePseudo2D_flat.png)


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2021 ZFTurbo

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# Volumentations 3D

3D Volume data augmentation package inspired by albumentations.

Volumentations is a working project, which originated from the following Git repositories:
- Original:                 https://github.com/albumentations-team/albumentations
- 3D Conversion:            https://github.com/ashawkey/volumentations
- Continued Development:    https://github.com/ZFTurbo/volumentations

Nevertheless, if you are using this subpackage, please give credit to all authors including ashawkey, ZFTurbo, qubvel and muellerdo.

Initially inspired by [albumentations](https://github.com/albumentations-team/albumentations) library for augmentation of 2D images.

# Installation

```sh
pip install volumentations-3D
```

# Simple Example

```python
from volumentations import *

def get_augmentation(patch_size):
    return Compose([
        Rotate((-15, 15), (0, 0), (0, 0), p=0.5),
        RandomCropFromBorders(crop_value=0.1, p=0.5),
        ElasticTransform((0, 0.25), interpolation=2, p=0.1),
        Resize(patch_size, interpolation=1, resize_type=0, always_apply=True, p=1.0),
        Flip(0, p=0.5),
        Flip(1, p=0.5),
        Flip(2, p=0.5),
        RandomRotate90((1, 2), p=0.5),
        GaussianNoise(var_limit=(0, 5), p=0.2),
        RandomGamma(gamma_limit=(80, 120), p=0.2),
    ], p=1.0)

aug = get_augmentation((64, 128, 128))

img = np.random.randint(0, 255, size=(128, 256, 256), dtype=np.uint8)
lbl = np.random.randint(0, 1, size=(128, 256, 256), dtype=np.uint8)

# with mask
data = {'image': img, 'mask': lbl}
aug_data = aug(**data)
img, lbl = aug_data['image'], aug_data['mask']

# without mask
data = {'image': img}
aug_data = aug(**data)
img = aug_data['image']

```

* Check working usage example in [tst_volumentations_type_1.py](tst_volumentations_type_1.py)  
* Added another usage example / testing in [tst_volumentations_type_2.py](tst_volumentations_type_2.py)  

# Difference from initial version

* Diverse bug fixes.
* Implemented multiple augmentations.
* Approximation enhancements to be closer to Albumentations.

# Implemented 3D augmentations

Check the [EXAMPLES](EXAMPLES.md) page for visual demonstrations
```python
CenterCrop
ColorJitter
Contiguous
CropNonEmptyMaskIfExists
Downscale
ElasticTransform
ElasticTransformPseudo2D
Flip
Float
GaussianNoise
GlassBlur
GridDistortion
GridDropout
ImageCompression
Normalize
PadIfNeeded
RandomBrightnessContrast
RandomCrop
RandomCropFromBorders
RandomDropPlane
RandomGamma
RandomResizedCrop
RandomRotate90
RandomScale
RandomScale2
RemoveEmptyBorder
Resize
ResizedCropNonEmptyMaskIfExists
Rotate
RotatePseudo2D
Transpose
```

# Speed table

Speed in seconds per one sample.

| Aug name | Cube = 64px | Cube = 96px | Cube = 128px | Cube = 224px | Cube = 256px |
|----------|-------------|-------------|--------------|--------------|--------------|
| Rotate | 0.0402 | 0.1366 | 0.3246 | 1.7546 | 2.6349 | 
| RandomCropFromBorders| 0.0037 | 0.0129 | 0.0315 | 0.1634 | 0.2426 |
| ElasticTransform | 0.1588 | 0.5439 | 2.8649 | 11.8937 | 42.3886 |
| Resize (type = 0) | 0.4029 | 0.4077 | 0.4245 | 0.5545 | 0.6278 |
| Resize (type = 1) | 0.3618 | 0.3696 | 0.3871 | 0.5174 | 0.5896 |
| Flip | 0.0042 | 0.0134 | 0.0314 | 0.1649 | 0.2453 |
| RandomRotate90 | 0.0040 | 0.0140 | 0.0306 | 0.1672 | 0.2439 |
| GaussianNoise | 0.0143 | 0.0406 | 0.0956 | 0.4992 | 0.7381 |
| RandomGamma | 0.0066 | 0.0211 | 0.0505 | 0.2654 |  0.3989 |
| RandomScale | 0.0158 | 0.0518 | 0.1198 | 0.6391 | 0.9457 |

### Related repositories

 * [timm_3d](https://github.com/ZFTurbo/timm_3d) - classification models in 3D for PyTorch
 * [classification_models_3D](https://github.com/ZFTurbo/classification_models_3D) - 3D volumes classification models for Keras/Tensorflow
 * [segmentation_models_pytorch_3d](https://github.com/ZFTurbo/segmentation_models_pytorch_3d) - 3D volumes segmentation models for PyTorch
 * [segmentation_models_3D](https://github.com/ZFTurbo/segmentation_models_3D) - segmentation models in 3D for Keras/Tensorflow

# Citation

For more details, please refer to the publication: https://doi.org/10.1016/j.compbiomed.2021.105089

If you find this code useful, please cite it as:
```
@article{solovyev20223d,
  title={3D convolutional neural networks for stalled brain capillary detection},
  author={Solovyev, Roman and Kalinin, Alexandr A and Gabruseva, Tatiana},
  journal={Computers in Biology and Medicine},
  volume={141},
  pages={105089},
  year={2022},
  publisher={Elsevier},
  doi={10.1016/j.compbiomed.2021.105089}
}
```


================================================
FILE: images/examples.py
================================================
"""
This script requires (in addition to volumentations requirements):
    * plotly
    * kaleido
"""
import numpy as np
from volumentations import *
from plotly import graph_objects as go
from plotly import express as px


augmentations = [
    Downscale(.5, .51),
    ElasticTransform((.7, .71)),
    GlassBlur(),
    GridDistortion(distort_limit=.5),
    GridDropout(holes_number_x=3, holes_number_y=3, holes_number_z=3, random_offset=True, fill_value=.5),
    RandomGamma(gamma_limit=(70, 71)),
    RandomScale2(scale_limit=[1.5, 1.6]),
    RotatePseudo2D((1, 2), limit=(40, 41)),
]

X, Y, Z = np.mgrid[-8:8:40j, -8:8:40j, -8:8:40j]
values = np.sin(X*Y*Z) / (X*Y*Z)

fig = go.Figure(data=go.Isosurface(
    x=X.flatten(),
    y=Y.flatten(),
    z=Z.flatten(),
    value=values.flatten(),
    isomin=.1,
    isomax=.9,
    opacity=.5,
    surface_count=6,
    caps=dict(x_show=False, y_show=False, z_show=False),
    colorscale="gray"
))
fig.write_image("images/original.png")
fig = px.imshow(values[20], color_continuous_scale="gray", zmin=values[20].min(), zmax=values[20].max())
fig.write_image("images/original_flat.png")

for aug in augmentations:
    cube = aug(True, ["image"], image=values)["image"]

    fig = go.Figure(data=go.Isosurface(
        x=X.flatten(),
        y=Y.flatten(),
        z=Z.flatten(),
        value=cube.flatten(),
        isomin=.1,
        isomax=.9,
        opacity=.5,
        surface_count=6,
        caps=dict(x_show=False, y_show=False, z_show=False),
        colorscale="gray"
    ))
    name = aug.__class__.__name__
    print(f"images/{name}.png")
    fig.write_image(f"images/{name}.png")

    fig = px.imshow(cube[20], color_continuous_scale="gray", zmin=values[20].min(), zmax=values[20].max())
    print(f"images/{name}_flat.png")
    fig.write_image(f"images/{name}_flat.png")


================================================
FILE: pyproject.toml
================================================
[tool.poetry]
name = "volumentations-3d"
version = "1.0.4"
description = "Library for 3D augmentations"
authors = [
    "Roman Sol (ZFTurbo)",
    "ashawkey",
    "qubvel",
    "muellerdo"
]
license = "MIT"
readme = "README.md"
packages = [{include = "volumentations"}]

[tool.poetry.dependencies]
python = ">=3.9, <3.12"
scikit-image = "^0.20.0"
opencv-python = "^4.7.0.72"
numpy = "^1.24.3"

[tool.poetry.group.dev.dependencies]
pytest = "^7.3.1"
flake8 = "^6.0.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


================================================
FILE: setup.py
================================================
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

setup(
    name='volumentations_3D',
    version='1.0.4',
    author='Roman Sol (ZFTurbo), ashawkey, qubvel, muellerdo',
    packages=['volumentations', 'volumentations/augmentations', 'volumentations/core'],
    url='https://github.com/ZFTurbo/volumentations',
    description='Library for 3D augmentations',
    long_description='Library for 3D augmentations. Inspired by albumentations.'
                     'More details: https://github.com/ZFTurbo/volumentations',
    install_requires=[
        'scikit-image',
        'scipy',
        'opencv-python',
        "numpy",
    ],
)


================================================
FILE: test/test_basic.py
================================================
import pytest
import numpy as np

from volumentations import *


augmentations = [
    CenterCrop,
    ColorJitter,
    Contiguous,
    # CropNonEmptyMaskIfExists,
    Downscale,
    ElasticTransform,
    ElasticTransformPseudo2D,
    Flip,
    Float,
    GaussianNoise,
    GlassBlur,
    GridDistortion,
    GridDropout,
    ImageCompression,
    Normalize,
    PadIfNeeded,
    RandomBrightnessContrast,
    RandomCrop,
    RandomCropFromBorders,
    RandomDropPlane,
    RandomGamma,
    RandomResizedCrop,
    RandomRotate90,
    RandomScale,
    RandomScale2,
    RemoveEmptyBorder,
    Resize,
    # ResizedCropNonEmptyMaskIfExists,
    Rotate,
    RotatePseudo2D,
    Transpose,
]

arg_required = [
    CenterCrop,
    CropNonEmptyMaskIfExists,
    PadIfNeeded,
    RandomCrop,
    RandomResizedCrop,
    Resize,
    ResizedCropNonEmptyMaskIfExists,
]


@pytest.fixture(scope="module")
def cube():
    X, Y, Z = np.mgrid[-8:8:40j, -8:8:40j, -8:8:40j]
    values = np.sin(X*Y*Z) / (X*Y*Z)
    return values


@pytest.mark.parametrize("aug_class", augmentations)
def test_augmentations(aug_class, cube):
    if aug_class in arg_required:
        aug = aug_class(shape=(30, 30, 30))
    else:
        aug = aug_class()
    new_cube = aug(True, "image", image=cube)["image"]
    print(aug_class.__name__, new_cube.shape)


================================================
FILE: tst_volumentations_speed.py
================================================
# coding: utf-8
__author__ = 'ZFTurbo: https://kaggle.com/zfturbo'


from volumentations import *
import time


def tst_volumentations_speed():
    total_volumes_to_check = 100
    sizes_list = [
        (64, 64, 64),
        (96, 96, 96),
        (128, 128, 128),
        (224, 224, 224),
        (256, 256, 256),
    ]

    for size in sizes_list:
        patch_size1 = (32, 32, 32)
        patch_size2 = (200, 200, 200)

        full_list_to_check = [
            Rotate((-15, 15), (-15, 15), (-15, 15), p=1.0),
            RandomCropFromBorders(crop_value=0.1, p=1.0),
            ElasticTransform((0, 0.25), interpolation=2, p=1.0),
            Resize(patch_size1, interpolation=1, resize_type=0, always_apply=True, p=1.0),
            Resize(patch_size1, interpolation=1, resize_type=1, always_apply=True, p=1.0),
            Resize(patch_size2, interpolation=1, resize_type=0, always_apply=True, p=1.0),
            Resize(patch_size2, interpolation=1, resize_type=1, always_apply=True, p=1.0),
            Flip(0, p=1.0),
            Flip(1, p=1.0),
            Flip(2, p=1.0),
            RandomRotate90((1, 2), p=1.0),
            GaussianNoise(var_limit=(0, 5), p=1.0),
            RandomGamma(gamma_limit=(80, 120), p=1.0),
            RandomScale(scale_limit=[0.9, 1.1], interpolation=1, always_apply=True, p=1.0)
        ]

        for f in full_list_to_check:
            name = f.__class__.__name__
            aug1 = Compose([
                f,
            ], p=1.0)

            start_time = time.time()
            data = []
            for i in range(total_volumes_to_check):
                data.append(np.random.uniform(low=0.0, high=255, size=size))

            for i, cube in enumerate(data):
                try:
                    cube1 = aug1(image=cube)['image']
                except Exception as e:
                    print('Augmentation error: {}'.format(str(e)))
                    continue

            delta = time.time() - start_time
            print('Size: {} Aug: {} Time: {:.2f} sec Per sample: {:.4f} sec'.format(size, name, delta, delta / len(data)))
            print(f.__dict__)


if __name__ == '__main__':
    tst_volumentations_speed()


================================================
FILE: tst_volumentations_type_1.py
================================================
# coding: utf-8
__author__ = 'ZFTurbo: https://kaggle.com/zfturbo'


from volumentations import *
import os
import cv2
import urllib.request
import time


OUTPUT_DIR = './debug_videos/'
if not os.path.isdir(OUTPUT_DIR):
    os.mkdir(OUTPUT_DIR)


def read_video(f):
    cap = cv2.VideoCapture(f)
    length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    current_frame = 0
    frame_list = []
    print('ID: {} Video length: {} Width: {} Height: {} FPS: {}'.format(os.path.basename(f), length, width, height, fps))
    while (cap.isOpened()):
        ret, frame = cap.read()
        if ret is False:
            break
        frame_list.append(frame.copy())
        current_frame += 1

    frame_list = np.array(frame_list, dtype=np.uint8)
    return frame_list


def get_augmentation_v1(patch_size):
    return Compose([
        Rotate((-15, 15), (0, 0), (0, 0), p=0.5),
        RandomCropFromBorders(crop_value=0.1, p=0.5),
        ElasticTransform((0, 0.25), interpolation=2, p=0.1),
        RandomDropPlane(plane_drop_prob=0.1, axes=(0, 1, 2), p=0.5),
        Resize(patch_size, interpolation=1, always_apply=True, p=1.0),
        Flip(0, p=0.5),
        Flip(1, p=0.5),
        Flip(2, p=0.5),
        RandomRotate90((1, 2), p=0.5),
        GaussianNoise(var_limit=(0, 5), p=0.5),
        RandomGamma(gamma_limit=(80, 120), p=0.5),
    ], p=1.0)


def get_augmentation_v2(patch_size):
    return Compose([
        Resize(
            patch_size,
            interpolation=1,
            resize_type=1,
            always_apply=True,
            p=1.0
        ),
    ], p=1.0)



def create_video(image_list, out_file, fps):
    height, width = image_list[0].shape[:2]
    # fourcc = cv2.VideoWriter_fourcc(*'DIB ')
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    # fourcc = cv2.VideoWriter_fourcc(*'H264')
    # fourcc = -1
    video = cv2.VideoWriter(out_file, fourcc, fps, (width, height), True)
    for im in image_list:
        if len(im.shape) == 2:
            im = np.stack((im, im, im), axis=2)
        video.write(im.astype(np.uint8))
    cv2.destroyAllWindows()
    video.release()


def tst_volumentations():
    number_of_aug_videos = 10
    out_shape = (150, 224, 360)
    inp_video = 'sample.mp4'
    if not os.path.isfile(inp_video):
        print('Downloading sample.mp4...')
        urllib.request.urlretrieve('https://github.com/ZFTurbo/volumentations/releases/download/v1.0/sample.mp4', inp_video)

    cube = read_video(inp_video)
    print('Sample video shape: {}'.format(cube.shape))
    aug = get_augmentation_v1(out_shape)
    start_time = time.time()
    for i in range(number_of_aug_videos):
        single_time = time.time()
        data = {'image': cube}
        aug_data = aug(**data)
        img = aug_data['image']
        create_video(img, OUTPUT_DIR + 'video_test_{}.avi'.format(i), 24)
        print('Aug: {} Time: {:.2f} sec'.format(i, time.time() - single_time))
    print('Total augm time: {:.2f} sec'.format(time.time() - start_time))


if __name__ == '__main__':
    tst_volumentations()


================================================
FILE: tst_volumentations_type_2.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
#-----------------------------------------------------#
#                   Library imports                   #
#-----------------------------------------------------#
# External libraries
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from skimage.data import cells3d
# Volumentations libraries
from volumentations import Compose
from volumentations import augmentations as ai

# -----------------------------------------------------#
#                    GIF Visualizer                    #
# -----------------------------------------------------#
def grayscale_normalization(image):
    # Identify minimum and maximum
    max_value = np.max(image)
    min_value = np.min(image)
    # Scaling
    image_scaled = (image - min_value) / (max_value - min_value)
    image_normalized = np.around(image_scaled * 255, decimals=0)
    # Return normalized image
    return image_normalized


def visualize_evaluation(index, volume, viz_path="test_volumentations"):
    # Grayscale Normalization of Volume
    volume_gray = grayscale_normalization(volume)

    # Create a figure and two axes objects from matplot
    fig = plt.figure()
    img = plt.imshow(volume_gray[0, :, :], cmap='gray', vmin=0, vmax=255,
                     animated=True)

    # Update function to show the slice for the current frame
    def update(i):
        plt.suptitle("Augmentation: " + str(index) + " - " + "Slice: " + str(i))
        img.set_data(volume_gray[i, :, :])
        return img

    # Compute the animation (gif)
    ani = animation.FuncAnimation(fig, update, frames=volume_gray.shape[0],
                                  interval=5, repeat_delay=0, blit=False)
    # Set up the output path for the gif
    if not os.path.exists(viz_path):
        os.mkdir(viz_path)
    file_name = "visualization." + str(index) + ".gif"
    out_path = os.path.join(viz_path, file_name)
    # Save the animation (gif)
    ani.save(out_path, writer='imagemagick', fps=None, dpi=None)
    # Close the matplot
    plt.close()


#-----------------------------------------------------#
#                Albumentations Builder               #
#-----------------------------------------------------#
""" Builds the albumenations augmentator by initializing  all transformations.
    The activated transformation and their configurations are defined as
    class variables.

    -> Builds a new self.operator
"""
def build(aug_flip, aug_rotate, aug_brightness, aug_contrast, aug_saturation,
          aug_hue, aug_scale, aug_crop, aug_gridDistortion, aug_compression,
          aug_gaussianNoise, aug_gaussianBlur, aug_downscaling, aug_gamma,
          aug_elasticTransform):
    # Initialize transform list
    transforms = []
    # Fill transform list
    if aug_flip:
        tf = ai.Flip(p=0.5)
        transforms.append(tf)
    if aug_rotate:
        tf = ai.RandomRotate90(p=0.5)
        transforms.append(tf)
    if aug_brightness:
        tf = ai.ColorJitter(contrast=0, hue=0, saturation=0,
                            p=0.5)
        transforms.append(tf)
    if aug_contrast:
        tf = ai.ColorJitter(brightness=0, hue=0, saturation=0,
                            p=0.5)
        transforms.append(tf)
    if aug_saturation:
        tf = ai.ColorJitter(brightness=0, contrast=0, hue=0,
                            p=0.5)
        transforms.append(tf)
    if aug_hue:
        tf = ai.ColorJitter(brightness=0, contrast=0, saturation=0,
                            p=0.5)
        transforms.append(tf)
    if aug_scale:
        tf = ai.RandomScale(p=0.5)
        transforms.append(tf)
    if aug_crop:
        tf = ai.RandomCrop(shape=(30, 128, 128), p=0.5)
        transforms.append(tf)
    if aug_gridDistortion:
        tf = ai.GridDistortion(p=0.5)
        transforms.append(tf)
    if aug_compression:
        tf = ai.ImageCompression(p=0.5)
        transforms.append(tf)
    if aug_gaussianNoise:
        tf = ai.GaussianNoise(p=0.5)
        transforms.append(tf)
    if aug_gaussianBlur:
        tf = ai.GlassBlur(p=0.5)
        transforms.append(tf)
    if aug_downscaling:
        tf = ai.Downscale(p=0.5)
        transforms.append(tf)
    if aug_gamma:
        tf = ai.RandomGamma(p=0.5)
        transforms.append(tf)
    if aug_elasticTransform:
        tf = ai.ElasticTransform(p=0.5)
        transforms.append(tf)

    # Compose transforms
    return Compose(transforms)


#-----------------------------------------------------#
#                  Application Test                   #
#-----------------------------------------------------#
if __name__ == "__main__":
    # Obtain 3D volume of fluorescence microscopy image of cells
    data_raw = cells3d()
    # Extract nuclei
    data = np.reshape(data_raw[:,1,:,:], (60, 256, 256))
    data = np.float32(data)
    data = grayscale_normalization(data)
    # Visualize original volume
    visualize_evaluation("original", data)
    print(data)
    print("original", data.shape)
    # Setup options
    options = [False for x in range(15)]
    options_labels = ["flip", "rotate", "brightness", "contrast", "saturation",
                      "hue", "scale", "crop", "grid_distortion", "compression",
                      "gaussian_noise", "gaussian_blur", "downscaling", "gamma",
                      "elastic_transform"]
    # Apply each augmentation once for testing
    for i in range(15):
        # Active current augmentation technique
        options_curr = options.copy()
        options_curr[i] = True
        # Initialize Volumentations
        data_aug = build(*options_curr)
        # Apply augmentation
        img_augmented = data_aug(image=data)["image"]
        # Visualize result
        print(options_labels[i], img_augmented.shape)
        visualize_evaluation(options_labels[i], img_augmented)


================================================
FILE: volumentations/__init__.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
from .augmentations.transforms import *
from .core.composition import *
from .core.transforms_interface import *


================================================
FILE: volumentations/__version__.py
================================================
__version__ = "1.0.4"


================================================
FILE: volumentations/augmentations/__init__.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
from .functional import *
from .transforms import *


================================================
FILE: volumentations/augmentations/functional.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
import numpy as np
from functools import wraps
import skimage.transform as skt
import scipy.ndimage.interpolation as sci
from scipy.ndimage import zoom
import cv2
from scipy.ndimage import gaussian_filter
from scipy.ndimage import map_coordinates
from warnings import warn
from itertools import product


MAX_VALUES_BY_DTYPE = {
    np.dtype("uint8"): 255,
    np.dtype("uint16"): 65535,
    np.dtype("uint32"): 4294967295,
    np.dtype("float32"): 1.0,
}


"""
vol: [H, W, D(, C)]

x, y, z <--> H, W, D

you should give (H, W, D) form shape.

skimage interpolation notations:

order = 0: Nearest-Neighbor
order = 1: Bi-Linear (default)
order = 2: Bi-Quadratic
order = 3: Bi-Cubic
order = 4: Bi-Quartic
order = 5: Bi-Quintic

Interpolation behaves strangely when input of type int.
** Be sure to change volume and mask data type to float !!! **
"""


def preserve_shape(func):
    """
    Preserve shape of the image
    """

    @wraps(func)
    def wrapped_function(img, *args, **kwargs):
        shape = img.shape
        result = func(img, *args, **kwargs)
        result = result.reshape(shape)
        return result

    return wrapped_function

def rotate2d(img, angle, axes=(0,1), reshape=False, interpolation=1, border_mode='reflect', value=0):
    return sci.rotate(img, angle, axes, reshape=reshape, order=interpolation, mode=border_mode, cval=value)


def shift(img, shift, interpolation=1, border_mode='reflect', value=0):
    return sci.shift(img, shift, order=interpolation, mode=border_mode, cval=value)


def crop(img, x1, y1, z1, x2, y2, z2):
    height, width, depth = img.shape[:3]
    if x2 <= x1 or y2 <= y1 or z2 <= z1:
        raise ValueError
    if x1 < 0 or y1 < 0 or z1 < 0:
        raise ValueError
    if x2 > height or y2 > width or z2 > depth:
        img = pad(img, (x2, y2, z2))
        warn('image size smaller than crop size, pad by default.', UserWarning)

    return img[x1:x2, y1:y2, z1:z2]


def get_center_crop_coords(height, width, depth, crop_height, crop_width, crop_depth):
    x1 = (height - crop_height) // 2
    x2 = x1 + crop_height
    y1 = (width - crop_width) // 2
    y2 = y1 + crop_width
    z1 = (depth - crop_depth) // 2
    z2 = z1 + crop_depth
    return x1, y1, z1, x2, y2, z2


def center_crop(img, crop_height, crop_width, crop_depth):
    height, width, depth = img.shape[:3]
    if height < crop_height or width < crop_width or depth < crop_depth:
        raise ValueError
    x1, y1, z1, x2, y2, z2 = get_center_crop_coords(height, width, depth, crop_height, crop_width, crop_depth)
    img = img[x1:x2, y1:y2, z1:z2]
    return img


def get_random_crop_coords(height, width, depth, crop_height, crop_width, crop_depth, h_start, w_start, d_start):
    x1 = int((height - crop_height) * h_start)
    x2 = x1 + crop_height
    y1 = int((width - crop_width) * w_start)
    y2 = y1 + crop_width
    z1 = int((depth - crop_depth) * d_start)
    z2 = z1 + crop_depth
    return x1, y1, z1, x2, y2, z2


def random_crop(img, crop_height, crop_width, crop_depth, h_start, w_start, d_start):
    height, width, depth = img.shape[:3]
    if height < crop_height or width < crop_width or depth < crop_depth:
        img = pad(img, (crop_width, crop_height, crop_depth))
        warn('image size smaller than crop size, pad by default.', UserWarning)
    else:
        x1, y1, z1, x2, y2, z2 = get_random_crop_coords(height, width, depth, crop_height, crop_width, crop_depth, h_start, w_start, d_start)
        img = img[x1:x2, y1:y2, z1:z2]
    return img


def normalize(img, range_norm=True):
    if range_norm:
        mn = img.min()
        mx = img.max()
        if mx != mn:
            img = (img - mn) / (mx - mn)
    mean = img.mean()
    std = img.std()
    denominator = np.reciprocal(std)
    if np.isinf(denominator).any():
        img[...] = 0
    else:
        img = (img - mean) * denominator
    return img


def pad(image, new_shape, border_mode="reflect", value=0):
    '''
    image: [H, W, D, C] or [H, W, D]
    new_shape: [H, W, D]
    '''
    axes_not_pad = len(image.shape) - len(new_shape)

    old_shape = np.array(image.shape[:len(new_shape)])
    new_shape = np.array([max(new_shape[i], old_shape[i]) for i in range(len(new_shape))])

    difference = new_shape - old_shape
    pad_below = difference // 2
    pad_above = difference - pad_below

    pad_list = [list(i) for i in zip(pad_below, pad_above)] + [[0, 0]] * axes_not_pad

    if border_mode == 'reflect':
        res = np.pad(image, pad_list, border_mode)
    elif border_mode == 'constant':
        res = np.pad(image, pad_list, border_mode, constant_values=value)
    else:
        raise ValueError

    return res


def gaussian_noise(img, gauss):
    img = img.astype("float32")
    return img + gauss


def resize(img, new_shape, interpolation=1, resize_type=0):
    """
    img: [H, W, D, C] or [H, W, D]
    new_shape: [H, W, D]
    interpolation: The order of the spline interpolation (0-5)
    resize_type: what type of resize to use: scikit-image (0) or zoom (1)
    """

    if resize_type == 0:
        new_img = skt.resize(
            img,
            new_shape,
            order=interpolation,
            mode='reflect',
            cval=0,
            clip=True,
            anti_aliasing=False
        )
    else:
        shp = tuple(np.array(new_shape) / np.array(img.shape[:3]))

        if len(img.shape) == 4:
            # Multichannel
            data = []
            for i in range(img.shape[-1]):
                subimg = img[..., i].copy()
                d0 = zoom(subimg, shp, order=interpolation)
                data.append(d0.copy())
            new_img = np.stack(data, axis=-1)
        else:
            new_img = zoom(img.copy(), shp, order=interpolation)

    return new_img


def rescale(img, scale, interpolation=1):
    """
    img: [H, W, D, C] or [H, W, D]
    scale: scalar float
    """
    return skt.rescale(img, scale, order=interpolation, mode='reflect', cval=0,
                       clip=True, channel_axis=-1, anti_aliasing=False)
    """
    shape = [int(scale * i) for i in img.shape[:3]]
    return resize(img, shape, interpolation)
    """


@preserve_shape
def gamma_transform(img, gamma):
    if img.dtype == np.uint8:
        table = (np.arange(0, 256.0 / 255, 1.0 / 255) ** gamma) * 255
        img = cv2.LUT(img, table.astype(np.uint8))
    else:
        img = np.power(img, gamma)

    return img


def elastic_transform_pseudo2D(img, alpha, sigma, alpha_affine, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_REFLECT_101, value=None, random_state=42, approximate=False):
    """Elastic deformation of images as described in [Simard2003]_ (with modifications).
    Based on https://gist.github.com/erniejunior/601cdf56d2b424757de5

    .. [Simard2003] Simard, Steinkraus and Platt, "Best Practices for
         Convolutional Neural Networks applied to Visual Document Analysis", in
         Proc. of the International Conference on Document Analysis and
         Recognition, 2003.
    """
    random_state = np.random.RandomState(random_state)

    depth, height, width  = img.shape[:3]

    # Random affine
    center_square = np.float32((height, width)) // 2
    square_size = min((height, width)) // 3
    alpha = float(alpha)
    sigma = float(sigma)
    alpha_affine = float(alpha_affine)

    pts1 = np.float32(
        [
            center_square + square_size,
            [center_square[0] + square_size, center_square[1] - square_size],
            center_square - square_size,
        ]
    )
    pts2 = pts1 + random_state.uniform(-alpha_affine, alpha_affine, size=pts1.shape).astype(np.float32)
    matrix = cv2.getAffineTransform(pts1, pts2)

    # pseoudo 2D
    res = np.zeros_like(img)
    for d in range(depth):
        tmp = img[d, :, :] # [D, H, W, C]
        tmp = cv2.warpAffine(tmp, M=matrix, dsize=(width, height), flags=interpolation, borderMode=border_mode, borderValue=value)
        res[d, :, :] = tmp
    img = res


    if approximate:
        # Approximate computation smooth displacement map with a large enough kernel.
        # On large images (512+) this is approximately 2X times faster
        dx = random_state.rand(height, width).astype(np.float32) * 2 - 1
        cv2.GaussianBlur(dx, (17, 17), sigma, dst=dx)
        dx *= alpha

        dy = random_state.rand(height, width).astype(np.float32) * 2 - 1
        cv2.GaussianBlur(dy, (17, 17), sigma, dst=dy)
        dy *= alpha
    else:
        dx = np.float32(gaussian_filter((random_state.rand(height, width) * 2 - 1), sigma) * alpha)
        dy = np.float32(gaussian_filter((random_state.rand(height, width) * 2 - 1), sigma) * alpha)

    x, y = np.meshgrid(np.arange(width), np.arange(height))

    map_x = np.float32(x + dx)
    map_y = np.float32(y + dy)

    # pseoudo 2D
    res = np.zeros_like(img)
    for d in range(depth):
        tmp = img[:, :, d] # [H, W, C]
        tmp = cv2.remap(tmp, map1=map_x, map2=map_y, interpolation=interpolation, borderMode=border_mode, borderValue=value)
        res[:, :, d] = tmp
    img = res

    return img


"""
Later are coordinates-based 3D rotation and elastic transforms.
reference: https://github.com/MIC-DKFZ/batchgenerators
"""

def elastic_transform(img, sigmas, alphas, interpolation=1, border_mode='reflect', value=0, random_state=42):
    """
    img: [H, W, D(, C)]
    """
    coords = generate_coords(img.shape[:3])
    coords = elastic_deform_coords(coords, sigmas, alphas, random_state)
    coords = recenter_coords(coords)
    if len(img.shape) == 4:
        num_channels = img.shape[3]
        res = []
        for channel in range(num_channels):
            res.append(map_coordinates(img[:,:,:,channel], coords, order=interpolation, mode=border_mode, cval=value))
        return np.stack(res, -1)
    else:
        return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)


def generate_coords(shape):
    """
    coords: [n_dim=3, H, W, D]
    """
    tmp = tuple([np.arange(i) for i in shape])
    coords = np.array(np.meshgrid(*tmp, indexing='ij')).astype(float)
    for d in range(len(shape)):
        coords[d] -= ((np.array(shape).astype(float) - 1) / 2)[d]
    return coords


def elastic_deform_coords(coords, sigmas, alphas, random_state):
    random_state = np.random.RandomState(random_state)
    n_dim = coords.shape[0]
    if not isinstance(alphas, (tuple, list)):
        alphas = [alphas] * n_dim
    if not isinstance(sigmas, (tuple, list)):
        sigmas = [sigmas] * n_dim
    offsets = []
    for d in range(n_dim):
        offset = gaussian_filter((random_state.rand(*coords.shape[1:]) * 2 - 1), sigmas, mode="constant", cval=0)
        mx = np.max(np.abs(offset))
        offset = alphas[d] * offset / mx
        offsets.append(offset)
    offsets = np.array(offsets)
    coords += offsets
    return coords


def recenter_coords(coords):
    n_dim = coords.shape[0]
    mean = coords.mean(axis=tuple(range(1, len(coords.shape))), keepdims=True)
    coords -= mean
    for d in range(n_dim):
        ctr = int(np.round(coords.shape[d+1]/2))
        coords[d] += ctr
    return coords


def rotate3d(img, x, y, z, interpolation=1, border_mode='reflect', value=0):
    """
    img: [H, W, D(, C)]
    x, y, z: angle in degree.
    """
    x, y, z = [np.pi*i/180 for i in [x, y, z]]
    coords = generate_coords(img.shape[:3])
    coords = rotate_coords(coords, x, y, z)
    coords = recenter_coords(coords)
    if len(img.shape) == 4:
        num_channels = img.shape[3]
        res = []
        for channel in range(num_channels):
            res.append(map_coordinates(img[:,:,:,channel], coords, order=interpolation, mode=border_mode, cval=value))
        return np.stack(res, -1)
    else:
        return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)


def rotate_coords(coords, angle_x, angle_y, angle_z):
    rot_matrix = np.identity(len(coords))
    rot_matrix = rot_matrix @ rot_x(angle_x)
    rot_matrix = rot_matrix @ rot_y(angle_y)
    rot_matrix = rot_matrix @ rot_z(angle_z)
    coords = np.dot(coords.reshape(len(coords), -1).transpose(), rot_matrix).transpose().reshape(coords.shape)
    return coords


def rot_x(angle):
    rotation_x = np.array([[1, 0, 0],
                           [0, np.cos(angle), -np.sin(angle)],
                           [0, np.sin(angle), np.cos(angle)]])
    return rotation_x


def rot_y(angle):
    rotation_y = np.array([[np.cos(angle), 0, np.sin(angle)],
                           [0, 1, 0],
                           [-np.sin(angle), 0, np.cos(angle)]])
    return rotation_y


def rot_z(angle):
    rotation_z = np.array([[np.cos(angle), -np.sin(angle), 0],
                           [np.sin(angle), np.cos(angle), 0],
                           [0, 0, 1]])
    return rotation_z


def rescale_warp(img, scale, interpolation=1, border_mode='reflect', value=0):
    """
    img: [H, W, D(, C)]
    """
    coords = generate_coords(img.shape[:3])
    coords = scale_coords(coords, scale)
    coords = recenter_coords(coords)
    if len(img.shape) == 4:
        num_channels = img.shape[3]
        res = []
        for channel in range(num_channels):
            res.append(map_coordinates(img[:,:,:,channel], coords, order=interpolation, mode=border_mode, cval=value))
        return np.stack(res, -1)
    else:
        return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)


def scale_coords(coords, scale):
    if isinstance(scale, (tuple, list, np.ndarray)):
        assert len(scale) == len(coords)
        for i in range(len(scale)):
            coords[i] *= scale[i]
    else:
        coords *= scale
    return coords


def clamping_crop(img, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_max):
    d, h, w = img.shape[:3]
    if sh0_min < 0:
        sh0_min = 0
    if sh1_min < 0:
        sh1_min = 0
    if sh2_min < 0:
        sh2_min = 0
    if sh0_max > d:
        sh0_max = d
    if sh1_max > h:
        sh1_max = h
    if sh2_max > w:
        sh2_max = w
    return img[int(sh0_min): int(sh0_max), int(sh1_min): int(sh1_max), int(sh2_min): int(sh2_max)]


def cutout(img, holes, fill_value=0):
    # Make a copy of the input image since we don't want to modify it directly
    img = img.copy()
    for x1, y1, z1, x2, y2, z2 in holes:
        img[y1:y2, x1:x2, z1:z2] = fill_value
    return img

def clip(img, dtype, maxval):
    return np.clip(img, 0, maxval).astype(dtype)

def clipped(func):
    @wraps(func)
    def wrapped_function(img, *args, **kwargs):
        dtype = img.dtype
        maxval = MAX_VALUES_BY_DTYPE.get(dtype, 1.0)
        return clip(func(img, *args, **kwargs), dtype, maxval)

    return wrapped_function

@clipped
def _brightness_contrast_adjust_non_uint(img, alpha=1, beta=0, beta_by_max=False):
    dtype = img.dtype
    img = img.astype("float32")

    if alpha != 1:
        img *= alpha
    if beta != 0:
        if beta_by_max:
            max_value = MAX_VALUES_BY_DTYPE[dtype]
            img += beta * max_value
        else:
            img += beta * np.mean(img)
    return img

@preserve_shape
def _brightness_contrast_adjust_uint(img, alpha=1, beta=0, beta_by_max=False):
    dtype = np.dtype("uint8")

    max_value = MAX_VALUES_BY_DTYPE[dtype]

    lut = np.arange(0, max_value + 1).astype("float32")

    if alpha != 1:
        lut *= alpha
    if beta != 0:
        if beta_by_max:
            lut += beta * max_value
        else:
            lut += beta * np.mean(img)

    lut = np.clip(lut, 0, max_value).astype(dtype)
    img = cv2.LUT(img, lut)
    return img

def brightness_contrast_adjust(img, alpha=1, beta=0, beta_by_max=False):
    if img.dtype == np.uint8:
        return _brightness_contrast_adjust_uint(img, alpha, beta, beta_by_max)

    return _brightness_contrast_adjust_non_uint(img, alpha, beta, beta_by_max)


def _adjust_brightness_torchvision_uint8(img, factor):
    lut = np.arange(0, 256) * factor
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return cv2.LUT(img, lut)

@preserve_shape
def adjust_brightness_torchvision(img, factor):
    if factor == 0:
        return np.zeros_like(img)
    elif factor == 1:
        return img

    if img.dtype == np.uint8:
        return _adjust_brightness_torchvision_uint8(img, factor)

    return clip(img * factor, img.dtype, MAX_VALUES_BY_DTYPE[img.dtype])

def _adjust_contrast_torchvision_uint8(img, factor, mean):
    lut = np.arange(0, 256) * factor
    lut = lut + mean * (1 - factor)
    lut = clip(lut, img.dtype, 255)

    return cv2.LUT(img, lut)

@preserve_shape
def adjust_contrast_torchvision(img, factor):
    if factor == 1:
        return img

    if is_2Dgrayscale_image(img):
        mean = img.mean()
    else:
        mean = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY).mean()

    if factor == 0:
        return np.full_like(img, int(mean + 0.5), dtype=img.dtype)

    if img.dtype == np.uint8:
        return _adjust_contrast_torchvision_uint8(img, factor, mean)

    return clip(
        img.astype(np.float32) * factor + mean * (1 - factor),
        img.dtype,
        MAX_VALUES_BY_DTYPE[img.dtype],
    )


@preserve_shape
def adjust_saturation_torchvision(img, factor, gamma=0):
    if factor == 1:
        return img

    if is_2Dgrayscale_image(img):
        gray = img
        return gray
    else:
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        gray = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)

    if factor == 0:
        return gray

    result = cv2.addWeighted(img, factor, gray, 1 - factor, gamma=gamma)
    if img.dtype == np.uint8:
        return result

    # OpenCV does not clip values for float dtype
    return clip(result, img.dtype, MAX_VALUES_BY_DTYPE[img.dtype])


def _adjust_hue_torchvision_uint8(img, factor):
    img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)

    lut = np.arange(0, 256, dtype=np.int16)
    lut = np.mod(lut + 180 * factor, 180).astype(np.uint8)
    img[..., 0] = cv2.LUT(img[..., 0], lut)

    return cv2.cvtColor(img, cv2.COLOR_HSV2RGB)


def adjust_hue_torchvision(img, factor):
    if is_2Dgrayscale_image(img):
        return img

    if factor == 0:
        return img

    if img.dtype == np.uint8:
        return _adjust_hue_torchvision_uint8(img, factor)

    img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    img[..., 0] = np.mod(img[..., 0] + factor * 360, 360)
    return cv2.cvtColor(img, cv2.COLOR_HSV2RGB)


def is_3Drgb_image(image):
    return len(image.shape) == 4 and image.shape[-1] == 3

def is_3Dgrayscale_image(image):
    return (len(image.shape) == 3) or (len(image.shape) == 4 and image.shape[-1] == 1)

def is_2Drgb_image(image):
    return len(image.shape) == 3 and image.shape[-1] == 3

def is_2Dgrayscale_image(image):
    return (len(image.shape) == 2) or (len(image.shape) == 3 and image.shape[-1] == 1)

def _maybe_process_in_chunks(process_fn, **kwargs):
    """
    Wrap OpenCV function to enable processing images with more than 4 channels.
    Limitations:
        This wrapper requires image to be the first argument and rest must be sent via named arguments.
    Args:
        process_fn: Transform function (e.g cv2.resize).
        kwargs: Additional parameters.
    Returns:
        numpy.ndarray: Transformed image.
    """
    def get_num_channels(image):
        return image.shape[2] if len(image.shape) == 3 else 1

    @wraps(process_fn)
    def __process_fn(img):
        num_channels = get_num_channels(img)
        if num_channels > 4:
            chunks = []
            for index in range(0, num_channels, 4):
                if num_channels - index == 2:
                    # Many OpenCV functions cannot work with 2-channel images
                    for i in range(2):
                        chunk = img[:, :, index + i : index + i + 1]
                        chunk = process_fn(chunk, **kwargs)
                        chunk = np.expand_dims(chunk, -1)
                        chunks.append(chunk)
                else:
                    chunk = img[:, :, index : index + 4]
                    chunk = process_fn(chunk, **kwargs)
                    chunks.append(chunk)
            img = np.dstack(chunks)
        else:
            img = process_fn(img, **kwargs)
        return img

    return __process_fn

@preserve_shape
def grid_distortion(
    img,
    num_steps=10,
    xsteps=(),
    ysteps=(),
    interpolation=cv2.INTER_LINEAR,
    border_mode=cv2.BORDER_REFLECT_101,
    value=None,
):
    """Perform a grid distortion of an input image.
    Reference:
        http://pythology.blogspot.sg/2014/03/interpolation-on-regular-distorted-grid.html
    """
    height, width = img.shape[:2]

    x_step = width // num_steps
    xx = np.zeros(width, np.float32)
    prev = 0
    for idx in range(num_steps + 1):
        x = idx * x_step
        start = int(x)
        end = int(x) + x_step
        if end > width:
            end = width
            cur = width
        else:
            cur = prev + x_step * xsteps[idx]

        xx[start:end] = np.linspace(prev, cur, end - start)
        prev = cur

    y_step = height // num_steps
    yy = np.zeros(height, np.float32)
    prev = 0
    for idx in range(num_steps + 1):
        y = idx * y_step
        start = int(y)
        end = int(y) + y_step
        if end > height:
            end = height
            cur = height
        else:
            cur = prev + y_step * ysteps[idx]

        yy[start:end] = np.linspace(prev, cur, end - start)
        prev = cur

    map_x, map_y = np.meshgrid(xx, yy)
    map_x = map_x.astype(np.float32)
    map_y = map_y.astype(np.float32)

    remap_fn = _maybe_process_in_chunks(
        cv2.remap,
        map1=map_x,
        map2=map_y,
        interpolation=interpolation,
        borderMode=border_mode,
        borderValue=value,
    )
    return remap_fn(img)

@preserve_shape
def downscale(img, scale, interpolation=cv2.INTER_NEAREST):
    shape_org = img.shape[:3]
    shape_down = tuple([int(x*scale) for x in shape_org])

    need_cast = interpolation != cv2.INTER_NEAREST and img.dtype == np.uint8
    if need_cast:
        img = to_float(img)



    downscaled = skt.resize(img, shape_down, order=interpolation, mode='reflect',
                            cval=0, clip=True, anti_aliasing=False)
    upscaled = skt.resize(downscaled, shape_org, order=interpolation, mode='reflect',
                            cval=0, clip=True, anti_aliasing=False)
    if need_cast:
        upscaled = from_float(np.clip(upscaled, 0, 1), dtype=np.dtype("uint8"))
    return upscaled

def glass_blur(img, sigma, max_delta, iterations, dxy, mode):
    x = cv2.GaussianBlur(np.array(img), sigmaX=sigma, ksize=(0, 0))

    if mode == "fast":

        hs = np.arange(img.shape[0] - max_delta, max_delta, -1)
        ws = np.arange(img.shape[1] - max_delta, max_delta, -1)
        h = np.tile(hs, ws.shape[0])
        w = np.repeat(ws, hs.shape[0])

        for i in range(iterations):
            dy = dxy[:, i, 0]
            dx = dxy[:, i, 1]
            x[h, w], x[h + dy, w + dx] = x[h + dy, w + dx], x[h, w]

    elif mode == "exact":
        for ind, (i, h, w) in enumerate(
            product(
                range(iterations),
                range(img.shape[0] - max_delta, max_delta, -1),
                range(img.shape[1] - max_delta, max_delta, -1),
            )
        ):
            ind = ind if ind < len(dxy) else ind % len(dxy)
            dy = dxy[ind, i, 0]
            dx = dxy[ind, i, 1]
            x[h, w], x[h + dy, w + dx] = x[h + dy, w + dx], x[h, w]

    return cv2.GaussianBlur(x, sigmaX=sigma, ksize=(0, 0))

@preserve_shape
def image_compression(img, quality, image_type):
    if image_type in [".jpeg", ".jpg"]:
        quality_flag = cv2.IMWRITE_JPEG_QUALITY
    elif image_type == ".webp":
        quality_flag = cv2.IMWRITE_WEBP_QUALITY
    else:
        NotImplementedError("Only '.jpg' and '.webp' compression transforms are implemented. ")

    input_dtype = img.dtype
    needs_float = False

    if input_dtype == np.float32:
        warn(
            "Image compression augmentation "
            "is most effective with uint8 inputs, "
            "{} is used as input.".format(input_dtype),
            UserWarning,
        )
        img = from_float(img, dtype=np.dtype("uint8"))
        needs_float = True
    elif input_dtype not in (np.uint8, np.float32):
        raise ValueError("Unexpected dtype {} for image augmentation".format(input_dtype))

    _, encoded_img = cv2.imencode(image_type, img, (int(quality_flag), quality))
    img = cv2.imdecode(encoded_img, cv2.IMREAD_UNCHANGED)

    if needs_float:
        img = to_float(img, max_value=255)
    return img

def from_float(img, dtype, max_value=None):
    if max_value is None:
        try:
            max_value = MAX_VALUES_BY_DTYPE[dtype]
        except KeyError:
            raise RuntimeError(
                "Can't infer the maximum value for dtype {}. You need to specify the maximum value manually by "
                "passing the max_value argument".format(dtype)
            )
    return (img * max_value).astype(dtype)

def to_float(img, max_value=None):
    if max_value is None:
        try:
            max_value = MAX_VALUES_BY_DTYPE[img.dtype]
        except KeyError:
            raise RuntimeError(
                "Can't infer the maximum value for dtype {}. You need to specify the maximum value manually by "
                "passing the max_value argument".format(img.dtype)
            )
    return img.astype("float32") / max_value


================================================
FILE: volumentations/augmentations/transforms.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
import cv2
import random
import numpy as np
import numbers
from enum import Enum, IntEnum
from ..core.transforms_interface import *
from ..augmentations import functional as F
from ..random_utils import *

class Float(DualTransform):
    def apply(self, image):
        return image.astype(np.float32)

class Contiguous(DualTransform):
    def apply(self, image):
        return np.ascontiguousarray(image)


class PadIfNeeded(DualTransform):
    def __init__(self, shape, border_mode='constant', value=0, mask_value=0, always_apply=False, p=1):
        super().__init__(always_apply, p)
        self.shape = shape
        self.border_mode = border_mode
        self.value = value
        self.mask_value = mask_value

    def apply(self, img):
        return F.pad(img, self.shape, self.border_mode, self.value)

    def apply_to_mask(self, mask):
        return F.pad(mask, self.shape, self.border_mode, self.mask_value)
    

class Blur(ImageOnlyTransform):
    """Blur the input image using a random-sized kernel.
    Args:
        blur_limit (int, (int, int)): maximum kernel size for blurring the input image.
            Should be in range [3, inf). Default: (3, 7).
        p (float): probability of applying the transform. Default: 0.5.
    Targets:
        image
    Image types:
        uint8, float32
    """

    def __init__(self, blur_limit=7, always_apply=False, p=0.5):
        super(Blur, self).__init__(always_apply, p)
        self.blur_limit = to_tuple(blur_limit, 3)

    def apply(self, image, ksize=3, **params):
        return F.blur(image, ksize)

    def get_params(self, **data):
        return {"ksize": int(random.choice(np.arange(self.blur_limit[0], self.blur_limit[1] + 1, 2)))}

    def get_transform_init_args_names(self):
        return ("blur_limit",)


class GaussianNoise(Transform):
    def __init__(self, var_limit=(10.0, 50.0), mean=0, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.var_limit = var_limit
        self.mean = mean

    def apply(self, img, gauss=None):
        return F.gaussian_noise(img, gauss=gauss)

    def get_params(self, **data):
        image = data["image"]
        var = uniform(self.var_limit[0], self.var_limit[1])
        sigma = var ** 0.5

        gauss = normal(self.mean, sigma, image.shape).astype("float32")
        return {"gauss": gauss}


class Resize(DualTransform):
    def __init__(self, shape, interpolation=1, resize_type=1, always_apply=False, p=1):
        super().__init__(always_apply, p)
        self.shape = shape
        self.interpolation = interpolation
        self.resize_type = resize_type

    def apply(self, img):
        return F.resize(img, new_shape=self.shape, interpolation=self.interpolation, resize_type=self.resize_type)

    def apply_to_mask(self, mask):
        return F.resize(mask, new_shape=self.shape, interpolation=0, resize_type=self.resize_type)


class RandomScale(DualTransform):
    def __init__(self, scale_limit=[0.9, 1.1], interpolation=1, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.scale_limit = scale_limit
        self.interpolation = interpolation

    def get_params(self, **data):
        return {"scale": random.uniform(self.scale_limit[0], self.scale_limit[1])}

    def apply(self, img, scale):
        return F.rescale(img, scale, interpolation=self.interpolation)

    def apply_to_mask(self, mask, scale):
        return F.rescale(mask, scale, interpolation=0)


class RandomScale2(DualTransform):
    """
    TODO: compare speeds with version 1.
    """
    def __init__(self, scale_limit=[0.9, 1.1], interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.scale_limit = scale_limit
        self.interpolation = interpolation
        self.border_mode = border_mode
        self.value = value
        self.mask_value = mask_value

    def get_params(self, **data):
        return {"scale": random.uniform(self.scale_limit[0], self.scale_limit[1])}

    def apply(self, img, scale):
        return F.rescale_warp(img, scale, interpolation=self.interpolation, border_mode=self.border_mode, value=self.value)

    def apply_to_mask(self, mask, scale):
        return F.rescale_warp(mask, scale, interpolation=0, border_mode=self.border_mode, value=self.mask_value)


class RotatePseudo2D(DualTransform):
    def __init__(self, axes=(0,1), limit=(-90, 90), interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.axes = axes
        self.limit = limit
        self.interpolation = interpolation
        self.border_mode = border_mode
        self.value = value
        self.mask_value = mask_value

    def apply(self, img, angle):
        return F.rotate2d(img, angle, axes=self.axes, reshape=False, interpolation=self.interpolation, border_mode=self.border_mode, value=self.value)

    def apply_to_mask(self, mask, angle):
        return F.rotate2d(mask, angle, axes=self.axes, reshape=False, interpolation=0, border_mode=self.border_mode, value=self.mask_value)

    def get_params(self, **data):
        return {"angle": random.uniform(self.limit[0], self.limit[1])}


class RandomRotate90(DualTransform):
    def __init__(self, axes=None, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.axes = axes

    def apply(self, img, axes, factor):
        return np.rot90(img, factor, axes=axes)

    def get_params(self, **data):
        # Pick predefined axis to flip
        if self.axes is not None : axes = self.axes
        # Pick random combination of axes to flip
        else:
            combinations = [(0,1), (1,0), (0,2), (2,0), (1,2), (2,1)]
            axes = random.choice(combinations)
        # Define params
        return {"factor": random.randint(0, 3),
                "axes": axes}


class Flip(DualTransform):
    def __init__(self, axis=None, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.axis = axis

    def apply(self, img):
        # Pick predefined axis to flip
        if self.axis is not None : axis = self.axis
        # Pick random combination of axes to flip
        else:
            combinations = [(0,), (1,), (2,), (0,1), (0,2), (1,2), (0,1,2)]
            axis = random.choice(combinations)
        # Apply flipping
        return np.flip(img, axis)


class Normalize(Transform):
    def __init__(self, range_norm=False, always_apply=True, p=1.0):
        super().__init__(always_apply, p)
        self.range_norm = range_norm

    def apply(self, img):
        return F.normalize(img, range_norm=self.range_norm)


class Transpose(DualTransform):
    def __init__(self, axes=(1,0,2), always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.axes = axes

    def apply(self, img):
        return np.transpose(img, self.axes)


class CenterCrop(DualTransform):
    def __init__(self, shape, always_apply=False, p=1.0):
        super().__init__(always_apply, p)
        self.shape = shape

    def apply(self, img):
        return F.center_crop(img, self.shape[0], self.shape[1], self.shape[2])


class RandomResizedCrop(DualTransform):
    def __init__(self, shape, scale_limit=(0.8, 1.2), interpolation=1, resize_type=1, always_apply=False, p=1.0):
        super().__init__(always_apply, p)
        self.shape = shape
        self.scale_limit = scale_limit
        self.interpolation = interpolation
        self.resize_type = resize_type

    def apply(self, img, scale=1, scaled_shape=None, h_start=0, w_start=0, d_start=0):
        if scaled_shape is None:
            scaled_shape = self.shape
        img = F.random_crop(img, scaled_shape[0], scaled_shape[1], scaled_shape[2], h_start, w_start, d_start)
        return F.resize(img, new_shape=self.shape, interpolation=self.interpolation, resize_type=self.resize_type)

    def apply_to_mask(self, img, scale=1, scaled_shape=None, h_start=0, w_start=0, d_start=0):
        if scaled_shape is None:
            scaled_shape = self.shape
        img = F.random_crop(img, scaled_shape[0], scaled_shape[1], scaled_shape[2], h_start, w_start, d_start)
        return F.resize(img, new_shape=self.shape, interpolation=0, resize_type=self.resize_type)

    def get_params(self, **data):
        scale = random.uniform(self.scale_limit[0], self.scale_limit[1])
        scaled_shape = [int(scale * i) for i in self.shape]
        return {
            "scale": scale,
            "scaled_shape": scaled_shape,
            "h_start": random.random(),
            "w_start": random.random(),
            "d_start": random.random(),
        }


class RandomCrop(DualTransform):
    def __init__(self, shape, always_apply=False, p=1.0):
        super().__init__(always_apply, p)
        self.shape = shape

    def apply(self, img, h_start=0, w_start=0, d_start=0):
        return F.random_crop(img, self.shape[0], self.shape[1], self.shape[2], h_start, w_start, d_start)

    def get_params(self, **data):
        return {
            "h_start": random.random(),
            "w_start": random.random(),
            "d_start": random.random(),
        }


class CropNonEmptyMaskIfExists(DualTransform):
    def __init__(self, shape, always_apply=False, p=1.0):
        super().__init__(always_apply, p)
        self.height = shape[0]
        self.width = shape[1]
        self.depth = shape[2]

    def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):
        return F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)

    def get_params(self, **data):
        mask = data["mask"] # [H, W, D]
        mask_height, mask_width, mask_depth = mask.shape

        if mask.sum() == 0:
            x_min = random.randint(0, mask_height - self.height)
            y_min = random.randint(0, mask_width - self.width)
            z_min = random.randint(0, mask_depth - self.depth)
        else:
            non_zero = np.argwhere(mask)
            x, y, z = random.choice(non_zero)
            x_min = x - random.randint(0, self.height - 1)
            y_min = y - random.randint(0, self.width - 1)
            z_min = z - random.randint(0, self.depth - 1)
            x_min = np.clip(x_min, 0, mask_height - self.height)
            y_min = np.clip(y_min, 0, mask_width - self.width)
            z_min = np.clip(z_min, 0, mask_depth - self.depth)

        x_max = x_min + self.height
        y_max = y_min + self.width
        z_max = z_min + self.depth

        return {
            "x_min": x_min, "x_max": x_max,
            "y_min": y_min, "y_max": y_max,
            "z_min": z_min, "z_max": z_max,
        }


class ResizedCropNonEmptyMaskIfExists(DualTransform):
    def __init__(self, shape, scale_limit=(0.8, 1.2), interpolation=1, resize_type=1, always_apply=False, p=1.0):
        super().__init__(always_apply, p)
        self.shape = shape
        self.scale_limit = scale_limit
        self.interpolation = interpolation
        self.resize_type = resize_type

    def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):
        img = F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)
        return F.resize(img, self.shape, interpolation=self.interpolation, resize_type=self.resize_type)

    def apply_to_mask(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):
        img = F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)
        return F.resize(img, self.shape, interpolation=0, resize_type=self.resize_type)

    def get_params(self, **data):
        mask = data["mask"] # [H, W, D]
        mask_height, mask_width, mask_depth = mask.shape

        scale = random.uniform(self.scale_limit[0], self.scale_limit[1])
        height, width, depth = [int(scale * i) for i in self.shape]

        if mask.sum() == 0:
            x_min = random.randint(0, mask_height - height)
            y_min = random.randint(0, mask_width - width)
            z_min = random.randint(0, mask_depth - depth)
        else:
            non_zero = np.argwhere(mask)
            x, y, z = random.choice(non_zero)
            x_min = x - random.randint(0, height - 1)
            y_min = y - random.randint(0, width - 1)
            z_min = z - random.randint(0, depth - 1)
            x_min = np.clip(x_min, 0, mask_height - height)
            y_min = np.clip(y_min, 0, mask_width - width)
            z_min = np.clip(z_min, 0, mask_depth - depth)

        x_max = x_min + height
        y_max = y_min + width
        z_max = z_min + depth

        return {
            "x_min": x_min, "x_max": x_max,
            "y_min": y_min, "y_max": y_max,
            "z_min": z_min, "z_max": z_max,
        }


class RandomGamma(ImageOnlyTransform):
    """
    Args:
        gamma_limit (float or (float, float)): If gamma_limit is a single float value,
            the range will be (-gamma_limit, gamma_limit). Default: (80, 120).
        eps: Deprecated.
    Targets:
        image
    Image types:
        uint8, float32
    """

    def __init__(self, gamma_limit=(80, 120), eps=None, always_apply=False, p=0.5):
        super(RandomGamma, self).__init__(always_apply, p)
        self.gamma_limit = to_tuple(gamma_limit)
        self.eps = eps

    def apply(self, img, gamma=1, **params):
        return F.gamma_transform(img, gamma=gamma)

    def get_params(self, **data):
        return {"gamma": random.randint(self.gamma_limit[0], self.gamma_limit[1]) / 100.0}

    def get_transform_init_args_names(self):
        return ("gamma_limit", "eps")


class ElasticTransformPseudo2D(DualTransform):
    def __init__(self, alpha=1000, sigma=50, alpha_affine=1, approximate=False, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.alpha = alpha
        self.sigma = sigma
        self.alpha_affine = alpha_affine
        self.approximate = approximate

    def apply(self, img, random_state=None):
        return F.elastic_transform_pseudo2D(img, self.alpha, self.sigma, self.alpha_affine, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_REFLECT_101, value=None, random_state=random_state, approximate=False)

    def apply_to_mask(self, img, random_state=None):
        return F.elastic_transform_pseudo2D(img, self.alpha, self.sigma, self.alpha_affine, interpolation=cv2.INTER_NEAREST, border_mode=cv2.BORDER_REFLECT_101, value=None, random_state=random_state, approximate=False)

    def get_params(self, **data):
        return {"random_state": random.randint(0, 10000)}


class ElasticTransform(DualTransform):
    def __init__(self, deformation_limits=(0, 0.25), interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.deformation_limits = deformation_limits
        self.interpolation = interpolation
        self.border_mode = border_mode
        self.value = value
        self.mask_value = mask_value

    def apply(self, img, sigmas, alphas, random_state=None):
        return F.elastic_transform(img, sigmas, alphas, interpolation=self.interpolation, random_state=random_state, border_mode=self.border_mode, value=self.value)

    def apply_to_mask(self, img, sigmas, alphas, random_state=None):
        return F.elastic_transform(img, sigmas, alphas, interpolation=0, random_state=random_state, border_mode=self.border_mode, value=self.mask_value)

    def get_params(self, **data):
        image = data["image"] # [H, W, D]
        random_state = random.randint(0, 10000)
        deformation = random.uniform(*self.deformation_limits)
        sigmas = [deformation * x for x in image.shape[:3]]
        alphas = [random.uniform(x/8, x/2) for x in sigmas]
        return {
            "random_state": random_state,
            "sigmas": sigmas,
            "alphas": alphas,
        }


class Rotate(DualTransform):
    def __init__(self, x_limit=(-15,15), y_limit=(-15,15), z_limit=(-15,15), interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):
        super().__init__(always_apply, p)
        self.x_limit = x_limit
        self.y_limit = y_limit
        self.z_limit = z_limit
        self.interpolation = interpolation
        self.border_mode = border_mode
        self.value = value
        self.mask_value = mask_value

    def apply(self, img, x, y, z):
        return F.rotate3d(img, x, y, z, interpolation=self.interpolation, border_mode=self.border_mode, value=self.value)

    def apply_to_mask(self, mask, x, y, z):
        return F.rotate3d(mask, x, y, z, interpolation=0, border_mode=self.border_mode, value=self.mask_value)

    def get_params(self, **data):
        return {
            "x": random.uniform(self.x_limit[0], self.x_limit[1]),
            "y": random.uniform(self.y_limit[0], self.y_limit[1]),
            "z": random.uniform(self.z_limit[0], self.z_limit[1]),
        }


class RemoveEmptyBorder(DualTransform):
    def __init__(self, border_value=0, always_apply=False, p=1.0):
        super().__init__(always_apply, p)
        self.border_value = border_value


    def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):
        return F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)

    def get_params(self, **data):
        image = data["image"] # [H, W, D, C]

        borders = np.where(image != self.border_value)

        return {
            "x_min": np.min(borders[0]), "x_max": np.max(borders[0])+1,
            "y_min": np.min(borders[1]), "y_max": np.max(borders[1])+1,
            "z_min": np.min(borders[2]), "z_max": np.max(borders[2])+1,
        }


class RandomCropFromBorders(DualTransform):
    """Crop bbox from image randomly cut parts from borders without resize at the end

    Args:
        crop_value (float): float value in (0.0, 0.5) range. Default 0.1
        crop_0_min (float): float value in (0.0, 1.0) range. Default 0.1
        crop_0_max (float): float value in (0.0, 1.0) range. Default 0.1
        crop_1_min (float): float value in (0.0, 1.0) range. Default 0.1
        crop_1_max (float): float value in (0.0, 1.0) range. Default 0.1
        crop_2_min (float): float value in (0.0, 1.0) range. Default 0.1
        crop_2_max (float): float value in (0.0, 1.0) range. Default 0.1
        p (float): probability of applying the transform. Default: 1.

    Targets:
        image, mask, bboxes, keypoints

    Image types:
        uint8, float32
    """

    def __init__(
            self,
            crop_value=None,
            crop_0_min=None,
            crop_0_max=None,
            crop_1_min=None,
            crop_1_max=None,
            crop_2_min=None,
            crop_2_max=None,
            always_apply=False,
            p=1.0
    ):
        super(RandomCropFromBorders, self).__init__(always_apply, p)
        self.crop_0_min = 0.1
        self.crop_0_max = 0.1
        self.crop_1_min = 0.1
        self.crop_1_max = 0.1
        self.crop_2_min = 0.1
        self.crop_2_max = 0.1
        if crop_value is not None:
            self.crop_0_min = crop_value
            self.crop_0_max = crop_value
            self.crop_1_min = crop_value
            self.crop_1_max = crop_value
            self.crop_2_min = crop_value
            self.crop_2_max = crop_value
        if crop_0_min is not None:
            self.crop_0_min = crop_0_min
        if crop_0_max is not None:
            self.crop_0_max = crop_0_max
        if crop_1_min is not None:
            self.crop_1_min = crop_1_min
        if crop_1_max is not None:
            self.crop_1_max = crop_1_max
        if crop_2_min is not None:
            self.crop_2_min = crop_2_min
        if crop_2_max is not None:
            self.crop_2_max = crop_2_max

    def get_params(self, **data):
        img = data["image"]
        sh0_min = random.randint(0, int(self.crop_0_min * img.shape[0]))
        sh0_max = random.randint(max(sh0_min + 1, int((1 - self.crop_0_max) * img.shape[0])), img.shape[0])

        sh1_min = random.randint(0, int(self.crop_1_min * img.shape[1]))
        sh1_max = random.randint(max(sh1_min + 1, int((1 - self.crop_1_max) * img.shape[1])), img.shape[1])

        sh2_min = random.randint(0, int(self.crop_2_min * img.shape[2]))
        sh2_max = random.randint(max(sh2_min + 1, int((1 - self.crop_2_max) * img.shape[2])), img.shape[2])

        return {
            "sh0_min": sh0_min, "sh0_max": sh0_max,
            "sh1_min": sh1_min, "sh1_max": sh1_max,
            "sh2_min": sh2_min, "sh2_max": sh2_max
        }

    def apply(self, img, sh0_min=0, sh0_max=0, sh1_min=0, sh1_max=0, sh2_min=0, sh2_max=0, **params):
        return F.clamping_crop(img, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_max)

    def apply_to_mask(self, mask, sh0_min=0, sh0_max=0, sh1_min=0, sh1_max=0, sh2_min=0, sh2_max=0, **params):
        return F.clamping_crop(mask, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_max)


class GridDropout(DualTransform):
    """GridDropout, drops out rectangular regions of an image and the corresponding mask in a grid fashion.
    Args:
        ratio (float): the ratio of the mask holes to the unit_size (same for horizontal and vertical directions).
            Must be between 0 and 1. Default: 0.5.
        unit_size_min (int): minimum size of the grid unit. Must be between 2 and the image shorter edge.
            If 'None', holes_number_x and holes_number_y are used to setup the grid. Default: `None`.
        unit_size_max (int): maximum size of the grid unit. Must be between 2 and the image shorter edge.
            If 'None', holes_number_x and holes_number_y are used to setup the grid. Default: `None`.
        holes_number_x (int): the number of grid units in x direction. Must be between 1 and image width//2.
            If 'None', grid unit width is set as image_width//10. Default: `None`.
        holes_number_y (int): the number of grid units in y direction. Must be between 1 and image height//2.
            If `None`, grid unit height is set equal to the grid unit width or image height, whatever is smaller.
        holes_number_z (int): the number of grid units in z direction. Must be between 1 and image depth//2.
            If `None`, grid unit depth is set equal to the grid unit width or image height, whatever is smaller.
        shift_x (int): offsets of the grid start in x direction from (0,0) coordinate.
            Clipped between 0 and grid unit_width - hole_width. Default: 0.
        shift_y (int): offsets of the grid start in y direction from (0,0) coordinate.
            Clipped between 0 and grid unit height - hole_height. Default: 0.
        shift_z (int): offsets of the grid start in z direction from (0,0) coordinate.
            Clipped between 0 and grid unit depth - hole_depth. Default: 0.
        random_offset (boolean): weather to offset the grid randomly between 0 and grid unit size - hole size
            If 'True', entered shift_x, shift_y, shift_z are ignored and set randomly. Default: `False`.
        fill_value (int): value for the dropped pixels. Default = 0
        mask_fill_value (int): value for the dropped pixels in mask.
            If `None`, tranformation is not applied to the mask. Default: `None`.
    Targets:
        image, mask
    Image types:
        uint8, float32
    References:
        https://arxiv.org/abs/2001.04086
    """

    def __init__(
        self,
        ratio: float = 0.5,
        unit_size_min: int = None,
        unit_size_max: int = None,
        holes_number_x: int = None,
        holes_number_y: int = None,
        holes_number_z: int = None,
        shift_x: int = 0,
        shift_y: int = 0,
        shift_z: int = 0,
        random_offset: bool = False,
        fill_value: int = 0,
        mask_fill_value: int = None,
        always_apply: bool = False,
        p: float = 0.5,
    ):
        super(GridDropout, self).__init__(always_apply, p)
        self.ratio = ratio
        self.unit_size_min = unit_size_min
        self.unit_size_max = unit_size_max
        self.holes_number_x = holes_number_x
        self.holes_number_y = holes_number_y
        self.holes_number_z = holes_number_z
        self.shift_x = shift_x
        self.shift_y = shift_y
        self.shift_z = shift_z
        self.random_offset = random_offset
        self.fill_value = fill_value
        self.mask_fill_value = mask_fill_value
        if not 0 < self.ratio <= 1:
            raise ValueError("ratio must be between 0 and 1.")

    def apply(self, image, holes=(), **params):
        return F.cutout(image, holes, self.fill_value)

    def apply_to_mask(self, image, holes=(), **params):
        if self.mask_fill_value is None:
            return image

        return F.cutout(image, holes, self.mask_fill_value)

    def get_params(self, **data):
        img = data["image"]
        height, width, depth = img.shape[:3]
        # set grid using unit size limits
        if self.unit_size_min and self.unit_size_max:
            if not 2 <= self.unit_size_min <= self.unit_size_max:
                raise ValueError("Max unit size should be >= min size, both at least 2 pixels.")
            if self.unit_size_max > min(height, width):
                raise ValueError("Grid size limits must be within the shortest image edge.")
            unit_width = random.randint(self.unit_size_min, self.unit_size_max + 1)
            unit_height = unit_width
            unit_depth = unit_width
        else:
            # set grid using holes numbers
            if self.holes_number_x is None:
                unit_width = max(2, width // 10)
            else:
                if not 1 <= self.holes_number_x <= width // 2:
                    raise ValueError("The hole_number_x must be between 1 and image width//2.")
                unit_width = width // self.holes_number_x
            if self.holes_number_y is None:
                unit_height = max(min(unit_width, height), 2)
            else:
                if not 1 <= self.holes_number_y <= height // 2:
                    raise ValueError("The hole_number_y must be between 1 and image height//2.")
                unit_height = height // self.holes_number_y
            if self.holes_number_z is None:
                unit_depth = max(min(unit_height, depth), 2)
            else:
                if not 1 <= self.holes_number_z <= depth // 2:
                    raise ValueError("The hole_number_z must be between 1 and image depth//2.")
                unit_depth = depth // self.holes_number_z

        hole_width = int(unit_width * self.ratio)
        hole_height = int(unit_height * self.ratio)
        hole_depth = int(unit_depth * self.ratio)
        # min 1 pixel and max unit length - 1
        hole_width = min(max(hole_width, 1), unit_width - 1)
        hole_height = min(max(hole_height, 1), unit_height - 1)
        hole_depth = min(max(hole_depth, 1), unit_depth - 1)
        # set offset of the grid
        if self.shift_x is None:
            shift_x = 0
        else:
            shift_x = min(max(0, self.shift_x), unit_width - hole_width)
        if self.shift_y is None:
            shift_y = 0
        else:
            shift_y = min(max(0, self.shift_y), unit_height - hole_height)
        if self.shift_z is None:
            shift_z = 0
        else:
            shift_z = min(max(0, self.shift_z), unit_depth - hole_depth)
        if self.random_offset:
            shift_x = random.randint(0, unit_width - hole_width)
            shift_y = random.randint(0, unit_height - hole_height)
            shift_z = random.randint(0, unit_depth - hole_depth)
        holes = []
        for i in range(width // unit_width + 1):
            for j in range(height // unit_height + 1):
                for k in range(depth // unit_depth + 1):
                    x1 = min(shift_x + unit_width * i, width)
                    y1 = min(shift_y + unit_height * j, height)
                    z1 = min(shift_z + unit_depth * k, depth)
                    x2 = min(x1 + hole_width, width)
                    y2 = min(y1 + hole_height, height)
                    z2 = min(z1 + hole_depth, depth)
                    holes.append((x1, y1, z1, x2, y2, z2))

        return {"holes": holes}

    def get_transform_init_args_names(self):
        return (
            "ratio",
            "unit_size_min",
            "unit_size_max",
            "holes_number_x",
            "holes_number_y",
            "shift_x",
            "shift_y",
            "mask_fill_value",
            "random_offset",
        )


class RandomDropPlane(DualTransform):
    """Randomly drop some planes in random axis

    Args:
        plane_drop_prob (float): float value in (0.0, 1.0) range. Default: 0.1
        axes (tuple). Default: 0
        p (float): probability of applying the transform. Default: 1.

    Targets:
        image, mask

    Image types:
        uint8, float32
    """

    def __init__(
            self,
            plane_drop_prob=0.1,
            axes=(0,),
            always_apply=False,
            p=1.0
    ):
        super(RandomDropPlane, self).__init__(always_apply, p)
        self.plane_drop_prob = plane_drop_prob
        self.axes = axes

    def get_params(self, **data):
        img = data["image"]
        axis = random.choice(self.axes)
        r = img.shape[axis]
        indexes = []
        for i in range(r):
            if random.uniform(0, 1) > self.plane_drop_prob:
                indexes.append(i)
        if len(indexes) == 0:
            indexes.append(0)

        return {
            "indexes": indexes, "axis": axis,
        }

    def apply(self, img, indexes=(), axis=0, **params):
        return np.take(img, indexes, axis=axis)

    def apply_to_mask(self, mask, indexes=(), axis=0, **params):
        return np.take(mask, indexes, axis=axis)

class RandomBrightnessContrast(ImageOnlyTransform):
    """Randomly change brightness and contrast of the input image.
    Args:
        brightness_limit ((float, float) or float): factor range for changing brightness.
            If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).
        contrast_limit ((float, float) or float): factor range for changing contrast.
            If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).
        brightness_by_max (Boolean): If True adjust contrast by image dtype maximum,
            else adjust contrast by image mean.
        p (float): probability of applying the transform. Default: 0.5.
    Targets:
        image
    Image types:
        uint8, float32
    """

    def __init__(
        self,
        brightness_limit=0.2,
        contrast_limit=0.2,
        brightness_by_max=True,
        always_apply=False,
        p=0.5,
    ):
        super(RandomBrightnessContrast, self).__init__(always_apply, p)
        self.brightness_limit = to_tuple(brightness_limit)
        self.contrast_limit = to_tuple(contrast_limit)
        self.brightness_by_max = brightness_by_max

    def apply(self, img, alpha=1.0, beta=0.0, **params):
        return F.brightness_contrast_adjust(img, alpha, beta, self.brightness_by_max)

    def get_params(self, **data):
        return {
            "alpha": 1.0 + random.uniform(self.contrast_limit[0], self.contrast_limit[1]),
            "beta": 0.0 + random.uniform(self.brightness_limit[0], self.brightness_limit[1]),
        }

    def get_transform_init_args_names(self):
        return ("brightness_limit", "contrast_limit", "brightness_by_max")


class ColorJitter(ImageOnlyTransform):
    """Randomly changes the brightness, contrast, and saturation of an image. Compared to ColorJitter from torchvision,
    this transform gives a little bit different results because Pillow (used in torchvision) and OpenCV (used in
    Albumentations) transform an image to HSV format by different formulas. Another difference - Pillow uses uint8
    overflow, but we use value saturation.
    Args:
        brightness (float or tuple of float (min, max)): How much to jitter brightness.
            brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness]
            or the given [min, max]. Should be non negative numbers.
        contrast (float or tuple of float (min, max)): How much to jitter contrast.
            contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast]
            or the given [min, max]. Should be non negative numbers.
        saturation (float or tuple of float (min, max)): How much to jitter saturation.
            saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation]
            or the given [min, max]. Should be non negative numbers.
        hue (float or tuple of float (min, max)): How much to jitter hue.
            hue_factor is chosen uniformly from [-hue, hue] or the given [min, max].
            Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.
    """

    def __init__(
        self,
        brightness=0.2,
        contrast=0.2,
        saturation=0.2,
        hue=0.2,
        always_apply=False,
        p=0.5,
    ):
        super(ColorJitter, self).__init__(always_apply=always_apply, p=p)

        self.brightness = self.__check_values(brightness, "brightness")
        self.contrast = self.__check_values(contrast, "contrast")
        self.saturation = self.__check_values(saturation, "saturation")
        self.hue = self.__check_values(hue, "hue", offset=0, bounds=[-0.5, 0.5], clip=False)

    @staticmethod
    def __check_values(value, name, offset=1, bounds=(0, float("inf")), clip=True):
        if isinstance(value, numbers.Number):
            if value < 0:
                raise ValueError("If {} is a single number, it must be non negative.".format(name))
            value = [offset - value, offset + value]
            if clip:
                value[0] = max(value[0], 0)
        elif isinstance(value, (tuple, list)) and len(value) == 2:
            if not bounds[0] <= value[0] <= value[1] <= bounds[1]:
                raise ValueError("{} values should be between {}".format(name, bounds))
        else:
            raise TypeError("{} should be a single number or a list/tuple with length 2.".format(name))

        return value

    def get_params(self, **data):
        brightness = random.uniform(self.brightness[0], self.brightness[1])
        contrast = random.uniform(self.contrast[0], self.contrast[1])
        saturation = random.uniform(self.saturation[0], self.saturation[1])
        hue = random.uniform(self.hue[0], self.hue[1])

        transforms = [
            lambda x: F.adjust_brightness_torchvision(x, brightness),
            lambda x: F.adjust_contrast_torchvision(x, contrast),
            lambda x: F.adjust_saturation_torchvision(x, saturation),
            lambda x: F.adjust_hue_torchvision(x, hue),
        ]
        random.shuffle(transforms)

        return {"transforms": transforms}

    def apply(self, img, transforms=(), **params):
        if not F.is_3Drgb_image(img) and not F.is_3Dgrayscale_image(img):
            raise TypeError("ColorJitter transformation expects 1-channel or 3-channel images.")

        for transform in transforms:
            img_transformed = np.zeros(img.shape, dtype=np.float32)
            for slice in range(img.shape[0]):
                img_transformed[slice,:,:] = transform(img[slice,:,:].astype(np.float32))
        return img_transformed

    def get_transform_init_args_names(self):
        return ("brightness", "contrast", "saturation", "hue")


class GridDistortion(DualTransform):
    """
    Args:
        num_steps (int): count of grid cells on each side.
        distort_limit (float, (float, float)): If distort_limit is a single float, the range
            will be (-distort_limit, distort_limit). Default: (-0.03, 0.03).
        interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
            cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4.
            Default: cv2.INTER_LINEAR.
        border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
            cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101.
            Default: cv2.BORDER_REFLECT_101
        value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT.
        mask_value (int, float,
                    list of ints,
                    list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
    Targets:
        image, mask
    Image types:
        uint8, float32
    """

    def __init__(
        self,
        num_steps=5,
        distort_limit=0.3,
        interpolation=cv2.INTER_LINEAR,
        border_mode=cv2.BORDER_REFLECT_101,
        value=None,
        mask_value=None,
        always_apply=False,
        p=0.5,
    ):
        super(GridDistortion, self).__init__(always_apply, p)
        self.num_steps = num_steps
        self.distort_limit = to_tuple(distort_limit)
        self.interpolation = interpolation
        self.border_mode = border_mode
        self.value = value
        self.mask_value = mask_value

    def apply(self, img, stepsx=(), stepsy=(), interpolation=cv2.INTER_LINEAR, **params):
        img_transformed = np.zeros(img.shape, dtype=img.dtype)
        for slice in range(img.shape[0]):
            img_transformed[slice,:,:] = F.grid_distortion(img[slice,:,:],
                                                           self.num_steps,
                                                           stepsx,
                                                           stepsy,
                                                           interpolation,
                                                           self.border_mode,
                                                           self.value)
        return img_transformed

    def apply_to_mask(self, img, stepsx=(), stepsy=(), **params):
        img_transformed = np.zeros(img.shape, dtype=img.dtype)
        for slice in range(img.shape[0]):
            img_transformed[slice,:,:] = F.grid_distortion(img[slice,:,:],
                                                           self.num_steps,
                                                           stepsx,
                                                           stepsy,
                                                           cv2.INTER_NEAREST,
                                                           self.border_mode,
                                                           self.mask_value)
        return img_transformed

    def get_params(self, **data):
        stepsx = [1 + random.uniform(self.distort_limit[0], self.distort_limit[1]) for i in range(self.num_steps + 1)]
        stepsy = [1 + random.uniform(self.distort_limit[0], self.distort_limit[1]) for i in range(self.num_steps + 1)]
        return {"stepsx": stepsx, "stepsy": stepsy}

    def get_transform_init_args_names(self):
        return (
            "num_steps",
            "distort_limit",
            "interpolation",
            "border_mode",
            "value",
            "mask_value",
        )

class Downscale(ImageOnlyTransform):
    """Decreases image quality by downscaling and upscaling back.
    Args:
        scale_min (float): lower bound on the image scale. Should be < 1.
        scale_max (float):  upper bound on the image scale. Should be < 1.
        interpolation: cv2 interpolation method. cv2.INTER_NEAREST by default
    Targets:
        image
    Image types:
        uint8, float32
    """

    def __init__(
        self,
        scale_min=0.25,
        scale_max=0.25,
        interpolation=cv2.INTER_NEAREST,
        always_apply=False,
        p=0.5,
    ):
        super(Downscale, self).__init__(always_apply, p)
        if scale_min > scale_max:
            raise ValueError("Expected scale_min be less or equal scale_max, got {} {}".format(scale_min, scale_max))
        if scale_max >= 1:
            raise ValueError("Expected scale_max to be less than 1, got {}".format(scale_max))
        self.scale_min = scale_min
        self.scale_max = scale_max
        self.interpolation = interpolation

    def apply(self, image, scale, interpolation, **params):
        return F.downscale(image, scale=scale, interpolation=interpolation)

    def get_params(self, **data):
        return {
            "scale": random.uniform(self.scale_min, self.scale_max),
            "interpolation": self.interpolation,
        }

    def get_transform_init_args_names(self):
        return "scale_min", "scale_max", "interpolation"

class GlassBlur(Blur):
    """Apply glass noise to the input image.
    Args:
        sigma (float): standard deviation for Gaussian kernel.
        max_delta (int): max distance between pixels which are swapped.
        iterations (int): number of repeats.
            Should be in range [1, inf). Default: (2).
        mode (str): mode of computation: fast or exact. Default: "fast".
        p (float): probability of applying the transform. Default: 0.5.
    Targets:
        image
    Image types:
        uint8, float32
    Reference:
    |  https://arxiv.org/abs/1903.12261
    |  https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py
    """

    def __init__(
        self,
        sigma=0.7,
        max_delta=4,
        iterations=2,
        always_apply=False,
        mode="fast",
        p=0.5,
    ):
        super(GlassBlur, self).__init__(always_apply=always_apply, p=p)
        if iterations < 1:
            raise ValueError("Iterations should be more or equal to 1, but we got {}".format(iterations))

        if mode not in ["fast", "exact"]:
            raise ValueError("Mode should be 'fast' or 'exact', but we got {}".format(iterations))

        self.sigma = sigma
        self.max_delta = max_delta
        self.iterations = iterations
        self.mode = mode

    def apply(self, img, dxy, **data):
        img_blurred = np.zeros(img.shape, dtype=img.dtype)
        for slice in range(img.shape[0]):
            img_processed = F.glass_blur(img[slice,:,:],
                                         self.sigma,
                                         self.max_delta,
                                         self.iterations,
                                         dxy,
                                         self.mode)
            if len(img.shape) == 4 and img.shape[-1] == 1:
                img_blurred[slice,:,:] = np.reshape(img_processed,
                                                        img_processed.shape+(1,))
            else : img_blurred[slice,:,:] = img_processed
        return img_blurred

    def get_params(self, **data):
        img = data["image"]

        # generate array containing all necessary values for transformations
        width_pixels = img.shape[1] - self.max_delta * 2
        height_pixels = img.shape[2] - self.max_delta * 2
        total_pixels = width_pixels * height_pixels
        dxy = randint(-self.max_delta, self.max_delta, size=(total_pixels, self.iterations, 2))

        return {"dxy": dxy}

    def get_transform_init_args_names(self):
        return ("sigma", "max_delta", "iterations")

    @property
    def targets_as_params(self):
        return ["image"]

class ImageCompression(ImageOnlyTransform):
    """Decrease Jpeg, WebP compression of an image.
    Args:
        quality_lower (float): lower bound on the image quality.
                               Should be in [0, 100] range for jpeg and [1, 100] for webp.
        quality_upper (float): upper bound on the image quality.
                               Should be in [0, 100] range for jpeg and [1, 100] for webp.
        compression_type (ImageCompressionType): should be ImageCompressionType.JPEG or ImageCompressionType.WEBP.
            Default: ImageCompressionType.JPEG
    Targets:
        image
    Image types:
        uint8, float32
    """

    class ImageCompressionType(IntEnum):
        JPEG = 0
        WEBP = 1

    def __init__(
        self,
        quality_lower=99,
        quality_upper=100,
        compression_type=ImageCompressionType.JPEG,
        always_apply=False,
        p=0.5,
    ):
        super(ImageCompression, self).__init__(always_apply, p)

        self.compression_type = ImageCompression.ImageCompressionType(compression_type)
        low_thresh_quality_assert = 0

        if self.compression_type == ImageCompression.ImageCompressionType.WEBP:
            low_thresh_quality_assert = 1

        if not low_thresh_quality_assert <= quality_lower <= 100:
            raise ValueError("Invalid quality_lower. Got: {}".format(quality_lower))
        if not low_thresh_quality_assert <= quality_upper <= 100:
            raise ValueError("Invalid quality_upper. Got: {}".format(quality_upper))

        self.quality_lower = quality_lower
        self.quality_upper = quality_upper

    def apply(self, img, quality=100, image_type=".jpg", **params):
        if not F.is_3Drgb_image(img) and not F.is_3Dgrayscale_image(img):
            raise TypeError("ImageCompression transformation expects 1, 3 or 4 channel images.")


        img_transformed = np.zeros(img.shape, dtype=img.dtype)
        for slice in range(img.shape[0]):
            img_transformed[slice,:,:] = F.image_compression(img[slice,:,:],
                                                             quality,
                                                             image_type)
        return img_transformed

    def get_params(self, **data):
        image_type = ".jpg"

        if self.compression_type == ImageCompression.ImageCompressionType.WEBP:
            image_type = ".webp"

        return {
            "quality": random.randint(self.quality_lower, self.quality_upper),
            "image_type": image_type,
        }

    def get_transform_init_args(self):
        return {
            "quality_lower": self.quality_lower,
            "quality_upper": self.quality_upper,
            "compression_type": self.compression_type.value,
        }


================================================
FILE: volumentations/core/__init__.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#


================================================
FILE: volumentations/core/composition.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
import random
from ..augmentations import transforms as T


class Compose:
    def __init__(self, transforms, p=1.0, targets=[['image'],['mask']]):
        assert 0 <= p <= 1
        self.transforms = [T.Float(always_apply=True)] + transforms + [T.Contiguous(always_apply=True)]
        self.p = p
        self.targets = targets

    def get_always_apply_transforms(self):
        res = []
        for tr in self.transforms:
            if tr.always_apply:
                res.append(tr)
        return res

    def __call__(self, force_apply=False, **data):
        need_to_run = force_apply or random.random() < self.p
        transforms = self.transforms if need_to_run else self.get_always_apply_transforms()

        for tr in transforms:
            data = tr(force_apply, self.targets, **data)

        return data


class ComposeChoice:
    def __init__(self, transforms, p=1.0, n=1, targets=[['image'], ['mask']]):
        assert 0 <= p <= 1
        self.transforms = transforms
        self.p = p
        self.n = n
        self.targets = targets

    def __call__(self, **data):
        if random.random() > self.p:
            return data

        transforms = random.sample(self.transforms, self.n)
        transforms = [T.Float()] + transforms + [T.Contiguous()]

        for tr in transforms:
            data = tr(True, self.targets, **data)

        return data


================================================
FILE: volumentations/core/transforms_interface.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
import random
import numpy as np
from typing import Sequence, Tuple

# DEBUG only flag
VERBOSE = False


def to_tuple(param, low=None, bias=None):
    """Convert input argument to min-max tuple
    Args:
        param (scalar, tuple or list of 2+ elements): Input value.
            If value is scalar, return value would be (offset - value, offset + value).
            If value is tuple, return value would be value + offset (broadcasted).
        low:  Second element of tuple can be passed as optional argument
        bias: An offset factor added to each element
    """
    if low is not None and bias is not None:
        raise ValueError("Arguments low and bias are mutually exclusive")

    if param is None:
        return param

    if isinstance(param, (int, float)):
        if low is None:
            param = -param, +param
        else:
            param = (low, param) if low < param else (param, low)
    elif isinstance(param, Sequence):
        param = tuple(param)
    else:
        raise ValueError("Argument param must be either scalar (int, float) or tuple")

    if bias is not None:
        return tuple(bias + x for x in param)

    return tuple(param)


class Transform:
    def __init__(self, always_apply=False, p=0.5):
        assert 0 <= p <= 1
        self.p = p
        self.always_apply = always_apply

    def __call__(self, force_apply, targets, **data):
        if force_apply or self.always_apply or random.random() < self.p:
            params = self.get_params(**data)

            if VERBOSE:
                print('RUN', self.__class__.__name__, params)

            for k, v in data.items():
                if k in targets[0]:
                    data[k] = self.apply(v, **params)
                else:
                    data[k] = v

        return data

    def get_params(self, **data):
        """
        shared parameters for one apply. (usually random values)
        """
        return {}

    def apply(self, volume, **params):
        raise NotImplementedError


class DualTransform(Transform):
    def __call__(self, force_apply, targets, **data):
        if force_apply or self.always_apply or random.random() < self.p:
            params = self.get_params(**data)

            if VERBOSE:
                print('RUN', self.__class__.__name__, params)

            for k, v in data.items():
                if k in targets[0]:
                    data[k] = self.apply(v, **params)
                elif k in targets[1]:
                    data[k] = self.apply_to_mask(v, **params)
                else:
                    data[k] = v

        return data

    def apply_to_mask(self, mask, **params):
        return self.apply(mask, **params)


class ImageOnlyTransform(Transform):
    """Transform applied to image only."""

    @property
    def targets(self):
        return {"image": self.apply}


================================================
FILE: volumentations/random_utils.py
================================================
#=================================================================================#
#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #
#  Copyright:    albumentations:    : https://github.com/albumentations-team      #
#                Pavel Iakubovskii  : https://github.com/qubvel                   #
#                ZFTurbo            : https://github.com/ZFTurbo                  #
#                ashawkey           : https://github.com/ashawkey                 #
#                Dominik Müller     : https://github.com/muellerdo                #
#                                                                                 #
#  Volumentations History:                                                        #
#       - Original:                 https://github.com/albumentations-team/album  #
#                                   entations                                     #
#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #
#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #
#       - Enhancements:             https://github.com/qubvel/volumentations      #
#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #
#                                                                                 #
#  MIT License.                                                                   #
#                                                                                 #
#  Permission is hereby granted, free of charge, to any person obtaining a copy   #
#  of this software and associated documentation files (the "Software"), to deal  #
#  in the Software without restriction, including without limitation the rights   #
#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #
#  copies of the Software, and to permit persons to whom the Software is          #
#  furnished to do so, subject to the following conditions:                       #
#                                                                                 #
#  The above copyright notice and this permission notice shall be included in all #
#  copies or substantial portions of the Software.                                #
#                                                                                 #
#  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #
#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #
#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #
#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #
#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #
#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #
#  SOFTWARE.                                                                      #
#=================================================================================#
import numpy as np
from typing import Optional, Sequence, Union, Type, Any
import random as py_random

NumType = Union[int, float, np.ndarray]
IntNumType = Union[int, np.ndarray]
Size = Union[int, Sequence[int]]

def get_random_state() -> np.random.RandomState:
    return np.random.RandomState(py_random.randint(0, (1 << 32) - 1))

def randint(
    low: IntNumType,
    high: Optional[IntNumType] = None,
    size: Optional[Size] = None,
    dtype: Type = np.int32,
    random_state: Optional[np.random.RandomState] = None,
) -> Any:
    if random_state is None:
        random_state = get_random_state()
    return random_state.randint(low, high, size, dtype)

def uniform(
    low: NumType = 0.0,
    high: NumType = 1.0,
    size: Optional[Size] = None,
    random_state: Optional[np.random.RandomState] = None,
) -> Any:
    if random_state is None:
        random_state = get_random_state()
    return random_state.uniform(low, high, size)

def normal(
    loc: NumType = 0.0,
    scale: NumType = 1.0,
    size: Optional[Size] = None,
    random_state: Optional[np.random.RandomState] = None,
) -> Any:
    if random_state is None:
        random_state = get_random_state()
    return random_state.normal(loc, scale, size)
Download .txt
gitextract_swy1yiy_/

├── .github/
│   └── workflows/
│       └── python-package.yml
├── .gitignore
├── EXAMPLES.md
├── LICENSE
├── README.md
├── images/
│   └── examples.py
├── pyproject.toml
├── setup.py
├── test/
│   └── test_basic.py
├── tst_volumentations_speed.py
├── tst_volumentations_type_1.py
├── tst_volumentations_type_2.py
└── volumentations/
    ├── __init__.py
    ├── __version__.py
    ├── augmentations/
    │   ├── __init__.py
    │   ├── functional.py
    │   └── transforms.py
    ├── core/
    │   ├── __init__.py
    │   ├── composition.py
    │   └── transforms_interface.py
    └── random_utils.py
Download .txt
SYMBOL INDEX (228 symbols across 9 files)

FILE: test/test_basic.py
  function cube (line 53) | def cube():
  function test_augmentations (line 60) | def test_augmentations(aug_class, cube):

FILE: tst_volumentations_speed.py
  function tst_volumentations_speed (line 9) | def tst_volumentations_speed():

FILE: tst_volumentations_type_1.py
  function read_video (line 17) | def read_video(f):
  function get_augmentation_v1 (line 37) | def get_augmentation_v1(patch_size):
  function get_augmentation_v2 (line 53) | def get_augmentation_v2(patch_size):
  function create_video (line 66) | def create_video(image_list, out_file, fps):
  function tst_volumentations (line 81) | def tst_volumentations():

FILE: tst_volumentations_type_2.py
  function grayscale_normalization (line 53) | def grayscale_normalization(image):
  function visualize_evaluation (line 64) | def visualize_evaluation(index, volume, viz_path="test_volumentations"):
  function build (line 102) | def build(aug_flip, aug_rotate, aug_brightness, aug_contrast, aug_satura...

FILE: volumentations/augmentations/functional.py
  function preserve_shape (line 78) | def preserve_shape(func):
  function rotate2d (line 92) | def rotate2d(img, angle, axes=(0,1), reshape=False, interpolation=1, bor...
  function shift (line 96) | def shift(img, shift, interpolation=1, border_mode='reflect', value=0):
  function crop (line 100) | def crop(img, x1, y1, z1, x2, y2, z2):
  function get_center_crop_coords (line 113) | def get_center_crop_coords(height, width, depth, crop_height, crop_width...
  function center_crop (line 123) | def center_crop(img, crop_height, crop_width, crop_depth):
  function get_random_crop_coords (line 132) | def get_random_crop_coords(height, width, depth, crop_height, crop_width...
  function random_crop (line 142) | def random_crop(img, crop_height, crop_width, crop_depth, h_start, w_sta...
  function normalize (line 153) | def normalize(img, range_norm=True):
  function pad (line 169) | def pad(image, new_shape, border_mode="reflect", value=0):
  function gaussian_noise (line 195) | def gaussian_noise(img, gauss):
  function resize (line 200) | def resize(img, new_shape, interpolation=1, resize_type=0):
  function rescale (line 235) | def rescale(img, scale, interpolation=1):
  function gamma_transform (line 249) | def gamma_transform(img, gamma):
  function elastic_transform_pseudo2D (line 259) | def elastic_transform_pseudo2D(img, alpha, sigma, alpha_affine, interpol...
  function elastic_transform (line 333) | def elastic_transform(img, sigmas, alphas, interpolation=1, border_mode=...
  function generate_coords (line 350) | def generate_coords(shape):
  function elastic_deform_coords (line 361) | def elastic_deform_coords(coords, sigmas, alphas, random_state):
  function recenter_coords (line 379) | def recenter_coords(coords):
  function rotate3d (line 389) | def rotate3d(img, x, y, z, interpolation=1, border_mode='reflect', value...
  function rotate_coords (line 408) | def rotate_coords(coords, angle_x, angle_y, angle_z):
  function rot_x (line 417) | def rot_x(angle):
  function rot_y (line 424) | def rot_y(angle):
  function rot_z (line 431) | def rot_z(angle):
  function rescale_warp (line 438) | def rescale_warp(img, scale, interpolation=1, border_mode='reflect', val...
  function scale_coords (line 455) | def scale_coords(coords, scale):
  function clamping_crop (line 465) | def clamping_crop(img, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_...
  function cutout (line 482) | def cutout(img, holes, fill_value=0):
  function clip (line 489) | def clip(img, dtype, maxval):
  function clipped (line 492) | def clipped(func):
  function _brightness_contrast_adjust_non_uint (line 502) | def _brightness_contrast_adjust_non_uint(img, alpha=1, beta=0, beta_by_m...
  function _brightness_contrast_adjust_uint (line 517) | def _brightness_contrast_adjust_uint(img, alpha=1, beta=0, beta_by_max=F...
  function brightness_contrast_adjust (line 536) | def brightness_contrast_adjust(img, alpha=1, beta=0, beta_by_max=False):
  function _adjust_brightness_torchvision_uint8 (line 543) | def _adjust_brightness_torchvision_uint8(img, factor):
  function adjust_brightness_torchvision (line 549) | def adjust_brightness_torchvision(img, factor):
  function _adjust_contrast_torchvision_uint8 (line 560) | def _adjust_contrast_torchvision_uint8(img, factor, mean):
  function adjust_contrast_torchvision (line 568) | def adjust_contrast_torchvision(img, factor):
  function adjust_saturation_torchvision (line 591) | def adjust_saturation_torchvision(img, factor, gamma=0):
  function _adjust_hue_torchvision_uint8 (line 613) | def _adjust_hue_torchvision_uint8(img, factor):
  function adjust_hue_torchvision (line 623) | def adjust_hue_torchvision(img, factor):
  function is_3Drgb_image (line 638) | def is_3Drgb_image(image):
  function is_3Dgrayscale_image (line 641) | def is_3Dgrayscale_image(image):
  function is_2Drgb_image (line 644) | def is_2Drgb_image(image):
  function is_2Dgrayscale_image (line 647) | def is_2Dgrayscale_image(image):
  function _maybe_process_in_chunks (line 650) | def _maybe_process_in_chunks(process_fn, **kwargs):
  function grid_distortion (line 689) | def grid_distortion(
  function downscale (line 751) | def downscale(img, scale, interpolation=cv2.INTER_NEAREST):
  function glass_blur (line 769) | def glass_blur(img, sigma, max_delta, iterations, dxy, mode):
  function image_compression (line 800) | def image_compression(img, quality, image_type):
  function from_float (line 830) | def from_float(img, dtype, max_value=None):
  function to_float (line 841) | def to_float(img, max_value=None):

FILE: volumentations/augmentations/transforms.py
  class Float (line 46) | class Float(DualTransform):
    method apply (line 47) | def apply(self, image):
  class Contiguous (line 50) | class Contiguous(DualTransform):
    method apply (line 51) | def apply(self, image):
  class PadIfNeeded (line 55) | class PadIfNeeded(DualTransform):
    method __init__ (line 56) | def __init__(self, shape, border_mode='constant', value=0, mask_value=...
    method apply (line 63) | def apply(self, img):
    method apply_to_mask (line 66) | def apply_to_mask(self, mask):
  class Blur (line 70) | class Blur(ImageOnlyTransform):
    method __init__ (line 82) | def __init__(self, blur_limit=7, always_apply=False, p=0.5):
    method apply (line 86) | def apply(self, image, ksize=3, **params):
    method get_params (line 89) | def get_params(self, **data):
    method get_transform_init_args_names (line 92) | def get_transform_init_args_names(self):
  class GaussianNoise (line 96) | class GaussianNoise(Transform):
    method __init__ (line 97) | def __init__(self, var_limit=(10.0, 50.0), mean=0, always_apply=False,...
    method apply (line 102) | def apply(self, img, gauss=None):
    method get_params (line 105) | def get_params(self, **data):
  class Resize (line 114) | class Resize(DualTransform):
    method __init__ (line 115) | def __init__(self, shape, interpolation=1, resize_type=1, always_apply...
    method apply (line 121) | def apply(self, img):
    method apply_to_mask (line 124) | def apply_to_mask(self, mask):
  class RandomScale (line 128) | class RandomScale(DualTransform):
    method __init__ (line 129) | def __init__(self, scale_limit=[0.9, 1.1], interpolation=1, always_app...
    method get_params (line 134) | def get_params(self, **data):
    method apply (line 137) | def apply(self, img, scale):
    method apply_to_mask (line 140) | def apply_to_mask(self, mask, scale):
  class RandomScale2 (line 144) | class RandomScale2(DualTransform):
    method __init__ (line 148) | def __init__(self, scale_limit=[0.9, 1.1], interpolation=1, border_mod...
    method get_params (line 156) | def get_params(self, **data):
    method apply (line 159) | def apply(self, img, scale):
    method apply_to_mask (line 162) | def apply_to_mask(self, mask, scale):
  class RotatePseudo2D (line 166) | class RotatePseudo2D(DualTransform):
    method __init__ (line 167) | def __init__(self, axes=(0,1), limit=(-90, 90), interpolation=1, borde...
    method apply (line 176) | def apply(self, img, angle):
    method apply_to_mask (line 179) | def apply_to_mask(self, mask, angle):
    method get_params (line 182) | def get_params(self, **data):
  class RandomRotate90 (line 186) | class RandomRotate90(DualTransform):
    method __init__ (line 187) | def __init__(self, axes=None, always_apply=False, p=0.5):
    method apply (line 191) | def apply(self, img, axes, factor):
    method get_params (line 194) | def get_params(self, **data):
  class Flip (line 206) | class Flip(DualTransform):
    method __init__ (line 207) | def __init__(self, axis=None, always_apply=False, p=0.5):
    method apply (line 211) | def apply(self, img):
  class Normalize (line 222) | class Normalize(Transform):
    method __init__ (line 223) | def __init__(self, range_norm=False, always_apply=True, p=1.0):
    method apply (line 227) | def apply(self, img):
  class Transpose (line 231) | class Transpose(DualTransform):
    method __init__ (line 232) | def __init__(self, axes=(1,0,2), always_apply=False, p=0.5):
    method apply (line 236) | def apply(self, img):
  class CenterCrop (line 240) | class CenterCrop(DualTransform):
    method __init__ (line 241) | def __init__(self, shape, always_apply=False, p=1.0):
    method apply (line 245) | def apply(self, img):
  class RandomResizedCrop (line 249) | class RandomResizedCrop(DualTransform):
    method __init__ (line 250) | def __init__(self, shape, scale_limit=(0.8, 1.2), interpolation=1, res...
    method apply (line 257) | def apply(self, img, scale=1, scaled_shape=None, h_start=0, w_start=0,...
    method apply_to_mask (line 263) | def apply_to_mask(self, img, scale=1, scaled_shape=None, h_start=0, w_...
    method get_params (line 269) | def get_params(self, **data):
  class RandomCrop (line 281) | class RandomCrop(DualTransform):
    method __init__ (line 282) | def __init__(self, shape, always_apply=False, p=1.0):
    method apply (line 286) | def apply(self, img, h_start=0, w_start=0, d_start=0):
    method get_params (line 289) | def get_params(self, **data):
  class CropNonEmptyMaskIfExists (line 297) | class CropNonEmptyMaskIfExists(DualTransform):
    method __init__ (line 298) | def __init__(self, shape, always_apply=False, p=1.0):
    method apply (line 304) | def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_ma...
    method get_params (line 307) | def get_params(self, **data):
  class ResizedCropNonEmptyMaskIfExists (line 336) | class ResizedCropNonEmptyMaskIfExists(DualTransform):
    method __init__ (line 337) | def __init__(self, shape, scale_limit=(0.8, 1.2), interpolation=1, res...
    method apply (line 344) | def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_ma...
    method apply_to_mask (line 348) | def apply_to_mask(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max...
    method get_params (line 352) | def get_params(self, **data):
  class RandomGamma (line 384) | class RandomGamma(ImageOnlyTransform):
    method __init__ (line 396) | def __init__(self, gamma_limit=(80, 120), eps=None, always_apply=False...
    method apply (line 401) | def apply(self, img, gamma=1, **params):
    method get_params (line 404) | def get_params(self, **data):
    method get_transform_init_args_names (line 407) | def get_transform_init_args_names(self):
  class ElasticTransformPseudo2D (line 411) | class ElasticTransformPseudo2D(DualTransform):
    method __init__ (line 412) | def __init__(self, alpha=1000, sigma=50, alpha_affine=1, approximate=F...
    method apply (line 419) | def apply(self, img, random_state=None):
    method apply_to_mask (line 422) | def apply_to_mask(self, img, random_state=None):
    method get_params (line 425) | def get_params(self, **data):
  class ElasticTransform (line 429) | class ElasticTransform(DualTransform):
    method __init__ (line 430) | def __init__(self, deformation_limits=(0, 0.25), interpolation=1, bord...
    method apply (line 438) | def apply(self, img, sigmas, alphas, random_state=None):
    method apply_to_mask (line 441) | def apply_to_mask(self, img, sigmas, alphas, random_state=None):
    method get_params (line 444) | def get_params(self, **data):
  class Rotate (line 457) | class Rotate(DualTransform):
    method __init__ (line 458) | def __init__(self, x_limit=(-15,15), y_limit=(-15,15), z_limit=(-15,15...
    method apply (line 468) | def apply(self, img, x, y, z):
    method apply_to_mask (line 471) | def apply_to_mask(self, mask, x, y, z):
    method get_params (line 474) | def get_params(self, **data):
  class RemoveEmptyBorder (line 482) | class RemoveEmptyBorder(DualTransform):
    method __init__ (line 483) | def __init__(self, border_value=0, always_apply=False, p=1.0):
    method apply (line 488) | def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_ma...
    method get_params (line 491) | def get_params(self, **data):
  class RandomCropFromBorders (line 503) | class RandomCropFromBorders(DualTransform):
    method __init__ (line 523) | def __init__(
    method get_params (line 562) | def get_params(self, **data):
    method apply (line 579) | def apply(self, img, sh0_min=0, sh0_max=0, sh1_min=0, sh1_max=0, sh2_m...
    method apply_to_mask (line 582) | def apply_to_mask(self, mask, sh0_min=0, sh0_max=0, sh1_min=0, sh1_max...
  class GridDropout (line 586) | class GridDropout(DualTransform):
    method __init__ (line 620) | def __init__(
    method apply (line 653) | def apply(self, image, holes=(), **params):
    method apply_to_mask (line 656) | def apply_to_mask(self, image, holes=(), **params):
    method get_params (line 662) | def get_params(self, **data):
    method get_transform_init_args_names (line 733) | def get_transform_init_args_names(self):
  class RandomDropPlane (line 747) | class RandomDropPlane(DualTransform):
    method __init__ (line 762) | def __init__(
    method get_params (line 773) | def get_params(self, **data):
    method apply (line 788) | def apply(self, img, indexes=(), axis=0, **params):
    method apply_to_mask (line 791) | def apply_to_mask(self, mask, indexes=(), axis=0, **params):
  class RandomBrightnessContrast (line 794) | class RandomBrightnessContrast(ImageOnlyTransform):
    method __init__ (line 810) | def __init__(
    method apply (line 823) | def apply(self, img, alpha=1.0, beta=0.0, **params):
    method get_params (line 826) | def get_params(self, **data):
    method get_transform_init_args_names (line 832) | def get_transform_init_args_names(self):
  class ColorJitter (line 836) | class ColorJitter(ImageOnlyTransform):
    method __init__ (line 856) | def __init__(
    method __check_values (line 873) | def __check_values(value, name, offset=1, bounds=(0, float("inf")), cl...
    method get_params (line 888) | def get_params(self, **data):
    method apply (line 904) | def apply(self, img, transforms=(), **params):
    method get_transform_init_args_names (line 914) | def get_transform_init_args_names(self):
  class GridDistortion (line 918) | class GridDistortion(DualTransform):
    method __init__ (line 940) | def __init__(
    method apply (line 959) | def apply(self, img, stepsx=(), stepsy=(), interpolation=cv2.INTER_LIN...
    method apply_to_mask (line 971) | def apply_to_mask(self, img, stepsx=(), stepsy=(), **params):
    method get_params (line 983) | def get_params(self, **data):
    method get_transform_init_args_names (line 988) | def get_transform_init_args_names(self):
  class Downscale (line 998) | class Downscale(ImageOnlyTransform):
    method __init__ (line 1010) | def __init__(
    method apply (line 1027) | def apply(self, image, scale, interpolation, **params):
    method get_params (line 1030) | def get_params(self, **data):
    method get_transform_init_args_names (line 1036) | def get_transform_init_args_names(self):
  class GlassBlur (line 1039) | class GlassBlur(Blur):
    method __init__ (line 1057) | def __init__(
    method apply (line 1078) | def apply(self, img, dxy, **data):
    method get_params (line 1093) | def get_params(self, **data):
    method get_transform_init_args_names (line 1104) | def get_transform_init_args_names(self):
    method targets_as_params (line 1108) | def targets_as_params(self):
  class ImageCompression (line 1111) | class ImageCompression(ImageOnlyTransform):
    class ImageCompressionType (line 1126) | class ImageCompressionType(IntEnum):
    method __init__ (line 1130) | def __init__(
    method apply (line 1154) | def apply(self, img, quality=100, image_type=".jpg", **params):
    method get_params (line 1166) | def get_params(self, **data):
    method get_transform_init_args (line 1177) | def get_transform_init_args(self):

FILE: volumentations/core/composition.py
  class Compose (line 41) | class Compose:
    method __init__ (line 42) | def __init__(self, transforms, p=1.0, targets=[['image'],['mask']]):
    method get_always_apply_transforms (line 48) | def get_always_apply_transforms(self):
    method __call__ (line 55) | def __call__(self, force_apply=False, **data):
  class ComposeChoice (line 65) | class ComposeChoice:
    method __init__ (line 66) | def __init__(self, transforms, p=1.0, n=1, targets=[['image'], ['mask'...
    method __call__ (line 73) | def __call__(self, **data):

FILE: volumentations/core/transforms_interface.py
  function to_tuple (line 45) | def to_tuple(param, low=None, bias=None):
  class Transform (line 76) | class Transform:
    method __init__ (line 77) | def __init__(self, always_apply=False, p=0.5):
    method __call__ (line 82) | def __call__(self, force_apply, targets, **data):
    method get_params (line 97) | def get_params(self, **data):
    method apply (line 103) | def apply(self, volume, **params):
  class DualTransform (line 107) | class DualTransform(Transform):
    method __call__ (line 108) | def __call__(self, force_apply, targets, **data):
    method apply_to_mask (line 125) | def apply_to_mask(self, mask, **params):
  class ImageOnlyTransform (line 129) | class ImageOnlyTransform(Transform):
    method targets (line 133) | def targets(self):

FILE: volumentations/random_utils.py
  function get_random_state (line 45) | def get_random_state() -> np.random.RandomState:
  function randint (line 48) | def randint(
  function uniform (line 59) | def uniform(
  function normal (line 69) | def normal(
Condensed preview — 21 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (136K chars).
[
  {
    "path": ".github/workflows/python-package.yml",
    "chars": 1053,
    "preview": "# This workflow will install Python dependencies, run tests and lint with a variety of Python versions\n# For more inform"
  },
  {
    "path": ".gitignore",
    "chars": 3077,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "EXAMPLES.md",
    "chars": 749,
    "preview": "# Function\n$$ f(x, y, z) = \\frac{\\sin(xyz)}{xyz} $$\n# No Augmentation\n![](images/original.png)\n![](images/original_flat."
  },
  {
    "path": "LICENSE",
    "chars": 1064,
    "preview": "MIT License\n\nCopyright (c) 2021 ZFTurbo\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof"
  },
  {
    "path": "README.md",
    "chars": 4513,
    "preview": "# Volumentations 3D\n\n3D Volume data augmentation package inspired by albumentations.\n\nVolumentations is a working projec"
  },
  {
    "path": "images/examples.py",
    "chars": 1828,
    "preview": "\"\"\"\nThis script requires (in addition to volumentations requirements):\n    * plotly\n    * kaleido\n\"\"\"\nimport numpy as np"
  },
  {
    "path": "pyproject.toml",
    "chars": 552,
    "preview": "[tool.poetry]\nname = \"volumentations-3d\"\nversion = \"1.0.4\"\ndescription = \"Library for 3D augmentations\"\nauthors = [\n    "
  },
  {
    "path": "setup.py",
    "chars": 682,
    "preview": "try:\n    from setuptools import setup\nexcept ImportError:\n    from distutils.core import setup\n\nsetup(\n    name='volumen"
  },
  {
    "path": "test/test_basic.py",
    "chars": 1325,
    "preview": "import pytest\nimport numpy as np\n\nfrom volumentations import *\n\n\naugmentations = [\n    CenterCrop,\n    ColorJitter,\n    "
  },
  {
    "path": "tst_volumentations_speed.py",
    "chars": 2188,
    "preview": "# coding: utf-8\n__author__ = 'ZFTurbo: https://kaggle.com/zfturbo'\n\n\nfrom volumentations import *\nimport time\n\n\ndef tst_"
  },
  {
    "path": "tst_volumentations_type_1.py",
    "chars": 3170,
    "preview": "# coding: utf-8\n__author__ = 'ZFTurbo: https://kaggle.com/zfturbo'\n\n\nfrom volumentations import *\nimport os\nimport cv2\ni"
  },
  {
    "path": "tst_volumentations_type_2.py",
    "chars": 8830,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/__init__.py",
    "chars": 3137,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/__version__.py",
    "chars": 22,
    "preview": "__version__ = \"1.0.4\"\n"
  },
  {
    "path": "volumentations/augmentations/__init__.py",
    "chars": 3076,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/augmentations/functional.py",
    "chars": 28494,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/augmentations/transforms.py",
    "chars": 49198,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/core/__init__.py",
    "chars": 3024,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/core/composition.py",
    "chars": 4403,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/core/transforms_interface.py",
    "chars": 5883,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  },
  {
    "path": "volumentations/random_utils.py",
    "chars": 4254,
    "preview": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, "
  }
]

About this extraction

This page contains the full source code of the ZFTurbo/volumentations GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 21 files (127.5 KB), approximately 32.1k tokens, and a symbol index with 228 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!