[
  {
    "path": ".github/workflows/python-package.yml",
    "content": "# This workflow will install Python dependencies, run tests and lint with a variety of Python versions\n# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python\n\nname: Python package\n\non:\n  push:\n    branches: [ \"master\", \"develop\" ]\n  pull_request:\n    branches: [ \"master\" ]\n\njobs:\n  build:\n\n    runs-on: ubuntu-latest\n    strategy:\n      fail-fast: false\n      matrix:\n        python-version: [\"3.9\", \"3.10\", \"3.11\"]\n\n    steps:\n    - uses: actions/checkout@v3\n    - name: Set up Python ${{ matrix.python-version }}\n      uses: actions/setup-python@v3\n      with:\n        python-version: ${{ matrix.python-version }}\n    - name: Install dependencies\n      run: |\n        python -m pip install --upgrade pip\n        python -m pip install poetry\n        poetry install --with dev\n    - name: Lint with flake8\n      run: |\n        poetry run flake8 ./volumentations --count --select=E9,F63,F7,F82 --show-source --statistics\n    - name: Test with pytest\n      run: |\n        poetry run pytest\n"
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# poetry\n#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#   
https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control\n#poetry.lock\n\n# pdm\n#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.\n#pdm.lock\n#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it\n#   in version control.\n#   https://pdm.fming.dev/#use-with-ide\n.pdm.toml\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# PyCharm\n#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can\n#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore\n#  and can be added to the global gitignore or merged into this file.  For a more nuclear\n#  option (not recommended) you can uncomment the following to ignore the entire idea folder.\n#.idea/"
  },
  {
    "path": "EXAMPLES.md",
    "content": "# Function\n$$ f(x, y, z) = \\frac{\\sin(xyz)}{xyz} $$\n# No Augmentation\n![](images/original.png)\n![](images/original_flat.png)\n# Downscale\n![](images/Downscale.png)\n![](images/Downscale_flat.png)\n# ElasticTransform\n![](images/ElasticTransform.png)\n![](images/ElasticTransform_flat.png)\n# GlassBlur\n![](images/GlassBlur.png)\n![](images/GlassBlur_flat.png)\n# GridDistortion\n![](images/GridDistortion.png)\n![](images/GridDistortion_flat.png)\n# GridDropout\n![](images/GridDropout.png)\n![](images/GridDropout_flat.png)\n# RandomGamma\n![](images/RandomGamma.png)\n![](images/RandomGamma_flat.png)\n# RandomScale2\n![](images/RandomScale2.png)\n![](images/RandomScale2_flat.png)\n# RotatePseudo2D\n![](images/RotatePseudo2D.png)\n![](images/RotatePseudo2D_flat.png)\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2021 ZFTurbo\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Volumentations 3D\n\n3D Volume data augmentation package inspired by albumentations.\n\nVolumentations is a working project, which originated from the following Git repositories:\n- Original:                 https://github.com/albumentations-team/albumentations\n- 3D Conversion:            https://github.com/ashawkey/volumentations\n- Continued Development:    https://github.com/ZFTurbo/volumentations\n\nNevertheless, if you are using this subpackage, please give credit to all authors including ashawkey, ZFTurbo, qubvel and muellerdo.\n\nInitially inspired by [albumentations](https://github.com/albumentations-team/albumentations) library for augmentation of 2D images.\n\n# Installation\n\n```sh\npip install volumentations-3D\n```\n\n# Simple Example\n\n```python\nfrom volumentations import *\n\ndef get_augmentation(patch_size):\n    return Compose([\n        Rotate((-15, 15), (0, 0), (0, 0), p=0.5),\n        RandomCropFromBorders(crop_value=0.1, p=0.5),\n        ElasticTransform((0, 0.25), interpolation=2, p=0.1),\n        Resize(patch_size, interpolation=1, resize_type=0, always_apply=True, p=1.0),\n        Flip(0, p=0.5),\n        Flip(1, p=0.5),\n        Flip(2, p=0.5),\n        RandomRotate90((1, 2), p=0.5),\n        GaussianNoise(var_limit=(0, 5), p=0.2),\n        RandomGamma(gamma_limit=(80, 120), p=0.2),\n    ], p=1.0)\n\naug = get_augmentation((64, 128, 128))\n\nimg = np.random.randint(0, 255, size=(128, 256, 256), dtype=np.uint8)\nlbl = np.random.randint(0, 1, size=(128, 256, 256), dtype=np.uint8)\n\n# with mask\ndata = {'image': img, 'mask': lbl}\naug_data = aug(**data)\nimg, lbl = aug_data['image'], aug_data['mask']\n\n# without mask\ndata = {'image': img}\naug_data = aug(**data)\nimg = aug_data['image']\n\n```\n\n* Check working usage example in [tst_volumentations_type_1.py](tst_volumentations_type_1.py)  \n* Added another usage example / testing in [tst_volumentations_type_2.py](tst_volumentations_type_2.py)  \n\n# Difference from 
initial version\n\n* Diverse bug fixes.\n* Implemented multiple augmentations.\n* Approximation enhancements to be closer to Albumentations.\n\n# Implemented 3D augmentations\n\nCheck the [EXAMPLES](EXAMPLES.md) page for visual demonstrations\n```python\nCenterCrop\nColorJitter\nContiguous\nCropNonEmptyMaskIfExists\nDownscale\nElasticTransform\nElasticTransformPseudo2D\nFlip\nFloat\nGaussianNoise\nGlassBlur\nGridDistortion\nGridDropout\nImageCompression\nNormalize\nPadIfNeeded\nRandomBrightnessContrast\nRandomCrop\nRandomCropFromBorders\nRandomDropPlane\nRandomGamma\nRandomResizedCrop\nRandomRotate90\nRandomScale\nRandomScale2\nRemoveEmptyBorder\nResize\nResizedCropNonEmptyMaskIfExists\nRotate\nRotatePseudo2D\nTranspose\n```\n\n# Speed table\n\nSpeed in seconds per one sample.\n\n| Aug name | Cube = 64px | Cube = 96px | Cube = 128px | Cube = 224px | Cube = 256px |\n|----------|-------------|-------------|--------------|--------------|--------------|\n| Rotate | 0.0402 | 0.1366 | 0.3246 | 1.7546 | 2.6349 | \n| RandomCropFromBorders| 0.0037 | 0.0129 | 0.0315 | 0.1634 | 0.2426 |\n| ElasticTransform | 0.1588 | 0.5439 | 2.8649 | 11.8937 | 42.3886 |\n| Resize (type = 0) | 0.4029 | 0.4077 | 0.4245 | 0.5545 | 0.6278 |\n| Resize (type = 1) | 0.3618 | 0.3696 | 0.3871 | 0.5174 | 0.5896 |\n| Flip | 0.0042 | 0.0134 | 0.0314 | 0.1649 | 0.2453 |\n| RandomRotate90 | 0.0040 | 0.0140 | 0.0306 | 0.1672 | 0.2439 |\n| GaussianNoise | 0.0143 | 0.0406 | 0.0956 | 0.4992 | 0.7381 |\n| RandomGamma | 0.0066 | 0.0211 | 0.0505 | 0.2654 |  0.3989 |\n| RandomScale | 0.0158 | 0.0518 | 0.1198 | 0.6391 | 0.9457 |\n\n### Related repositories\n\n * [timm_3d](https://github.com/ZFTurbo/timm_3d) - classification models in 3D for PyTorch\n * [classification_models_3D](https://github.com/ZFTurbo/classification_models_3D) - 3D volumes classification models for Keras/Tensorflow\n * [segmentation_models_pytorch_3d](https://github.com/ZFTurbo/segmentation_models_pytorch_3d) - 3D volumes segmentation models 
for PyTorch\n * [segmentation_models_3D](https://github.com/ZFTurbo/segmentation_models_3D) - segmentation models in 3D for Keras/Tensorflow\n\n# Citation\n\nFor more details, please refer to the publication: https://doi.org/10.1016/j.compbiomed.2021.105089\n\nIf you find this code useful, please cite it as:\n```\n@article{solovyev20223d,\n  title={3D convolutional neural networks for stalled brain capillary detection},\n  author={Solovyev, Roman and Kalinin, Alexandr A and Gabruseva, Tatiana},\n  journal={Computers in Biology and Medicine},\n  volume={141},\n  pages={105089},\n  year={2022},\n  publisher={Elsevier},\n  doi={10.1016/j.compbiomed.2021.105089}\n}\n```\n"
  },
  {
    "path": "images/examples.py",
    "content": "\"\"\"\nThis script requires (in addition to volumentations requirements):\n    * plotly\n    * kaleido\n\"\"\"\nimport numpy as np\nfrom volumentations import *\nfrom plotly import graph_objects as go\nfrom plotly import express as px\n\n\naugmentations = [\n    Downscale(.5, .51),\n    ElasticTransform((.7, .71)),\n    GlassBlur(),\n    GridDistortion(distort_limit=.5),\n    GridDropout(holes_number_x=3, holes_number_y=3, holes_number_z=3, random_offset=True, fill_value=.5),\n    RandomGamma(gamma_limit=(70, 71)),\n    RandomScale2(scale_limit=[1.5, 1.6]),\n    RotatePseudo2D((1, 2), limit=(40, 41)),\n]\n\nX, Y, Z = np.mgrid[-8:8:40j, -8:8:40j, -8:8:40j]\nvalues = np.sin(X*Y*Z) / (X*Y*Z)\n\nfig = go.Figure(data=go.Isosurface(\n    x=X.flatten(),\n    y=Y.flatten(),\n    z=Z.flatten(),\n    value=values.flatten(),\n    isomin=.1,\n    isomax=.9,\n    opacity=.5,\n    surface_count=6,\n    caps=dict(x_show=False, y_show=False, z_show=False),\n    colorscale=\"gray\"\n))\nfig.write_image(\"images/original.png\")\nfig = px.imshow(values[20], color_continuous_scale=\"gray\", zmin=values[20].min(), zmax=values[20].max())\nfig.write_image(\"images/original_flat.png\")\n\nfor aug in augmentations:\n    cube = aug(True, [\"image\"], image=values)[\"image\"]\n\n    fig = go.Figure(data=go.Isosurface(\n        x=X.flatten(),\n        y=Y.flatten(),\n        z=Z.flatten(),\n        value=cube.flatten(),\n        isomin=.1,\n        isomax=.9,\n        opacity=.5,\n        surface_count=6,\n        caps=dict(x_show=False, y_show=False, z_show=False),\n        colorscale=\"gray\"\n    ))\n    name = aug.__class__.__name__\n    print(f\"images/{name}.png\")\n    fig.write_image(f\"images/{name}.png\")\n\n    fig = px.imshow(cube[20], color_continuous_scale=\"gray\", zmin=values[20].min(), zmax=values[20].max())\n    print(f\"images/{name}_flat.png\")\n    fig.write_image(f\"images/{name}_flat.png\")\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[tool.poetry]\nname = \"volumentations-3d\"\nversion = \"1.0.4\"\ndescription = \"Library for 3D augmentations\"\nauthors = [\n    \"Roman Sol (ZFTurbo)\",\n    \"ashawkey\",\n    \"qubvel\",\n    \"muellerdo\"\n]\nlicense = \"MIT\"\nreadme = \"README.md\"\npackages = [{include = \"volumentations\"}]\n\n[tool.poetry.dependencies]\npython = \">=3.9, <3.12\"\nscikit-image = \"^0.20.0\"\nopencv-python = \"^4.7.0.72\"\nnumpy = \"^1.24.3\"\n\n[tool.poetry.group.dev.dependencies]\npytest = \"^7.3.1\"\nflake8 = \"^6.0.0\"\n\n[build-system]\nrequires = [\"poetry-core\"]\nbuild-backend = \"poetry.core.masonry.api\"\n"
  },
  {
    "path": "setup.py",
    "content": "try:\n    from setuptools import setup\nexcept ImportError:\n    from distutils.core import setup\n\nsetup(\n    name='volumentations_3D',\n    version='1.0.4',\n    author='Roman Sol (ZFTurbo), ashawkey, qubvel, muellerdo',\n    packages=['volumentations', 'volumentations/augmentations', 'volumentations/core'],\n    url='https://github.com/ZFTurbo/volumentations',\n    description='Library for 3D augmentations',\n    long_description='Library for 3D augmentations. Inspired by albumentations.'\n                     'More details: https://github.com/ZFTurbo/volumentations',\n    install_requires=[\n        'scikit-image',\n        'scipy',\n        'opencv-python',\n        \"numpy\",\n    ],\n)\n"
  },
  {
    "path": "test/test_basic.py",
    "content": "import pytest\nimport numpy as np\n\nfrom volumentations import *\n\n\naugmentations = [\n    CenterCrop,\n    ColorJitter,\n    Contiguous,\n    # CropNonEmptyMaskIfExists,\n    Downscale,\n    ElasticTransform,\n    ElasticTransformPseudo2D,\n    Flip,\n    Float,\n    GaussianNoise,\n    GlassBlur,\n    GridDistortion,\n    GridDropout,\n    ImageCompression,\n    Normalize,\n    PadIfNeeded,\n    RandomBrightnessContrast,\n    RandomCrop,\n    RandomCropFromBorders,\n    RandomDropPlane,\n    RandomGamma,\n    RandomResizedCrop,\n    RandomRotate90,\n    RandomScale,\n    RandomScale2,\n    RemoveEmptyBorder,\n    Resize,\n    # ResizedCropNonEmptyMaskIfExists,\n    Rotate,\n    RotatePseudo2D,\n    Transpose,\n]\n\narg_required = [\n    CenterCrop,\n    CropNonEmptyMaskIfExists,\n    PadIfNeeded,\n    RandomCrop,\n    RandomResizedCrop,\n    Resize,\n    ResizedCropNonEmptyMaskIfExists,\n]\n\n\n@pytest.fixture(scope=\"module\")\ndef cube():\n    X, Y, Z = np.mgrid[-8:8:40j, -8:8:40j, -8:8:40j]\n    values = np.sin(X*Y*Z) / (X*Y*Z)\n    return values\n\n\n@pytest.mark.parametrize(\"aug_class\", augmentations)\ndef test_augmentations(aug_class, cube):\n    if aug_class in arg_required:\n        aug = aug_class(shape=(30, 30, 30))\n    else:\n        aug = aug_class()\n    new_cube = aug(True, \"image\", image=cube)[\"image\"]\n    print(aug_class.__name__, new_cube.shape)\n"
  },
  {
    "path": "tst_volumentations_speed.py",
    "content": "# coding: utf-8\n__author__ = 'ZFTurbo: https://kaggle.com/zfturbo'\n\n\nfrom volumentations import *\nimport time\n\n\ndef tst_volumentations_speed():\n    total_volumes_to_check = 100\n    sizes_list = [\n        (64, 64, 64),\n        (96, 96, 96),\n        (128, 128, 128),\n        (224, 224, 224),\n        (256, 256, 256),\n    ]\n\n    for size in sizes_list:\n        patch_size1 = (32, 32, 32)\n        patch_size2 = (200, 200, 200)\n\n        full_list_to_check = [\n            Rotate((-15, 15), (-15, 15), (-15, 15), p=1.0),\n            RandomCropFromBorders(crop_value=0.1, p=1.0),\n            ElasticTransform((0, 0.25), interpolation=2, p=1.0),\n            Resize(patch_size1, interpolation=1, resize_type=0, always_apply=True, p=1.0),\n            Resize(patch_size1, interpolation=1, resize_type=1, always_apply=True, p=1.0),\n            Resize(patch_size2, interpolation=1, resize_type=0, always_apply=True, p=1.0),\n            Resize(patch_size2, interpolation=1, resize_type=1, always_apply=True, p=1.0),\n            Flip(0, p=1.0),\n            Flip(1, p=1.0),\n            Flip(2, p=1.0),\n            RandomRotate90((1, 2), p=1.0),\n            GaussianNoise(var_limit=(0, 5), p=1.0),\n            RandomGamma(gamma_limit=(80, 120), p=1.0),\n            RandomScale(scale_limit=[0.9, 1.1], interpolation=1, always_apply=True, p=1.0)\n        ]\n\n        for f in full_list_to_check:\n            name = f.__class__.__name__\n            aug1 = Compose([\n                f,\n            ], p=1.0)\n\n            start_time = time.time()\n            data = []\n            for i in range(total_volumes_to_check):\n                data.append(np.random.uniform(low=0.0, high=255, size=size))\n\n            for i, cube in enumerate(data):\n                try:\n                    cube1 = aug1(image=cube)['image']\n                except Exception as e:\n                    print('Augmentation error: {}'.format(str(e)))\n                    
continue\n\n            delta = time.time() - start_time\n            print('Size: {} Aug: {} Time: {:.2f} sec Per sample: {:.4f} sec'.format(size, name, delta, delta / len(data)))\n            print(f.__dict__)\n\n\nif __name__ == '__main__':\n    tst_volumentations_speed()\n"
  },
  {
    "path": "tst_volumentations_type_1.py",
    "content": "# coding: utf-8\n__author__ = 'ZFTurbo: https://kaggle.com/zfturbo'\n\n\nfrom volumentations import *\nimport os\nimport cv2\nimport urllib.request\nimport time\n\n\nOUTPUT_DIR = './debug_videos/'\nif not os.path.isdir(OUTPUT_DIR):\n    os.mkdir(OUTPUT_DIR)\n\n\ndef read_video(f):\n    cap = cv2.VideoCapture(f)\n    length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))\n    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n    fps = cap.get(cv2.CAP_PROP_FPS)\n    current_frame = 0\n    frame_list = []\n    print('ID: {} Video length: {} Width: {} Height: {} FPS: {}'.format(os.path.basename(f), length, width, height, fps))\n    while (cap.isOpened()):\n        ret, frame = cap.read()\n        if ret is False:\n            break\n        frame_list.append(frame.copy())\n        current_frame += 1\n\n    frame_list = np.array(frame_list, dtype=np.uint8)\n    return frame_list\n\n\ndef get_augmentation_v1(patch_size):\n    return Compose([\n        Rotate((-15, 15), (0, 0), (0, 0), p=0.5),\n        RandomCropFromBorders(crop_value=0.1, p=0.5),\n        ElasticTransform((0, 0.25), interpolation=2, p=0.1),\n        RandomDropPlane(plane_drop_prob=0.1, axes=(0, 1, 2), p=0.5),\n        Resize(patch_size, interpolation=1, always_apply=True, p=1.0),\n        Flip(0, p=0.5),\n        Flip(1, p=0.5),\n        Flip(2, p=0.5),\n        RandomRotate90((1, 2), p=0.5),\n        GaussianNoise(var_limit=(0, 5), p=0.5),\n        RandomGamma(gamma_limit=(80, 120), p=0.5),\n    ], p=1.0)\n\n\ndef get_augmentation_v2(patch_size):\n    return Compose([\n        Resize(\n            patch_size,\n            interpolation=1,\n            resize_type=1,\n            always_apply=True,\n            p=1.0\n        ),\n    ], p=1.0)\n\n\n\ndef create_video(image_list, out_file, fps):\n    height, width = image_list[0].shape[:2]\n    # fourcc = cv2.VideoWriter_fourcc(*'DIB ')\n    fourcc = cv2.VideoWriter_fourcc(*'XVID')\n    # fourcc 
= cv2.VideoWriter_fourcc(*'H264')\n    # fourcc = -1\n    video = cv2.VideoWriter(out_file, fourcc, fps, (width, height), True)\n    for im in image_list:\n        if len(im.shape) == 2:\n            im = np.stack((im, im, im), axis=2)\n        video.write(im.astype(np.uint8))\n    cv2.destroyAllWindows()\n    video.release()\n\n\ndef tst_volumentations():\n    number_of_aug_videos = 10\n    out_shape = (150, 224, 360)\n    inp_video = 'sample.mp4'\n    if not os.path.isfile(inp_video):\n        print('Downloading sample.mp4...')\n        urllib.request.urlretrieve('https://github.com/ZFTurbo/volumentations/releases/download/v1.0/sample.mp4', inp_video)\n\n    cube = read_video(inp_video)\n    print('Sample video shape: {}'.format(cube.shape))\n    aug = get_augmentation_v1(out_shape)\n    start_time = time.time()\n    for i in range(number_of_aug_videos):\n        single_time = time.time()\n        data = {'image': cube}\n        aug_data = aug(**data)\n        img = aug_data['image']\n        create_video(img, OUTPUT_DIR + 'video_test_{}.avi'.format(i), 24)\n        print('Aug: {} Time: {:.2f} sec'.format(i, time.time() - single_time))\n    print('Total augm time: {:.2f} sec'.format(time.time() - start_time))\n\n\nif __name__ == '__main__':\n    tst_volumentations()\n"
  },
  {
    "path": "tst_volumentations_type_2.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      
#\n#=================================================================================#\n#-----------------------------------------------------#\n#                   Library imports                   #\n#-----------------------------------------------------#\n# External libraries\nimport os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom skimage.data import cells3d\n# Volumentations libraries\nfrom volumentations import Compose\nfrom volumentations import augmentations as ai\n\n# -----------------------------------------------------#\n#                    GIF Visualizer                    #\n# -----------------------------------------------------#\ndef grayscale_normalization(image):\n    # Identify minimum and maximum\n    max_value = np.max(image)\n    min_value = np.min(image)\n    # Scaling\n    image_scaled = (image - min_value) / (max_value - min_value)\n    image_normalized = np.around(image_scaled * 255, decimals=0)\n    # Return normalized image\n    return image_normalized\n\n\ndef visualize_evaluation(index, volume, viz_path=\"test_volumentations\"):\n    # Grayscale Normalization of Volume\n    volume_gray = grayscale_normalization(volume)\n\n    # Create a figure and two axes objects from matplot\n    fig = plt.figure()\n    img = plt.imshow(volume_gray[0, :, :], cmap='gray', vmin=0, vmax=255,\n                     animated=True)\n\n    # Update function to show the slice for the current frame\n    def update(i):\n        plt.suptitle(\"Augmentation: \" + str(index) + \" - \" + \"Slice: \" + str(i))\n        img.set_data(volume_gray[i, :, :])\n        return img\n\n    # Compute the animation (gif)\n    ani = animation.FuncAnimation(fig, update, frames=volume_gray.shape[0],\n                                  interval=5, repeat_delay=0, blit=False)\n    # Set up the output path for the gif\n    if not os.path.exists(viz_path):\n        os.mkdir(viz_path)\n    file_name = \"visualization.\" + str(index) 
+ \".gif\"\n    out_path = os.path.join(viz_path, file_name)\n    # Save the animation (gif)\n    ani.save(out_path, writer='imagemagick', fps=None, dpi=None)\n    # Close the matplot\n    plt.close()\n\n\n#-----------------------------------------------------#\n#                Albumentations Builder               #\n#-----------------------------------------------------#\n\"\"\" Builds the albumenations augmentator by initializing  all transformations.\n    The activated transformation and their configurations are defined as\n    class variables.\n\n    -> Builds a new self.operator\n\"\"\"\ndef build(aug_flip, aug_rotate, aug_brightness, aug_contrast, aug_saturation,\n          aug_hue, aug_scale, aug_crop, aug_gridDistortion, aug_compression,\n          aug_gaussianNoise, aug_gaussianBlur, aug_downscaling, aug_gamma,\n          aug_elasticTransform):\n    # Initialize transform list\n    transforms = []\n    # Fill transform list\n    if aug_flip:\n        tf = ai.Flip(p=0.5)\n        transforms.append(tf)\n    if aug_rotate:\n        tf = ai.RandomRotate90(p=0.5)\n        transforms.append(tf)\n    if aug_brightness:\n        tf = ai.ColorJitter(contrast=0, hue=0, saturation=0,\n                            p=0.5)\n        transforms.append(tf)\n    if aug_contrast:\n        tf = ai.ColorJitter(brightness=0, hue=0, saturation=0,\n                            p=0.5)\n        transforms.append(tf)\n    if aug_saturation:\n        tf = ai.ColorJitter(brightness=0, contrast=0, hue=0,\n                            p=0.5)\n        transforms.append(tf)\n    if aug_hue:\n        tf = ai.ColorJitter(brightness=0, contrast=0, saturation=0,\n                            p=0.5)\n        transforms.append(tf)\n    if aug_scale:\n        tf = ai.RandomScale(p=0.5)\n        transforms.append(tf)\n    if aug_crop:\n        tf = ai.RandomCrop(shape=(30, 128, 128), p=0.5)\n        transforms.append(tf)\n    if aug_gridDistortion:\n        tf = ai.GridDistortion(p=0.5)\n        
transforms.append(tf)\n    if aug_compression:\n        tf = ai.ImageCompression(p=0.5)\n        transforms.append(tf)\n    if aug_gaussianNoise:\n        tf = ai.GaussianNoise(p=0.5)\n        transforms.append(tf)\n    if aug_gaussianBlur:\n        tf = ai.GlassBlur(p=0.5)\n        transforms.append(tf)\n    if aug_downscaling:\n        tf = ai.Downscale(p=0.5)\n        transforms.append(tf)\n    if aug_gamma:\n        tf = ai.RandomGamma(p=0.5)\n        transforms.append(tf)\n    if aug_elasticTransform:\n        tf = ai.ElasticTransform(p=0.5)\n        transforms.append(tf)\n\n    # Compose transforms\n    return Compose(transforms)\n\n\n#-----------------------------------------------------#\n#                  Application Test                   #\n#-----------------------------------------------------#\nif __name__ == \"__main__\":\n    # Obtain 3D volume of fluorescence microscopy image of cells\n    data_raw = cells3d()\n    # Extract nuclei\n    data = np.reshape(data_raw[:,1,:,:], (60, 256, 256))\n    data = np.float32(data)\n    data = grayscale_normalization(data)\n    # Visualize original volume\n    visualize_evaluation(\"original\", data)\n    print(data)\n    print(\"original\", data.shape)\n    # Setup options\n    options = [False for x in range(15)]\n    options_labels = [\"flip\", \"rotate\", \"brightness\", \"contrast\", \"saturation\",\n                      \"hue\", \"scale\", \"crop\", \"grid_distortion\", \"compression\",\n                      \"gaussian_noise\", \"gaussian_blur\", \"downscaling\", \"gamma\",\n                      \"elastic_transform\"]\n    # Apply each augmentation once for testing\n    for i in range(15):\n        # Active current augmentation technique\n        options_curr = options.copy()\n        options_curr[i] = True\n        # Initialize Volumentations\n        data_aug = build(*options_curr)\n        # Apply augmentation\n        img_augmented = data_aug(image=data)[\"image\"]\n        # Visualize result\n       
 print(options_labels[i], img_augmented.shape)\n        visualize_evaluation(options_labels[i], img_augmented)\n"
  },
  {
    "path": "volumentations/__init__.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      #\n#=================================================================================#\nfrom .augmentations.transforms import *\nfrom .core.composition import *\nfrom .core.transforms_interface import *\n"
  },
  {
    "path": "volumentations/__version__.py",
    "content": "__version__ = \"1.0.4\"\n"
  },
  {
    "path": "volumentations/augmentations/__init__.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      #\n#=================================================================================#\nfrom .functional import *\nfrom .transforms import *\n"
  },
  {
    "path": "volumentations/augmentations/functional.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      
#\n#=================================================================================#\nimport numpy as np\nfrom functools import wraps\nimport skimage.transform as skt\nimport scipy.ndimage as sci  # scipy.ndimage.interpolation is deprecated; rotate/shift live in scipy.ndimage\nfrom scipy.ndimage import zoom\nimport cv2\nfrom scipy.ndimage import gaussian_filter\nfrom scipy.ndimage import map_coordinates\nfrom warnings import warn\nfrom itertools import product\n\n\nMAX_VALUES_BY_DTYPE = {\n    np.dtype(\"uint8\"): 255,\n    np.dtype(\"uint16\"): 65535,\n    np.dtype(\"uint32\"): 4294967295,\n    np.dtype(\"float32\"): 1.0,\n}\n\n\n\"\"\"\nvol: [H, W, D(, C)]\n\nx, y, z <--> H, W, D\n\nshapes should be given in (H, W, D) form.\n\nskimage interpolation notations:\n\norder = 0: Nearest-Neighbor\norder = 1: Bi-Linear (default)\norder = 2: Bi-Quadratic\norder = 3: Bi-Cubic\norder = 4: Bi-Quartic\norder = 5: Bi-Quintic\n\nInterpolation behaves strangely when the input is of type int.\n** Be sure to change volume and mask data type to float !!! **\n\"\"\"\n\n\ndef preserve_shape(func):\n    \"\"\"\n    Preserve shape of the image\n    \"\"\"\n\n    @wraps(func)\n    def wrapped_function(img, *args, **kwargs):\n        shape = img.shape\n        result = func(img, *args, **kwargs)\n        result = result.reshape(shape)\n        return result\n\n    return wrapped_function\n\ndef rotate2d(img, angle, axes=(0,1), reshape=False, interpolation=1, border_mode='reflect', value=0):\n    return sci.rotate(img, angle, axes, reshape=reshape, order=interpolation, mode=border_mode, cval=value)\n\n\ndef shift(img, shift, interpolation=1, border_mode='reflect', value=0):\n    return sci.shift(img, shift, order=interpolation, mode=border_mode, cval=value)\n\n\ndef crop(img, x1, y1, z1, x2, y2, z2):\n    height, width, depth = img.shape[:3]\n    if x2 <= x1 or y2 <= y1 or z2 <= z1:\n        raise ValueError(\"expected x2 > x1, y2 > y1 and z2 > z1\")\n    if x1 < 0 or y1 < 0 or z1 < 0:\n        raise ValueError(\"crop coordinates must be non-negative\")\n    if x2 > height or y2 > width or z2 > depth:\n        img = pad(img, (x2, y2, z2))\n        
warn('image size smaller than crop size, pad by default.', UserWarning)\n\n    return img[x1:x2, y1:y2, z1:z2]\n\n\ndef get_center_crop_coords(height, width, depth, crop_height, crop_width, crop_depth):\n    x1 = (height - crop_height) // 2\n    x2 = x1 + crop_height\n    y1 = (width - crop_width) // 2\n    y2 = y1 + crop_width\n    z1 = (depth - crop_depth) // 2\n    z2 = z1 + crop_depth\n    return x1, y1, z1, x2, y2, z2\n\n\ndef center_crop(img, crop_height, crop_width, crop_depth):\n    height, width, depth = img.shape[:3]\n    if height < crop_height or width < crop_width or depth < crop_depth:\n        raise ValueError(\"crop size larger than image size\")\n    x1, y1, z1, x2, y2, z2 = get_center_crop_coords(height, width, depth, crop_height, crop_width, crop_depth)\n    img = img[x1:x2, y1:y2, z1:z2]\n    return img\n\n\ndef get_random_crop_coords(height, width, depth, crop_height, crop_width, crop_depth, h_start, w_start, d_start):\n    x1 = int((height - crop_height) * h_start)\n    x2 = x1 + crop_height\n    y1 = int((width - crop_width) * w_start)\n    y2 = y1 + crop_width\n    z1 = int((depth - crop_depth) * d_start)\n    z2 = z1 + crop_depth\n    return x1, y1, z1, x2, y2, z2\n\n\ndef random_crop(img, crop_height, crop_width, crop_depth, h_start, w_start, d_start):\n    height, width, depth = img.shape[:3]\n    if height < crop_height or width < crop_width or depth < crop_depth:\n        # pad expects the target shape in (H, W, D) order\n        img = pad(img, (crop_height, crop_width, crop_depth))\n        warn('image size smaller than crop size, pad by default.', UserWarning)\n    else:\n        x1, y1, z1, x2, y2, z2 = get_random_crop_coords(height, width, depth, crop_height, crop_width, crop_depth, h_start, w_start, d_start)\n        img = img[x1:x2, y1:y2, z1:z2]\n    return img\n\n\ndef normalize(img, range_norm=True):\n    if range_norm:\n        mn = img.min()\n        mx = img.max()\n        if mx != mn:\n            img = (img - mn) / (mx - mn)\n    mean = img.mean()\n    std = img.std()\n    denominator = np.reciprocal(std)\n    if 
np.isinf(denominator).any():\n        img[...] = 0\n    else:\n        img = (img - mean) * denominator\n    return img\n\n\ndef pad(image, new_shape, border_mode=\"reflect\", value=0):\n    '''\n    image: [H, W, D, C] or [H, W, D]\n    new_shape: [H, W, D]\n    '''\n    axes_not_pad = len(image.shape) - len(new_shape)\n\n    old_shape = np.array(image.shape[:len(new_shape)])\n    new_shape = np.array([max(new_shape[i], old_shape[i]) for i in range(len(new_shape))])\n\n    difference = new_shape - old_shape\n    pad_below = difference // 2\n    pad_above = difference - pad_below\n\n    pad_list = [list(i) for i in zip(pad_below, pad_above)] + [[0, 0]] * axes_not_pad\n\n    if border_mode == 'reflect':\n        res = np.pad(image, pad_list, border_mode)\n    elif border_mode == 'constant':\n        res = np.pad(image, pad_list, border_mode, constant_values=value)\n    else:\n        raise ValueError\n\n    return res\n\n\ndef gaussian_noise(img, gauss):\n    img = img.astype(\"float32\")\n    return img + gauss\n\n\ndef resize(img, new_shape, interpolation=1, resize_type=0):\n    \"\"\"\n    img: [H, W, D, C] or [H, W, D]\n    new_shape: [H, W, D]\n    interpolation: The order of the spline interpolation (0-5)\n    resize_type: what type of resize to use: scikit-image (0) or zoom (1)\n    \"\"\"\n\n    if resize_type == 0:\n        new_img = skt.resize(\n            img,\n            new_shape,\n            order=interpolation,\n            mode='reflect',\n            cval=0,\n            clip=True,\n            anti_aliasing=False\n        )\n    else:\n        shp = tuple(np.array(new_shape) / np.array(img.shape[:3]))\n\n        if len(img.shape) == 4:\n            # Multichannel\n            data = []\n            for i in range(img.shape[-1]):\n                subimg = img[..., i].copy()\n                d0 = zoom(subimg, shp, order=interpolation)\n                data.append(d0.copy())\n            new_img = np.stack(data, axis=-1)\n        else:\n            
new_img = zoom(img.copy(), shp, order=interpolation)\n\n    return new_img\n\n\ndef rescale(img, scale, interpolation=1):\n    \"\"\"\n    img: [H, W, D, C] or [H, W, D]\n    scale: scalar float\n    \"\"\"\n    return skt.rescale(img, scale, order=interpolation, mode='reflect', cval=0,\n                       clip=True, channel_axis=-1, anti_aliasing=False)\n    \"\"\"\n    shape = [int(scale * i) for i in img.shape[:3]]\n    return resize(img, shape, interpolation)\n    \"\"\"\n\n\n@preserve_shape\ndef gamma_transform(img, gamma):\n    if img.dtype == np.uint8:\n        table = (np.arange(0, 256.0 / 255, 1.0 / 255) ** gamma) * 255\n        img = cv2.LUT(img, table.astype(np.uint8))\n    else:\n        img = np.power(img, gamma)\n\n    return img\n\n\ndef elastic_transform_pseudo2D(img, alpha, sigma, alpha_affine, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_REFLECT_101, value=None, random_state=42, approximate=False):\n    \"\"\"Elastic deformation of images as described in [Simard2003]_ (with modifications).\n    Based on https://gist.github.com/erniejunior/601cdf56d2b424757de5\n\n    .. [Simard2003] Simard, Steinkraus and Platt, \"Best Practices for\n         Convolutional Neural Networks applied to Visual Document Analysis\", in\n         Proc. 
of the International Conference on Document Analysis and\n         Recognition, 2003.\n    \"\"\"\n    random_state = np.random.RandomState(random_state)\n\n    height, width, depth = img.shape[:3]  # vol convention: [H, W, D(, C)]\n\n    # Random affine\n    center_square = np.float32((height, width)) // 2\n    square_size = min((height, width)) // 3\n    alpha = float(alpha)\n    sigma = float(sigma)\n    alpha_affine = float(alpha_affine)\n\n    pts1 = np.float32(\n        [\n            center_square + square_size,\n            [center_square[0] + square_size, center_square[1] - square_size],\n            center_square - square_size,\n        ]\n    )\n    pts2 = pts1 + random_state.uniform(-alpha_affine, alpha_affine, size=pts1.shape).astype(np.float32)\n    matrix = cv2.getAffineTransform(pts1, pts2)\n\n    # pseudo 2D: warp each depth slice independently\n    res = np.zeros_like(img)\n    for d in range(depth):\n        tmp = img[:, :, d] # [H, W(, C)]\n        tmp = cv2.warpAffine(tmp, M=matrix, dsize=(width, height), flags=interpolation, borderMode=border_mode, borderValue=value)\n        res[:, :, d] = tmp\n    img = res\n\n\n    if approximate:\n        # Approximate computation: smooth the displacement map with a large enough kernel.\n        # On large images (512+) this is approximately 2x faster\n        dx = random_state.rand(height, width).astype(np.float32) * 2 - 1\n        cv2.GaussianBlur(dx, (17, 17), sigma, dst=dx)\n        dx *= alpha\n\n        dy = random_state.rand(height, width).astype(np.float32) * 2 - 1\n        cv2.GaussianBlur(dy, (17, 17), sigma, dst=dy)\n        dy *= alpha\n    else:\n        dx = np.float32(gaussian_filter((random_state.rand(height, width) * 2 - 1), sigma) * alpha)\n        dy = np.float32(gaussian_filter((random_state.rand(height, width) * 2 - 1), sigma) * alpha)\n\n    x, y = np.meshgrid(np.arange(width), np.arange(height))\n\n    map_x = np.float32(x + dx)\n    map_y = np.float32(y + dy)\n\n    # pseudo 2D\n    res = np.zeros_like(img)\n    for d in range(depth):\n        tmp 
= img[:, :, d] # [H, W, C]\n        tmp = cv2.remap(tmp, map1=map_x, map2=map_y, interpolation=interpolation, borderMode=border_mode, borderValue=value)\n        res[:, :, d] = tmp\n    img = res\n\n    return img\n\n\n\"\"\"\nLater are coordinates-based 3D rotation and elastic transforms.\nreference: https://github.com/MIC-DKFZ/batchgenerators\n\"\"\"\n\ndef elastic_transform(img, sigmas, alphas, interpolation=1, border_mode='reflect', value=0, random_state=42):\n    \"\"\"\n    img: [H, W, D(, C)]\n    \"\"\"\n    coords = generate_coords(img.shape[:3])\n    coords = elastic_deform_coords(coords, sigmas, alphas, random_state)\n    coords = recenter_coords(coords)\n    if len(img.shape) == 4:\n        num_channels = img.shape[3]\n        res = []\n        for channel in range(num_channels):\n            res.append(map_coordinates(img[:,:,:,channel], coords, order=interpolation, mode=border_mode, cval=value))\n        return np.stack(res, -1)\n    else:\n        return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)\n\n\ndef generate_coords(shape):\n    \"\"\"\n    coords: [n_dim=3, H, W, D]\n    \"\"\"\n    tmp = tuple([np.arange(i) for i in shape])\n    coords = np.array(np.meshgrid(*tmp, indexing='ij')).astype(float)\n    for d in range(len(shape)):\n        coords[d] -= ((np.array(shape).astype(float) - 1) / 2)[d]\n    return coords\n\n\ndef elastic_deform_coords(coords, sigmas, alphas, random_state):\n    random_state = np.random.RandomState(random_state)\n    n_dim = coords.shape[0]\n    if not isinstance(alphas, (tuple, list)):\n        alphas = [alphas] * n_dim\n    if not isinstance(sigmas, (tuple, list)):\n        sigmas = [sigmas] * n_dim\n    offsets = []\n    for d in range(n_dim):\n        offset = gaussian_filter((random_state.rand(*coords.shape[1:]) * 2 - 1), sigmas, mode=\"constant\", cval=0)\n        mx = np.max(np.abs(offset))\n        offset = alphas[d] * offset / mx\n        offsets.append(offset)\n    offsets = 
np.array(offsets)\n    coords += offsets\n    return coords\n\n\ndef recenter_coords(coords):\n    n_dim = coords.shape[0]\n    mean = coords.mean(axis=tuple(range(1, len(coords.shape))), keepdims=True)\n    coords -= mean\n    for d in range(n_dim):\n        ctr = int(np.round(coords.shape[d+1]/2))\n        coords[d] += ctr\n    return coords\n\n\ndef rotate3d(img, x, y, z, interpolation=1, border_mode='reflect', value=0):\n    \"\"\"\n    img: [H, W, D(, C)]\n    x, y, z: angle in degree.\n    \"\"\"\n    x, y, z = [np.pi*i/180 for i in [x, y, z]]\n    coords = generate_coords(img.shape[:3])\n    coords = rotate_coords(coords, x, y, z)\n    coords = recenter_coords(coords)\n    if len(img.shape) == 4:\n        num_channels = img.shape[3]\n        res = []\n        for channel in range(num_channels):\n            res.append(map_coordinates(img[:,:,:,channel], coords, order=interpolation, mode=border_mode, cval=value))\n        return np.stack(res, -1)\n    else:\n        return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)\n\n\ndef rotate_coords(coords, angle_x, angle_y, angle_z):\n    rot_matrix = np.identity(len(coords))\n    rot_matrix = rot_matrix @ rot_x(angle_x)\n    rot_matrix = rot_matrix @ rot_y(angle_y)\n    rot_matrix = rot_matrix @ rot_z(angle_z)\n    coords = np.dot(coords.reshape(len(coords), -1).transpose(), rot_matrix).transpose().reshape(coords.shape)\n    return coords\n\n\ndef rot_x(angle):\n    rotation_x = np.array([[1, 0, 0],\n                           [0, np.cos(angle), -np.sin(angle)],\n                           [0, np.sin(angle), np.cos(angle)]])\n    return rotation_x\n\n\ndef rot_y(angle):\n    rotation_y = np.array([[np.cos(angle), 0, np.sin(angle)],\n                           [0, 1, 0],\n                           [-np.sin(angle), 0, np.cos(angle)]])\n    return rotation_y\n\n\ndef rot_z(angle):\n    rotation_z = np.array([[np.cos(angle), -np.sin(angle), 0],\n                           
[np.sin(angle), np.cos(angle), 0],\n                           [0, 0, 1]])\n    return rotation_z\n\n\ndef rescale_warp(img, scale, interpolation=1, border_mode='reflect', value=0):\n    \"\"\"\n    img: [H, W, D(, C)]\n    \"\"\"\n    coords = generate_coords(img.shape[:3])\n    coords = scale_coords(coords, scale)\n    coords = recenter_coords(coords)\n    if len(img.shape) == 4:\n        num_channels = img.shape[3]\n        res = []\n        for channel in range(num_channels):\n            res.append(map_coordinates(img[:,:,:,channel], coords, order=interpolation, mode=border_mode, cval=value))\n        return np.stack(res, -1)\n    else:\n        return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)\n\n\ndef scale_coords(coords, scale):\n    if isinstance(scale, (tuple, list, np.ndarray)):\n        assert len(scale) == len(coords)\n        for i in range(len(scale)):\n            coords[i] *= scale[i]\n    else:\n        coords *= scale\n    return coords\n\n\ndef clamping_crop(img, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_max):\n    d, h, w = img.shape[:3]\n    if sh0_min < 0:\n        sh0_min = 0\n    if sh1_min < 0:\n        sh1_min = 0\n    if sh2_min < 0:\n        sh2_min = 0\n    if sh0_max > d:\n        sh0_max = d\n    if sh1_max > h:\n        sh1_max = h\n    if sh2_max > w:\n        sh2_max = w\n    return img[int(sh0_min): int(sh0_max), int(sh1_min): int(sh1_max), int(sh2_min): int(sh2_max)]\n\n\ndef cutout(img, holes, fill_value=0):\n    # Make a copy of the input image since we don't want to modify it directly\n    img = img.copy()\n    for x1, y1, z1, x2, y2, z2 in holes:\n        img[y1:y2, x1:x2, z1:z2] = fill_value\n    return img\n\ndef clip(img, dtype, maxval):\n    return np.clip(img, 0, maxval).astype(dtype)\n\ndef clipped(func):\n    @wraps(func)\n    def wrapped_function(img, *args, **kwargs):\n        dtype = img.dtype\n        maxval = MAX_VALUES_BY_DTYPE.get(dtype, 1.0)\n        return 
clip(func(img, *args, **kwargs), dtype, maxval)\n\n    return wrapped_function\n\n@clipped\ndef _brightness_contrast_adjust_non_uint(img, alpha=1, beta=0, beta_by_max=False):\n    dtype = img.dtype\n    img = img.astype(\"float32\")\n\n    if alpha != 1:\n        img *= alpha\n    if beta != 0:\n        if beta_by_max:\n            max_value = MAX_VALUES_BY_DTYPE[dtype]\n            img += beta * max_value\n        else:\n            img += beta * np.mean(img)\n    return img\n\n@preserve_shape\ndef _brightness_contrast_adjust_uint(img, alpha=1, beta=0, beta_by_max=False):\n    dtype = np.dtype(\"uint8\")\n\n    max_value = MAX_VALUES_BY_DTYPE[dtype]\n\n    lut = np.arange(0, max_value + 1).astype(\"float32\")\n\n    if alpha != 1:\n        lut *= alpha\n    if beta != 0:\n        if beta_by_max:\n            lut += beta * max_value\n        else:\n            lut += beta * np.mean(img)\n\n    lut = np.clip(lut, 0, max_value).astype(dtype)\n    img = cv2.LUT(img, lut)\n    return img\n\ndef brightness_contrast_adjust(img, alpha=1, beta=0, beta_by_max=False):\n    if img.dtype == np.uint8:\n        return _brightness_contrast_adjust_uint(img, alpha, beta, beta_by_max)\n\n    return _brightness_contrast_adjust_non_uint(img, alpha, beta, beta_by_max)\n\n\ndef _adjust_brightness_torchvision_uint8(img, factor):\n    lut = np.arange(0, 256) * factor\n    lut = np.clip(lut, 0, 255).astype(np.uint8)\n    return cv2.LUT(img, lut)\n\n@preserve_shape\ndef adjust_brightness_torchvision(img, factor):\n    if factor == 0:\n        return np.zeros_like(img)\n    elif factor == 1:\n        return img\n\n    if img.dtype == np.uint8:\n        return _adjust_brightness_torchvision_uint8(img, factor)\n\n    return clip(img * factor, img.dtype, MAX_VALUES_BY_DTYPE[img.dtype])\n\ndef _adjust_contrast_torchvision_uint8(img, factor, mean):\n    lut = np.arange(0, 256) * factor\n    lut = lut + mean * (1 - factor)\n    lut = clip(lut, img.dtype, 255)\n\n    return cv2.LUT(img, 
lut)\n\n@preserve_shape\ndef adjust_contrast_torchvision(img, factor):\n    if factor == 1:\n        return img\n\n    if is_2Dgrayscale_image(img):\n        mean = img.mean()\n    else:\n        mean = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY).mean()\n\n    if factor == 0:\n        return np.full_like(img, int(mean + 0.5), dtype=img.dtype)\n\n    if img.dtype == np.uint8:\n        return _adjust_contrast_torchvision_uint8(img, factor, mean)\n\n    return clip(\n        img.astype(np.float32) * factor + mean * (1 - factor),\n        img.dtype,\n        MAX_VALUES_BY_DTYPE[img.dtype],\n    )\n\n\n@preserve_shape\ndef adjust_saturation_torchvision(img, factor, gamma=0):\n    if factor == 1:\n        return img\n\n    if is_2Dgrayscale_image(img):\n        gray = img\n        return gray\n    else:\n        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n        gray = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)\n\n    if factor == 0:\n        return gray\n\n    result = cv2.addWeighted(img, factor, gray, 1 - factor, gamma=gamma)\n    if img.dtype == np.uint8:\n        return result\n\n    # OpenCV does not clip values for float dtype\n    return clip(result, img.dtype, MAX_VALUES_BY_DTYPE[img.dtype])\n\n\ndef _adjust_hue_torchvision_uint8(img, factor):\n    img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n\n    lut = np.arange(0, 256, dtype=np.int16)\n    lut = np.mod(lut + 180 * factor, 180).astype(np.uint8)\n    img[..., 0] = cv2.LUT(img[..., 0], lut)\n\n    return cv2.cvtColor(img, cv2.COLOR_HSV2RGB)\n\n\ndef adjust_hue_torchvision(img, factor):\n    if is_2Dgrayscale_image(img):\n        return img\n\n    if factor == 0:\n        return img\n\n    if img.dtype == np.uint8:\n        return _adjust_hue_torchvision_uint8(img, factor)\n\n    img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n    img[..., 0] = np.mod(img[..., 0] + factor * 360, 360)\n    return cv2.cvtColor(img, cv2.COLOR_HSV2RGB)\n\n\ndef is_3Drgb_image(image):\n    return len(image.shape) == 4 and image.shape[-1] == 
3\n\ndef is_3Dgrayscale_image(image):\n    return (len(image.shape) == 3) or (len(image.shape) == 4 and image.shape[-1] == 1)\n\ndef is_2Drgb_image(image):\n    return len(image.shape) == 3 and image.shape[-1] == 3\n\ndef is_2Dgrayscale_image(image):\n    return (len(image.shape) == 2) or (len(image.shape) == 3 and image.shape[-1] == 1)\n\ndef _maybe_process_in_chunks(process_fn, **kwargs):\n    \"\"\"\n    Wrap OpenCV function to enable processing images with more than 4 channels.\n    Limitations:\n        This wrapper requires image to be the first argument and rest must be sent via named arguments.\n    Args:\n        process_fn: Transform function (e.g cv2.resize).\n        kwargs: Additional parameters.\n    Returns:\n        numpy.ndarray: Transformed image.\n    \"\"\"\n    def get_num_channels(image):\n        return image.shape[2] if len(image.shape) == 3 else 1\n\n    @wraps(process_fn)\n    def __process_fn(img):\n        num_channels = get_num_channels(img)\n        if num_channels > 4:\n            chunks = []\n            for index in range(0, num_channels, 4):\n                if num_channels - index == 2:\n                    # Many OpenCV functions cannot work with 2-channel images\n                    for i in range(2):\n                        chunk = img[:, :, index + i : index + i + 1]\n                        chunk = process_fn(chunk, **kwargs)\n                        chunk = np.expand_dims(chunk, -1)\n                        chunks.append(chunk)\n                else:\n                    chunk = img[:, :, index : index + 4]\n                    chunk = process_fn(chunk, **kwargs)\n                    chunks.append(chunk)\n            img = np.dstack(chunks)\n        else:\n            img = process_fn(img, **kwargs)\n        return img\n\n    return __process_fn\n\n@preserve_shape\ndef grid_distortion(\n    img,\n    num_steps=10,\n    xsteps=(),\n    ysteps=(),\n    interpolation=cv2.INTER_LINEAR,\n    
border_mode=cv2.BORDER_REFLECT_101,\n    value=None,\n):\n    \"\"\"Perform a grid distortion of an input image.\n    Reference:\n        http://pythology.blogspot.sg/2014/03/interpolation-on-regular-distorted-grid.html\n    \"\"\"\n    height, width = img.shape[:2]\n\n    x_step = width // num_steps\n    xx = np.zeros(width, np.float32)\n    prev = 0\n    for idx in range(num_steps + 1):\n        x = idx * x_step\n        start = int(x)\n        end = int(x) + x_step\n        if end > width:\n            end = width\n            cur = width\n        else:\n            cur = prev + x_step * xsteps[idx]\n\n        xx[start:end] = np.linspace(prev, cur, end - start)\n        prev = cur\n\n    y_step = height // num_steps\n    yy = np.zeros(height, np.float32)\n    prev = 0\n    for idx in range(num_steps + 1):\n        y = idx * y_step\n        start = int(y)\n        end = int(y) + y_step\n        if end > height:\n            end = height\n            cur = height\n        else:\n            cur = prev + y_step * ysteps[idx]\n\n        yy[start:end] = np.linspace(prev, cur, end - start)\n        prev = cur\n\n    map_x, map_y = np.meshgrid(xx, yy)\n    map_x = map_x.astype(np.float32)\n    map_y = map_y.astype(np.float32)\n\n    remap_fn = _maybe_process_in_chunks(\n        cv2.remap,\n        map1=map_x,\n        map2=map_y,\n        interpolation=interpolation,\n        borderMode=border_mode,\n        borderValue=value,\n    )\n    return remap_fn(img)\n\n@preserve_shape\ndef downscale(img, scale, interpolation=cv2.INTER_NEAREST):\n    shape_org = img.shape[:3]\n    shape_down = tuple([int(x*scale) for x in shape_org])\n\n    need_cast = interpolation != cv2.INTER_NEAREST and img.dtype == np.uint8\n    if need_cast:\n        img = to_float(img)\n\n\n\n    downscaled = skt.resize(img, shape_down, order=interpolation, mode='reflect',\n                            cval=0, clip=True, anti_aliasing=False)\n    upscaled = skt.resize(downscaled, shape_org, 
order=interpolation, mode='reflect',\n                            cval=0, clip=True, anti_aliasing=False)\n    if need_cast:\n        upscaled = from_float(np.clip(upscaled, 0, 1), dtype=np.dtype(\"uint8\"))\n    return upscaled\n\ndef glass_blur(img, sigma, max_delta, iterations, dxy, mode):\n    x = cv2.GaussianBlur(np.array(img), sigmaX=sigma, ksize=(0, 0))\n\n    if mode == \"fast\":\n\n        hs = np.arange(img.shape[0] - max_delta, max_delta, -1)\n        ws = np.arange(img.shape[1] - max_delta, max_delta, -1)\n        h = np.tile(hs, ws.shape[0])\n        w = np.repeat(ws, hs.shape[0])\n\n        for i in range(iterations):\n            dy = dxy[:, i, 0]\n            dx = dxy[:, i, 1]\n            x[h, w], x[h + dy, w + dx] = x[h + dy, w + dx], x[h, w]\n\n    elif mode == \"exact\":\n        for ind, (i, h, w) in enumerate(\n            product(\n                range(iterations),\n                range(img.shape[0] - max_delta, max_delta, -1),\n                range(img.shape[1] - max_delta, max_delta, -1),\n            )\n        ):\n            ind = ind if ind < len(dxy) else ind % len(dxy)\n            dy = dxy[ind, i, 0]\n            dx = dxy[ind, i, 1]\n            x[h, w], x[h + dy, w + dx] = x[h + dy, w + dx], x[h, w]\n\n    return cv2.GaussianBlur(x, sigmaX=sigma, ksize=(0, 0))\n\n@preserve_shape\ndef image_compression(img, quality, image_type):\n    if image_type in [\".jpeg\", \".jpg\"]:\n        quality_flag = cv2.IMWRITE_JPEG_QUALITY\n    elif image_type == \".webp\":\n        quality_flag = cv2.IMWRITE_WEBP_QUALITY\n    else:\n        raise NotImplementedError(\"Only '.jpg' and '.webp' compression transforms are implemented. 
\")\n\n    input_dtype = img.dtype\n    needs_float = False\n\n    if input_dtype == np.float32:\n        warn(\n            \"Image compression augmentation \"\n            \"is most effective with uint8 inputs, \"\n            \"{} is used as input.\".format(input_dtype),\n            UserWarning,\n        )\n        img = from_float(img, dtype=np.dtype(\"uint8\"))\n        needs_float = True\n    elif input_dtype not in (np.uint8, np.float32):\n        raise ValueError(\"Unexpected dtype {} for image augmentation\".format(input_dtype))\n\n    _, encoded_img = cv2.imencode(image_type, img, (int(quality_flag), quality))\n    img = cv2.imdecode(encoded_img, cv2.IMREAD_UNCHANGED)\n\n    if needs_float:\n        img = to_float(img, max_value=255)\n    return img\n\ndef from_float(img, dtype, max_value=None):\n    if max_value is None:\n        try:\n            max_value = MAX_VALUES_BY_DTYPE[dtype]\n        except KeyError:\n            raise RuntimeError(\n                \"Can't infer the maximum value for dtype {}. You need to specify the maximum value manually by \"\n                \"passing the max_value argument\".format(dtype)\n            )\n    return (img * max_value).astype(dtype)\n\ndef to_float(img, max_value=None):\n    if max_value is None:\n        try:\n            max_value = MAX_VALUES_BY_DTYPE[img.dtype]\n        except KeyError:\n            raise RuntimeError(\n                \"Can't infer the maximum value for dtype {}. You need to specify the maximum value manually by \"\n                \"passing the max_value argument\".format(img.dtype)\n            )\n    return img.astype(\"float32\") / max_value\n"
  },
  {
    "path": "volumentations/augmentations/transforms.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      
#\n#=================================================================================#\nimport cv2\nimport random\nimport numpy as np\nimport numbers\nfrom enum import Enum, IntEnum\nfrom ..core.transforms_interface import *\nfrom ..augmentations import functional as F\nfrom ..random_utils import *\n\nclass Float(DualTransform):\n    def apply(self, image):\n        return image.astype(np.float32)\n\nclass Contiguous(DualTransform):\n    def apply(self, image):\n        return np.ascontiguousarray(image)\n\n\nclass PadIfNeeded(DualTransform):\n    def __init__(self, shape, border_mode='constant', value=0, mask_value=0, always_apply=False, p=1):\n        super().__init__(always_apply, p)\n        self.shape = shape\n        self.border_mode = border_mode\n        self.value = value\n        self.mask_value = mask_value\n\n    def apply(self, img):\n        return F.pad(img, self.shape, self.border_mode, self.value)\n\n    def apply_to_mask(self, mask):\n        return F.pad(mask, self.shape, self.border_mode, self.mask_value)\n    \n\nclass Blur(ImageOnlyTransform):\n    \"\"\"Blur the input image using a random-sized kernel.\n    Args:\n        blur_limit (int, (int, int)): maximum kernel size for blurring the input image.\n            Should be in range [3, inf). Default: (3, 7).\n        p (float): probability of applying the transform. 
Default: 0.5.\n    Targets:\n        image\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(self, blur_limit=7, always_apply=False, p=0.5):\n        super(Blur, self).__init__(always_apply, p)\n        self.blur_limit = to_tuple(blur_limit, 3)\n\n    def apply(self, image, ksize=3, **params):\n        return F.blur(image, ksize)\n\n    def get_params(self, **data):\n        return {\"ksize\": int(random.choice(np.arange(self.blur_limit[0], self.blur_limit[1] + 1, 2)))}\n\n    def get_transform_init_args_names(self):\n        return (\"blur_limit\",)\n\n\nclass GaussianNoise(Transform):\n    def __init__(self, var_limit=(10.0, 50.0), mean=0, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.var_limit = var_limit\n        self.mean = mean\n\n    def apply(self, img, gauss=None):\n        return F.gaussian_noise(img, gauss=gauss)\n\n    def get_params(self, **data):\n        image = data[\"image\"]\n        var = uniform(self.var_limit[0], self.var_limit[1])\n        sigma = var ** 0.5\n\n        gauss = normal(self.mean, sigma, image.shape).astype(\"float32\")\n        return {\"gauss\": gauss}\n\n\nclass Resize(DualTransform):\n    def __init__(self, shape, interpolation=1, resize_type=1, always_apply=False, p=1):\n        super().__init__(always_apply, p)\n        self.shape = shape\n        self.interpolation = interpolation\n        self.resize_type = resize_type\n\n    def apply(self, img):\n        return F.resize(img, new_shape=self.shape, interpolation=self.interpolation, resize_type=self.resize_type)\n\n    def apply_to_mask(self, mask):\n        return F.resize(mask, new_shape=self.shape, interpolation=0, resize_type=self.resize_type)\n\n\nclass RandomScale(DualTransform):\n    def __init__(self, scale_limit=[0.9, 1.1], interpolation=1, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.scale_limit = scale_limit\n        self.interpolation = interpolation\n\n    
def get_params(self, **data):\n        return {\"scale\": random.uniform(self.scale_limit[0], self.scale_limit[1])}\n\n    def apply(self, img, scale):\n        return F.rescale(img, scale, interpolation=self.interpolation)\n\n    def apply_to_mask(self, mask, scale):\n        return F.rescale(mask, scale, interpolation=0)\n\n\nclass RandomScale2(DualTransform):\n    \"\"\"\n    TODO: compare speeds with version 1.\n    \"\"\"\n    def __init__(self, scale_limit=[0.9, 1.1], interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.scale_limit = scale_limit\n        self.interpolation = interpolation\n        self.border_mode = border_mode\n        self.value = value\n        self.mask_value = mask_value\n\n    def get_params(self, **data):\n        return {\"scale\": random.uniform(self.scale_limit[0], self.scale_limit[1])}\n\n    def apply(self, img, scale):\n        return F.rescale_warp(img, scale, interpolation=self.interpolation, border_mode=self.border_mode, value=self.value)\n\n    def apply_to_mask(self, mask, scale):\n        return F.rescale_warp(mask, scale, interpolation=0, border_mode=self.border_mode, value=self.mask_value)\n\n\nclass RotatePseudo2D(DualTransform):\n    def __init__(self, axes=(0,1), limit=(-90, 90), interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.axes = axes\n        self.limit = limit\n        self.interpolation = interpolation\n        self.border_mode = border_mode\n        self.value = value\n        self.mask_value = mask_value\n\n    def apply(self, img, angle):\n        return F.rotate2d(img, angle, axes=self.axes, reshape=False, interpolation=self.interpolation, border_mode=self.border_mode, value=self.value)\n\n    def apply_to_mask(self, mask, angle):\n        return F.rotate2d(mask, angle, axes=self.axes, reshape=False, 
interpolation=0, border_mode=self.border_mode, value=self.mask_value)\n\n    def get_params(self, **data):\n        return {\"angle\": random.uniform(self.limit[0], self.limit[1])}\n\n\nclass RandomRotate90(DualTransform):\n    def __init__(self, axes=None, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.axes = axes\n\n    def apply(self, img, axes, factor):\n        return np.rot90(img, factor, axes=axes)\n\n    def get_params(self, **data):\n        # Pick predefined axes to rotate around\n        if self.axes is not None:\n            axes = self.axes\n        # Pick random combination of axes to rotate around\n        else:\n            combinations = [(0,1), (1,0), (0,2), (2,0), (1,2), (2,1)]\n            axes = random.choice(combinations)\n        # Define params\n        return {\"factor\": random.randint(0, 3),\n                \"axes\": axes}\n\n\nclass Flip(DualTransform):\n    def __init__(self, axis=None, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.axis = axis\n\n    def apply(self, img, axis):\n        return np.flip(img, axis)\n\n    def get_params(self, **data):\n        # Pick predefined axis to flip\n        if self.axis is not None:\n            axis = self.axis\n        # Pick random combination of axes to flip; choosing it here in\n        # get_params keeps image and mask flipped along the same axes\n        else:\n            combinations = [(0,), (1,), (2,), (0,1), (0,2), (1,2), (0,1,2)]\n            axis = random.choice(combinations)\n        return {\"axis\": axis}\n\n\nclass Normalize(Transform):\n    def __init__(self, range_norm=False, always_apply=True, p=1.0):\n        super().__init__(always_apply, p)\n        self.range_norm = range_norm\n\n    def apply(self, img):\n        return F.normalize(img, range_norm=self.range_norm)\n\n\nclass Transpose(DualTransform):\n    def __init__(self, axes=(1,0,2), always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.axes = axes\n\n    def apply(self, img):\n        return np.transpose(img, self.axes)\n\n\nclass CenterCrop(DualTransform):\n    def __init__(self, shape, 
always_apply=False, p=1.0):\n        super().__init__(always_apply, p)\n        self.shape = shape\n\n    def apply(self, img):\n        return F.center_crop(img, self.shape[0], self.shape[1], self.shape[2])\n\n\nclass RandomResizedCrop(DualTransform):\n    def __init__(self, shape, scale_limit=(0.8, 1.2), interpolation=1, resize_type=1, always_apply=False, p=1.0):\n        super().__init__(always_apply, p)\n        self.shape = shape\n        self.scale_limit = scale_limit\n        self.interpolation = interpolation\n        self.resize_type = resize_type\n\n    def apply(self, img, scale=1, scaled_shape=None, h_start=0, w_start=0, d_start=0):\n        if scaled_shape is None:\n            scaled_shape = self.shape\n        img = F.random_crop(img, scaled_shape[0], scaled_shape[1], scaled_shape[2], h_start, w_start, d_start)\n        return F.resize(img, new_shape=self.shape, interpolation=self.interpolation, resize_type=self.resize_type)\n\n    def apply_to_mask(self, img, scale=1, scaled_shape=None, h_start=0, w_start=0, d_start=0):\n        if scaled_shape is None:\n            scaled_shape = self.shape\n        img = F.random_crop(img, scaled_shape[0], scaled_shape[1], scaled_shape[2], h_start, w_start, d_start)\n        return F.resize(img, new_shape=self.shape, interpolation=0, resize_type=self.resize_type)\n\n    def get_params(self, **data):\n        scale = random.uniform(self.scale_limit[0], self.scale_limit[1])\n        scaled_shape = [int(scale * i) for i in self.shape]\n        return {\n            \"scale\": scale,\n            \"scaled_shape\": scaled_shape,\n            \"h_start\": random.random(),\n            \"w_start\": random.random(),\n            \"d_start\": random.random(),\n        }\n\n\nclass RandomCrop(DualTransform):\n    def __init__(self, shape, always_apply=False, p=1.0):\n        super().__init__(always_apply, p)\n        self.shape = shape\n\n    def apply(self, img, h_start=0, w_start=0, d_start=0):\n        return 
F.random_crop(img, self.shape[0], self.shape[1], self.shape[2], h_start, w_start, d_start)\n\n    def get_params(self, **data):\n        return {\n            \"h_start\": random.random(),\n            \"w_start\": random.random(),\n            \"d_start\": random.random(),\n        }\n\n\nclass CropNonEmptyMaskIfExists(DualTransform):\n    def __init__(self, shape, always_apply=False, p=1.0):\n        super().__init__(always_apply, p)\n        self.height = shape[0]\n        self.width = shape[1]\n        self.depth = shape[2]\n\n    def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):\n        return F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)\n\n    def get_params(self, **data):\n        mask = data[\"mask\"] # [H, W, D]\n        mask_height, mask_width, mask_depth = mask.shape\n\n        if mask.sum() == 0:\n            x_min = random.randint(0, mask_height - self.height)\n            y_min = random.randint(0, mask_width - self.width)\n            z_min = random.randint(0, mask_depth - self.depth)\n        else:\n            non_zero = np.argwhere(mask)\n            x, y, z = random.choice(non_zero)\n            x_min = x - random.randint(0, self.height - 1)\n            y_min = y - random.randint(0, self.width - 1)\n            z_min = z - random.randint(0, self.depth - 1)\n            x_min = np.clip(x_min, 0, mask_height - self.height)\n            y_min = np.clip(y_min, 0, mask_width - self.width)\n            z_min = np.clip(z_min, 0, mask_depth - self.depth)\n\n        x_max = x_min + self.height\n        y_max = y_min + self.width\n        z_max = z_min + self.depth\n\n        return {\n            \"x_min\": x_min, \"x_max\": x_max,\n            \"y_min\": y_min, \"y_max\": y_max,\n            \"z_min\": z_min, \"z_max\": z_max,\n        }\n\n\nclass ResizedCropNonEmptyMaskIfExists(DualTransform):\n    def __init__(self, shape, scale_limit=(0.8, 1.2), interpolation=1, resize_type=1, always_apply=False, p=1.0):\n        
super().__init__(always_apply, p)\n        self.shape = shape\n        self.scale_limit = scale_limit\n        self.interpolation = interpolation\n        self.resize_type = resize_type\n\n    def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):\n        img = F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)\n        return F.resize(img, self.shape, interpolation=self.interpolation, resize_type=self.resize_type)\n\n    def apply_to_mask(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):\n        img = F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)\n        return F.resize(img, self.shape, interpolation=0, resize_type=self.resize_type)\n\n    def get_params(self, **data):\n        mask = data[\"mask\"] # [H, W, D]\n        mask_height, mask_width, mask_depth = mask.shape\n\n        scale = random.uniform(self.scale_limit[0], self.scale_limit[1])\n        height, width, depth = [int(scale * i) for i in self.shape]\n\n        if mask.sum() == 0:\n            x_min = random.randint(0, mask_height - height)\n            y_min = random.randint(0, mask_width - width)\n            z_min = random.randint(0, mask_depth - depth)\n        else:\n            non_zero = np.argwhere(mask)\n            x, y, z = random.choice(non_zero)\n            x_min = x - random.randint(0, height - 1)\n            y_min = y - random.randint(0, width - 1)\n            z_min = z - random.randint(0, depth - 1)\n            x_min = np.clip(x_min, 0, mask_height - height)\n            y_min = np.clip(y_min, 0, mask_width - width)\n            z_min = np.clip(z_min, 0, mask_depth - depth)\n\n        x_max = x_min + height\n        y_max = y_min + width\n        z_max = z_min + depth\n\n        return {\n            \"x_min\": x_min, \"x_max\": x_max,\n            \"y_min\": y_min, \"y_max\": y_max,\n            \"z_min\": z_min, \"z_max\": z_max,\n        }\n\n\nclass RandomGamma(ImageOnlyTransform):\n    \"\"\"\n    Args:\n        gamma_limit 
(float or (float, float)): If gamma_limit is a single float value,\n            the range will be (-gamma_limit, gamma_limit). Default: (80, 120).\n        eps: Deprecated.\n    Targets:\n        image\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(self, gamma_limit=(80, 120), eps=None, always_apply=False, p=0.5):\n        super(RandomGamma, self).__init__(always_apply, p)\n        self.gamma_limit = to_tuple(gamma_limit)\n        self.eps = eps\n\n    def apply(self, img, gamma=1, **params):\n        return F.gamma_transform(img, gamma=gamma)\n\n    def get_params(self, **data):\n        return {\"gamma\": random.randint(self.gamma_limit[0], self.gamma_limit[1]) / 100.0}\n\n    def get_transform_init_args_names(self):\n        return (\"gamma_limit\", \"eps\")\n\n\nclass ElasticTransformPseudo2D(DualTransform):\n    def __init__(self, alpha=1000, sigma=50, alpha_affine=1, approximate=False, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.alpha = alpha\n        self.sigma = sigma\n        self.alpha_affine = alpha_affine\n        self.approximate = approximate\n\n    def apply(self, img, random_state=None):\n        return F.elastic_transform_pseudo2D(img, self.alpha, self.sigma, self.alpha_affine, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_REFLECT_101, value=None, random_state=random_state, approximate=self.approximate)\n\n    def apply_to_mask(self, img, random_state=None):\n        return F.elastic_transform_pseudo2D(img, self.alpha, self.sigma, self.alpha_affine, interpolation=cv2.INTER_NEAREST, border_mode=cv2.BORDER_REFLECT_101, value=None, random_state=random_state, approximate=self.approximate)\n\n    def get_params(self, **data):\n        return {\"random_state\": random.randint(0, 10000)}\n\n\nclass ElasticTransform(DualTransform):\n    def __init__(self, deformation_limits=(0, 0.25), interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):\n        
super().__init__(always_apply, p)\n        self.deformation_limits = deformation_limits\n        self.interpolation = interpolation\n        self.border_mode = border_mode\n        self.value = value\n        self.mask_value = mask_value\n\n    def apply(self, img, sigmas, alphas, random_state=None):\n        return F.elastic_transform(img, sigmas, alphas, interpolation=self.interpolation, random_state=random_state, border_mode=self.border_mode, value=self.value)\n\n    def apply_to_mask(self, img, sigmas, alphas, random_state=None):\n        return F.elastic_transform(img, sigmas, alphas, interpolation=0, random_state=random_state, border_mode=self.border_mode, value=self.mask_value)\n\n    def get_params(self, **data):\n        image = data[\"image\"] # [H, W, D]\n        random_state = random.randint(0, 10000)\n        deformation = random.uniform(*self.deformation_limits)\n        sigmas = [deformation * x for x in image.shape[:3]]\n        alphas = [random.uniform(x/8, x/2) for x in sigmas]\n        return {\n            \"random_state\": random_state,\n            \"sigmas\": sigmas,\n            \"alphas\": alphas,\n        }\n\n\nclass Rotate(DualTransform):\n    def __init__(self, x_limit=(-15,15), y_limit=(-15,15), z_limit=(-15,15), interpolation=1, border_mode='constant', value=0, mask_value=0, always_apply=False, p=0.5):\n        super().__init__(always_apply, p)\n        self.x_limit = x_limit\n        self.y_limit = y_limit\n        self.z_limit = z_limit\n        self.interpolation = interpolation\n        self.border_mode = border_mode\n        self.value = value\n        self.mask_value = mask_value\n\n    def apply(self, img, x, y, z):\n        return F.rotate3d(img, x, y, z, interpolation=self.interpolation, border_mode=self.border_mode, value=self.value)\n\n    def apply_to_mask(self, mask, x, y, z):\n        return F.rotate3d(mask, x, y, z, interpolation=0, border_mode=self.border_mode, value=self.mask_value)\n\n    def get_params(self, 
**data):\n        return {\n            \"x\": random.uniform(self.x_limit[0], self.x_limit[1]),\n            \"y\": random.uniform(self.y_limit[0], self.y_limit[1]),\n            \"z\": random.uniform(self.z_limit[0], self.z_limit[1]),\n        }\n\n\nclass RemoveEmptyBorder(DualTransform):\n    def __init__(self, border_value=0, always_apply=False, p=1.0):\n        super().__init__(always_apply, p)\n        self.border_value = border_value\n\n    def apply(self, img, x_min=0, y_min=0, z_min=0, x_max=0, y_max=0, z_max=0):\n        return F.crop(img, x_min, y_min, z_min, x_max, y_max, z_max)\n\n    def get_params(self, **data):\n        image = data[\"image\"] # [H, W, D, C]\n\n        borders = np.where(image != self.border_value)\n\n        return {\n            \"x_min\": np.min(borders[0]), \"x_max\": np.max(borders[0])+1,\n            \"y_min\": np.min(borders[1]), \"y_max\": np.max(borders[1])+1,\n            \"z_min\": np.min(borders[2]), \"z_max\": np.max(borders[2])+1,\n        }\n\n\nclass RandomCropFromBorders(DualTransform):\n    \"\"\"Randomly crop the input by cutting parts from its borders, without resizing at the end.\n\n    Args:\n        crop_value (float): float value in (0.0, 0.5) range. Default 0.1\n        crop_0_min (float): float value in (0.0, 1.0) range. Default 0.1\n        crop_0_max (float): float value in (0.0, 1.0) range. Default 0.1\n        crop_1_min (float): float value in (0.0, 1.0) range. Default 0.1\n        crop_1_max (float): float value in (0.0, 1.0) range. Default 0.1\n        crop_2_min (float): float value in (0.0, 1.0) range. Default 0.1\n        crop_2_max (float): float value in (0.0, 1.0) range. Default 0.1\n        p (float): probability of applying the transform. 
Default: 1.\n\n    Targets:\n        image, mask, bboxes, keypoints\n\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(\n            self,\n            crop_value=None,\n            crop_0_min=None,\n            crop_0_max=None,\n            crop_1_min=None,\n            crop_1_max=None,\n            crop_2_min=None,\n            crop_2_max=None,\n            always_apply=False,\n            p=1.0\n    ):\n        super(RandomCropFromBorders, self).__init__(always_apply, p)\n        self.crop_0_min = 0.1\n        self.crop_0_max = 0.1\n        self.crop_1_min = 0.1\n        self.crop_1_max = 0.1\n        self.crop_2_min = 0.1\n        self.crop_2_max = 0.1\n        if crop_value is not None:\n            self.crop_0_min = crop_value\n            self.crop_0_max = crop_value\n            self.crop_1_min = crop_value\n            self.crop_1_max = crop_value\n            self.crop_2_min = crop_value\n            self.crop_2_max = crop_value\n        if crop_0_min is not None:\n            self.crop_0_min = crop_0_min\n        if crop_0_max is not None:\n            self.crop_0_max = crop_0_max\n        if crop_1_min is not None:\n            self.crop_1_min = crop_1_min\n        if crop_1_max is not None:\n            self.crop_1_max = crop_1_max\n        if crop_2_min is not None:\n            self.crop_2_min = crop_2_min\n        if crop_2_max is not None:\n            self.crop_2_max = crop_2_max\n\n    def get_params(self, **data):\n        img = data[\"image\"]\n        sh0_min = random.randint(0, int(self.crop_0_min * img.shape[0]))\n        sh0_max = random.randint(max(sh0_min + 1, int((1 - self.crop_0_max) * img.shape[0])), img.shape[0])\n\n        sh1_min = random.randint(0, int(self.crop_1_min * img.shape[1]))\n        sh1_max = random.randint(max(sh1_min + 1, int((1 - self.crop_1_max) * img.shape[1])), img.shape[1])\n\n        sh2_min = random.randint(0, int(self.crop_2_min * img.shape[2]))\n        sh2_max = 
random.randint(max(sh2_min + 1, int((1 - self.crop_2_max) * img.shape[2])), img.shape[2])\n\n        return {\n            \"sh0_min\": sh0_min, \"sh0_max\": sh0_max,\n            \"sh1_min\": sh1_min, \"sh1_max\": sh1_max,\n            \"sh2_min\": sh2_min, \"sh2_max\": sh2_max\n        }\n\n    def apply(self, img, sh0_min=0, sh0_max=0, sh1_min=0, sh1_max=0, sh2_min=0, sh2_max=0, **params):\n        return F.clamping_crop(img, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_max)\n\n    def apply_to_mask(self, mask, sh0_min=0, sh0_max=0, sh1_min=0, sh1_max=0, sh2_min=0, sh2_max=0, **params):\n        return F.clamping_crop(mask, sh0_min, sh1_min, sh2_min, sh0_max, sh1_max, sh2_max)\n\n\nclass GridDropout(DualTransform):\n    \"\"\"GridDropout, drops out rectangular regions of an image and the corresponding mask in a grid fashion.\n    Args:\n        ratio (float): the ratio of the mask holes to the unit_size (same for horizontal and vertical directions).\n            Must be between 0 and 1. Default: 0.5.\n        unit_size_min (int): minimum size of the grid unit. Must be between 2 and the image shorter edge.\n            If 'None', holes_number_x and holes_number_y are used to setup the grid. Default: `None`.\n        unit_size_max (int): maximum size of the grid unit. Must be between 2 and the image shorter edge.\n            If 'None', holes_number_x and holes_number_y are used to setup the grid. Default: `None`.\n        holes_number_x (int): the number of grid units in x direction. Must be between 1 and image width//2.\n            If 'None', grid unit width is set as image_width//10. Default: `None`.\n        holes_number_y (int): the number of grid units in y direction. Must be between 1 and image height//2.\n            If `None`, grid unit height is set equal to the grid unit width or image height, whatever is smaller.\n        holes_number_z (int): the number of grid units in z direction. 
Must be between 1 and image depth//2.\n            If `None`, grid unit depth is set equal to the grid unit width or image depth, whichever is smaller.\n        shift_x (int): offsets of the grid start in x direction from (0,0) coordinate.\n            Clipped between 0 and grid unit width - hole_width. Default: 0.\n        shift_y (int): offsets of the grid start in y direction from (0,0) coordinate.\n            Clipped between 0 and grid unit height - hole_height. Default: 0.\n        shift_z (int): offsets of the grid start in z direction from (0,0) coordinate.\n            Clipped between 0 and grid unit depth - hole_depth. Default: 0.\n        random_offset (boolean): whether to offset the grid randomly between 0 and grid unit size - hole size.\n            If `True`, entered shift_x, shift_y, shift_z are ignored and set randomly. Default: `False`.\n        fill_value (int): value for the dropped pixels. Default: 0.\n        mask_fill_value (int): value for the dropped pixels in mask.\n            If `None`, transformation is not applied to the mask. 
Default: `None`.\n    Targets:\n        image, mask\n    Image types:\n        uint8, float32\n    References:\n        https://arxiv.org/abs/2001.04086\n    \"\"\"\n\n    def __init__(\n        self,\n        ratio: float = 0.5,\n        unit_size_min: int = None,\n        unit_size_max: int = None,\n        holes_number_x: int = None,\n        holes_number_y: int = None,\n        holes_number_z: int = None,\n        shift_x: int = 0,\n        shift_y: int = 0,\n        shift_z: int = 0,\n        random_offset: bool = False,\n        fill_value: int = 0,\n        mask_fill_value: int = None,\n        always_apply: bool = False,\n        p: float = 0.5,\n    ):\n        super(GridDropout, self).__init__(always_apply, p)\n        self.ratio = ratio\n        self.unit_size_min = unit_size_min\n        self.unit_size_max = unit_size_max\n        self.holes_number_x = holes_number_x\n        self.holes_number_y = holes_number_y\n        self.holes_number_z = holes_number_z\n        self.shift_x = shift_x\n        self.shift_y = shift_y\n        self.shift_z = shift_z\n        self.random_offset = random_offset\n        self.fill_value = fill_value\n        self.mask_fill_value = mask_fill_value\n        if not 0 < self.ratio <= 1:\n            raise ValueError(\"ratio must be between 0 and 1.\")\n\n    def apply(self, image, holes=(), **params):\n        return F.cutout(image, holes, self.fill_value)\n\n    def apply_to_mask(self, image, holes=(), **params):\n        if self.mask_fill_value is None:\n            return image\n\n        return F.cutout(image, holes, self.mask_fill_value)\n\n    def get_params(self, **data):\n        img = data[\"image\"]\n        height, width, depth = img.shape[:3]\n        # set grid using unit size limits\n        if self.unit_size_min and self.unit_size_max:\n            if not 2 <= self.unit_size_min <= self.unit_size_max:\n                raise ValueError(\"Max unit size should be >= min size, both at least 2 pixels.\")\n          
  if self.unit_size_max > min(height, width):\n                raise ValueError(\"Grid size limits must be within the shortest image edge.\")\n            unit_width = random.randint(self.unit_size_min, self.unit_size_max)\n            unit_height = unit_width\n            unit_depth = unit_width\n        else:\n            # set grid using holes numbers\n            if self.holes_number_x is None:\n                unit_width = max(2, width // 10)\n            else:\n                if not 1 <= self.holes_number_x <= width // 2:\n                    raise ValueError(\"The hole_number_x must be between 1 and image width//2.\")\n                unit_width = width // self.holes_number_x\n            if self.holes_number_y is None:\n                unit_height = max(min(unit_width, height), 2)\n            else:\n                if not 1 <= self.holes_number_y <= height // 2:\n                    raise ValueError(\"The hole_number_y must be between 1 and image height//2.\")\n                unit_height = height // self.holes_number_y\n            if self.holes_number_z is None:\n                unit_depth = max(min(unit_height, depth), 2)\n            else:\n                if not 1 <= self.holes_number_z <= depth // 2:\n                    raise ValueError(\"The hole_number_z must be between 1 and image depth//2.\")\n                unit_depth = depth // self.holes_number_z\n\n        hole_width = int(unit_width * self.ratio)\n        hole_height = int(unit_height * self.ratio)\n        hole_depth = int(unit_depth * self.ratio)\n        # min 1 pixel and max unit length - 1\n        hole_width = min(max(hole_width, 1), unit_width - 1)\n        hole_height = min(max(hole_height, 1), unit_height - 1)\n        hole_depth = min(max(hole_depth, 1), unit_depth - 1)\n        # set offset of the grid\n        if self.shift_x is None:\n            shift_x = 0\n        else:\n            shift_x = min(max(0, self.shift_x), unit_width - hole_width)\n        if self.shift_y 
is None:\n            shift_y = 0\n        else:\n            shift_y = min(max(0, self.shift_y), unit_height - hole_height)\n        if self.shift_z is None:\n            shift_z = 0\n        else:\n            shift_z = min(max(0, self.shift_z), unit_depth - hole_depth)\n        if self.random_offset:\n            shift_x = random.randint(0, unit_width - hole_width)\n            shift_y = random.randint(0, unit_height - hole_height)\n            shift_z = random.randint(0, unit_depth - hole_depth)\n        holes = []\n        for i in range(width // unit_width + 1):\n            for j in range(height // unit_height + 1):\n                for k in range(depth // unit_depth + 1):\n                    x1 = min(shift_x + unit_width * i, width)\n                    y1 = min(shift_y + unit_height * j, height)\n                    z1 = min(shift_z + unit_depth * k, depth)\n                    x2 = min(x1 + hole_width, width)\n                    y2 = min(y1 + hole_height, height)\n                    z2 = min(z1 + hole_depth, depth)\n                    holes.append((x1, y1, z1, x2, y2, z2))\n\n        return {\"holes\": holes}\n\n    def get_transform_init_args_names(self):\n        return (\n            \"ratio\",\n            \"unit_size_min\",\n            \"unit_size_max\",\n            \"holes_number_x\",\n            \"holes_number_y\",\n            \"holes_number_z\",\n            \"shift_x\",\n            \"shift_y\",\n            \"shift_z\",\n            \"fill_value\",\n            \"mask_fill_value\",\n            \"random_offset\",\n        )\n\n\nclass RandomDropPlane(DualTransform):\n    \"\"\"Randomly drop some planes along a randomly chosen axis.\n\n    Args:\n        plane_drop_prob (float): probability of dropping each plane. Should be in (0.0, 1.0) range. Default: 0.1\n        axes (tuple of int): axes from which the drop axis is chosen. Default: (0,)\n        p (float): probability of applying the transform. 
Default: 1.\n\n    Targets:\n        image, mask\n\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(\n            self,\n            plane_drop_prob=0.1,\n            axes=(0,),\n            always_apply=False,\n            p=1.0\n    ):\n        super(RandomDropPlane, self).__init__(always_apply, p)\n        self.plane_drop_prob = plane_drop_prob\n        self.axes = axes\n\n    def get_params(self, **data):\n        img = data[\"image\"]\n        axis = random.choice(self.axes)\n        r = img.shape[axis]\n        indexes = []\n        for i in range(r):\n            if random.uniform(0, 1) > self.plane_drop_prob:\n                indexes.append(i)\n        if len(indexes) == 0:\n            indexes.append(0)\n\n        return {\n            \"indexes\": indexes, \"axis\": axis,\n        }\n\n    def apply(self, img, indexes=(), axis=0, **params):\n        return np.take(img, indexes, axis=axis)\n\n    def apply_to_mask(self, mask, indexes=(), axis=0, **params):\n        return np.take(mask, indexes, axis=axis)\n\nclass RandomBrightnessContrast(ImageOnlyTransform):\n    \"\"\"Randomly change brightness and contrast of the input image.\n    Args:\n        brightness_limit ((float, float) or float): factor range for changing brightness.\n            If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).\n        contrast_limit ((float, float) or float): factor range for changing contrast.\n            If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).\n        brightness_by_max (Boolean): If True adjust contrast by image dtype maximum,\n            else adjust contrast by image mean.\n        p (float): probability of applying the transform. 
Default: 0.5.\n    Targets:\n        image\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(\n        self,\n        brightness_limit=0.2,\n        contrast_limit=0.2,\n        brightness_by_max=True,\n        always_apply=False,\n        p=0.5,\n    ):\n        super(RandomBrightnessContrast, self).__init__(always_apply, p)\n        self.brightness_limit = to_tuple(brightness_limit)\n        self.contrast_limit = to_tuple(contrast_limit)\n        self.brightness_by_max = brightness_by_max\n\n    def apply(self, img, alpha=1.0, beta=0.0, **params):\n        return F.brightness_contrast_adjust(img, alpha, beta, self.brightness_by_max)\n\n    def get_params(self, **data):\n        return {\n            \"alpha\": 1.0 + random.uniform(self.contrast_limit[0], self.contrast_limit[1]),\n            \"beta\": 0.0 + random.uniform(self.brightness_limit[0], self.brightness_limit[1]),\n        }\n\n    def get_transform_init_args_names(self):\n        return (\"brightness_limit\", \"contrast_limit\", \"brightness_by_max\")\n\n\nclass ColorJitter(ImageOnlyTransform):\n    \"\"\"Randomly changes the brightness, contrast, and saturation of an image. Compared to ColorJitter from torchvision,\n    this transform gives a little bit different results because Pillow (used in torchvision) and OpenCV (used in\n    Albumentations) transform an image to HSV format by different formulas. Another difference - Pillow uses uint8\n    overflow, but we use value saturation.\n    Args:\n        brightness (float or tuple of float (min, max)): How much to jitter brightness.\n            brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness]\n            or the given [min, max]. Should be non negative numbers.\n        contrast (float or tuple of float (min, max)): How much to jitter contrast.\n            contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast]\n            or the given [min, max]. 
Should be non negative numbers.\n        saturation (float or tuple of float (min, max)): How much to jitter saturation.\n            saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation]\n            or the given [min, max]. Should be non negative numbers.\n        hue (float or tuple of float (min, max)): How much to jitter hue.\n            hue_factor is chosen uniformly from [-hue, hue] or the given [min, max].\n            Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.\n    \"\"\"\n\n    def __init__(\n        self,\n        brightness=0.2,\n        contrast=0.2,\n        saturation=0.2,\n        hue=0.2,\n        always_apply=False,\n        p=0.5,\n    ):\n        super(ColorJitter, self).__init__(always_apply=always_apply, p=p)\n\n        self.brightness = self.__check_values(brightness, \"brightness\")\n        self.contrast = self.__check_values(contrast, \"contrast\")\n        self.saturation = self.__check_values(saturation, \"saturation\")\n        self.hue = self.__check_values(hue, \"hue\", offset=0, bounds=[-0.5, 0.5], clip=False)\n\n    @staticmethod\n    def __check_values(value, name, offset=1, bounds=(0, float(\"inf\")), clip=True):\n        if isinstance(value, numbers.Number):\n            if value < 0:\n                raise ValueError(\"If {} is a single number, it must be non negative.\".format(name))\n            value = [offset - value, offset + value]\n            if clip:\n                value[0] = max(value[0], 0)\n        elif isinstance(value, (tuple, list)) and len(value) == 2:\n            if not bounds[0] <= value[0] <= value[1] <= bounds[1]:\n                raise ValueError(\"{} values should be between {}\".format(name, bounds))\n        else:\n            raise TypeError(\"{} should be a single number or a list/tuple with length 2.\".format(name))\n\n        return value\n\n    def get_params(self, **data):\n        brightness = random.uniform(self.brightness[0], self.brightness[1])\n  
      contrast = random.uniform(self.contrast[0], self.contrast[1])\n        saturation = random.uniform(self.saturation[0], self.saturation[1])\n        hue = random.uniform(self.hue[0], self.hue[1])\n\n        transforms = [\n            lambda x: F.adjust_brightness_torchvision(x, brightness),\n            lambda x: F.adjust_contrast_torchvision(x, contrast),\n            lambda x: F.adjust_saturation_torchvision(x, saturation),\n            lambda x: F.adjust_hue_torchvision(x, hue),\n        ]\n        random.shuffle(transforms)\n\n        return {\"transforms\": transforms}\n\n    def apply(self, img, transforms=(), **params):\n        if not F.is_3Drgb_image(img) and not F.is_3Dgrayscale_image(img):\n            raise TypeError(\"ColorJitter transformation expects 1-channel or 3-channel images.\")\n\n        # apply the shuffled transforms sequentially so each one operates on the previous result\n        img_transformed = img.astype(np.float32)\n        for transform in transforms:\n            for slice in range(img_transformed.shape[0]):\n                img_transformed[slice,:,:] = transform(img_transformed[slice,:,:])\n        return img_transformed\n\n    def get_transform_init_args_names(self):\n        return (\"brightness\", \"contrast\", \"saturation\", \"hue\")\n\n\nclass GridDistortion(DualTransform):\n    \"\"\"\n    Args:\n        num_steps (int): count of grid cells on each side.\n        distort_limit (float, (float, float)): If distort_limit is a single float, the range\n            will be (-distort_limit, distort_limit). Default: (-0.3, 0.3).\n        interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:\n            cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4.\n            Default: cv2.INTER_LINEAR.\n        border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. 
Should be one of:\n            cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101.\n            Default: cv2.BORDER_REFLECT_101\n        value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT.\n        mask_value (int, float,\n                    list of ints,\n                    list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.\n    Targets:\n        image, mask\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(\n        self,\n        num_steps=5,\n        distort_limit=0.3,\n        interpolation=cv2.INTER_LINEAR,\n        border_mode=cv2.BORDER_REFLECT_101,\n        value=None,\n        mask_value=None,\n        always_apply=False,\n        p=0.5,\n    ):\n        super(GridDistortion, self).__init__(always_apply, p)\n        self.num_steps = num_steps\n        self.distort_limit = to_tuple(distort_limit)\n        self.interpolation = interpolation\n        self.border_mode = border_mode\n        self.value = value\n        self.mask_value = mask_value\n\n    def apply(self, img, stepsx=(), stepsy=(), interpolation=cv2.INTER_LINEAR, **params):\n        img_transformed = np.zeros(img.shape, dtype=img.dtype)\n        for slice in range(img.shape[0]):\n            img_transformed[slice,:,:] = F.grid_distortion(img[slice,:,:],\n                                                           self.num_steps,\n                                                           stepsx,\n                                                           stepsy,\n                                                           interpolation,\n                                                           self.border_mode,\n                                                           self.value)\n        return img_transformed\n\n    def apply_to_mask(self, img, stepsx=(), stepsy=(), **params):\n        img_transformed = np.zeros(img.shape, 
dtype=img.dtype)\n        for slice in range(img.shape[0]):\n            img_transformed[slice,:,:] = F.grid_distortion(img[slice,:,:],\n                                                           self.num_steps,\n                                                           stepsx,\n                                                           stepsy,\n                                                           cv2.INTER_NEAREST,\n                                                           self.border_mode,\n                                                           self.mask_value)\n        return img_transformed\n\n    def get_params(self, **data):\n        stepsx = [1 + random.uniform(self.distort_limit[0], self.distort_limit[1]) for i in range(self.num_steps + 1)]\n        stepsy = [1 + random.uniform(self.distort_limit[0], self.distort_limit[1]) for i in range(self.num_steps + 1)]\n        return {\"stepsx\": stepsx, \"stepsy\": stepsy}\n\n    def get_transform_init_args_names(self):\n        return (\n            \"num_steps\",\n            \"distort_limit\",\n            \"interpolation\",\n            \"border_mode\",\n            \"value\",\n            \"mask_value\",\n        )\n\nclass Downscale(ImageOnlyTransform):\n    \"\"\"Decreases image quality by downscaling and upscaling back.\n    Args:\n        scale_min (float): lower bound on the image scale. Should be < 1.\n        scale_max (float):  upper bound on the image scale. Should be < 1.\n        interpolation: cv2 interpolation method. 
cv2.INTER_NEAREST by default\n    Targets:\n        image\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    def __init__(\n        self,\n        scale_min=0.25,\n        scale_max=0.25,\n        interpolation=cv2.INTER_NEAREST,\n        always_apply=False,\n        p=0.5,\n    ):\n        super(Downscale, self).__init__(always_apply, p)\n        if scale_min > scale_max:\n            raise ValueError(\"Expected scale_min be less or equal scale_max, got {} {}\".format(scale_min, scale_max))\n        if scale_max >= 1:\n            raise ValueError(\"Expected scale_max to be less than 1, got {}\".format(scale_max))\n        self.scale_min = scale_min\n        self.scale_max = scale_max\n        self.interpolation = interpolation\n\n    def apply(self, image, scale, interpolation, **params):\n        return F.downscale(image, scale=scale, interpolation=interpolation)\n\n    def get_params(self, **data):\n        return {\n            \"scale\": random.uniform(self.scale_min, self.scale_max),\n            \"interpolation\": self.interpolation,\n        }\n\n    def get_transform_init_args_names(self):\n        return \"scale_min\", \"scale_max\", \"interpolation\"\n\nclass GlassBlur(Blur):\n    \"\"\"Apply glass noise to the input image.\n    Args:\n        sigma (float): standard deviation for Gaussian kernel.\n        max_delta (int): max distance between pixels which are swapped.\n        iterations (int): number of repeats.\n            Should be in range [1, inf). Default: (2).\n        mode (str): mode of computation: fast or exact. Default: \"fast\".\n        p (float): probability of applying the transform. 
Default: 0.5.\n    Targets:\n        image\n    Image types:\n        uint8, float32\n    Reference:\n    |  https://arxiv.org/abs/1903.12261\n    |  https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py\n    \"\"\"\n\n    def __init__(\n        self,\n        sigma=0.7,\n        max_delta=4,\n        iterations=2,\n        always_apply=False,\n        mode=\"fast\",\n        p=0.5,\n    ):\n        super(GlassBlur, self).__init__(always_apply=always_apply, p=p)\n        if iterations < 1:\n            raise ValueError(\"Iterations should be greater than or equal to 1, but got {}\".format(iterations))\n\n        if mode not in [\"fast\", \"exact\"]:\n            raise ValueError(\"Mode should be 'fast' or 'exact', but got {}\".format(mode))\n\n        self.sigma = sigma\n        self.max_delta = max_delta\n        self.iterations = iterations\n        self.mode = mode\n\n    def apply(self, img, dxy, **data):\n        img_blurred = np.zeros(img.shape, dtype=img.dtype)\n        for slice in range(img.shape[0]):\n            img_processed = F.glass_blur(img[slice,:,:],\n                                         self.sigma,\n                                         self.max_delta,\n                                         self.iterations,\n                                         dxy,\n                                         self.mode)\n            if len(img.shape) == 4 and img.shape[-1] == 1:\n                img_blurred[slice,:,:] = np.reshape(img_processed,\n                                                    img_processed.shape + (1,))\n            else:\n                img_blurred[slice,:,:] = img_processed\n        return img_blurred\n\n    def get_params(self, **data):\n        img = data[\"image\"]\n\n        # generate array containing all necessary values for transformations\n        width_pixels = img.shape[1] - self.max_delta * 2\n        height_pixels = img.shape[2] - self.max_delta * 2\n        total_pixels = width_pixels * 
height_pixels\n        dxy = randint(-self.max_delta, self.max_delta, size=(total_pixels, self.iterations, 2))\n\n        return {\"dxy\": dxy}\n\n    def get_transform_init_args_names(self):\n        return (\"sigma\", \"max_delta\", \"iterations\", \"mode\")\n\n    @property\n    def targets_as_params(self):\n        return [\"image\"]\n\nclass ImageCompression(ImageOnlyTransform):\n    \"\"\"Decrease image quality by Jpeg or WebP compression.\n    Args:\n        quality_lower (float): lower bound on the image quality.\n                               Should be in [0, 100] range for jpeg and [1, 100] for webp.\n        quality_upper (float): upper bound on the image quality.\n                               Should be in [0, 100] range for jpeg and [1, 100] for webp.\n        compression_type (ImageCompressionType): should be ImageCompressionType.JPEG or ImageCompressionType.WEBP.\n            Default: ImageCompressionType.JPEG\n    Targets:\n        image\n    Image types:\n        uint8, float32\n    \"\"\"\n\n    class ImageCompressionType(IntEnum):\n        JPEG = 0\n        WEBP = 1\n\n    def __init__(\n        self,\n        quality_lower=99,\n        quality_upper=100,\n        compression_type=ImageCompressionType.JPEG,\n        always_apply=False,\n        p=0.5,\n    ):\n        super(ImageCompression, self).__init__(always_apply, p)\n\n        self.compression_type = ImageCompression.ImageCompressionType(compression_type)\n        low_thresh_quality_assert = 0\n\n        if self.compression_type == ImageCompression.ImageCompressionType.WEBP:\n            low_thresh_quality_assert = 1\n\n        if not low_thresh_quality_assert <= quality_lower <= 100:\n            raise ValueError(\"Invalid quality_lower. Got: {}\".format(quality_lower))\n        if not low_thresh_quality_assert <= quality_upper <= 100:\n            raise ValueError(\"Invalid quality_upper. 
Got: {}\".format(quality_upper))\n\n        self.quality_lower = quality_lower\n        self.quality_upper = quality_upper\n\n    def apply(self, img, quality=100, image_type=\".jpg\", **params):\n        if not F.is_3Drgb_image(img) and not F.is_3Dgrayscale_image(img):\n            raise TypeError(\"ImageCompression transformation expects 1-channel or 3-channel images.\")\n\n        img_transformed = np.zeros(img.shape, dtype=img.dtype)\n        for slice in range(img.shape[0]):\n            img_transformed[slice,:,:] = F.image_compression(img[slice,:,:],\n                                                             quality,\n                                                             image_type)\n        return img_transformed\n\n    def get_params(self, **data):\n        image_type = \".jpg\"\n\n        if self.compression_type == ImageCompression.ImageCompressionType.WEBP:\n            image_type = \".webp\"\n\n        return {\n            \"quality\": random.randint(self.quality_lower, self.quality_upper),\n            \"image_type\": image_type,\n        }\n\n    def get_transform_init_args(self):\n        return {\n            \"quality_lower\": self.quality_lower,\n            \"quality_upper\": self.quality_upper,\n            \"compression_type\": self.compression_type.value,\n        }\n
  },
  {
    "path": "volumentations/core/__init__.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      #\n#=================================================================================#\n"
  },
  {
    "path": "volumentations/core/composition.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      
#\n#=================================================================================#\nimport random\nfrom ..augmentations import transforms as T\n\n\nclass Compose:\n    def __init__(self, transforms, p=1.0, targets=[['image'],['mask']]):\n        assert 0 <= p <= 1\n        self.transforms = [T.Float(always_apply=True)] + transforms + [T.Contiguous(always_apply=True)]\n        self.p = p\n        self.targets = targets\n\n    def get_always_apply_transforms(self):\n        res = []\n        for tr in self.transforms:\n            if tr.always_apply:\n                res.append(tr)\n        return res\n\n    def __call__(self, force_apply=False, **data):\n        need_to_run = force_apply or random.random() < self.p\n        transforms = self.transforms if need_to_run else self.get_always_apply_transforms()\n\n        for tr in transforms:\n            data = tr(force_apply, self.targets, **data)\n\n        return data\n\n\nclass ComposeChoice:\n    def __init__(self, transforms, p=1.0, n=1, targets=[['image'], ['mask']]):\n        assert 0 <= p <= 1\n        self.transforms = transforms\n        self.p = p\n        self.n = n\n        self.targets = targets\n\n    def __call__(self, **data):\n        if random.random() > self.p:\n            return data\n\n        transforms = random.sample(self.transforms, self.n)\n        transforms = [T.Float()] + transforms + [T.Contiguous()]\n\n        for tr in transforms:\n            data = tr(True, self.targets, **data)\n\n        return data\n"
  },
  {
    "path": "volumentations/core/transforms_interface.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      
#\n#=================================================================================#\nimport random\nimport numpy as np\nfrom typing import Sequence\n\n# Debug flag: when True, each applied transform prints its name and parameters\nVERBOSE = False\n\n\ndef to_tuple(param, low=None, bias=None):\n    \"\"\"Convert the input argument to a (min, max) tuple.\n\n    Args:\n        param (scalar, tuple or list of 2+ elements): Input value.\n            If the value is a scalar, the result is (-param, param),\n            or (min(low, param), max(low, param)) when low is given.\n            If the value is a sequence, it is returned as a tuple.\n        low: Optional second element of the tuple; mutually exclusive with bias.\n        bias: Optional offset added to each element of the result.\n\n    Examples:\n        to_tuple(10)             -> (-10, 10)\n        to_tuple(10, low=2)      -> (2, 10)\n        to_tuple((1, 2), bias=1) -> (2, 3)\n    \"\"\"\n    if low is not None and bias is not None:\n        raise ValueError(\"Arguments low and bias are mutually exclusive\")\n\n    if param is None:\n        return param\n\n    if isinstance(param, (int, float)):\n        if low is None:\n            param = -param, +param\n        else:\n            param = (low, param) if low < param else (param, low)\n    elif isinstance(param, Sequence):\n        param = tuple(param)\n    else:\n        raise ValueError(\"Argument param must be either scalar (int, float) or tuple\")\n\n    if bias is not None:\n        return tuple(bias + x for x in param)\n\n    return tuple(param)\n\n\nclass Transform:\n    def __init__(self, always_apply=False, p=0.5):\n        assert 0 <= p <= 1\n        self.p = p\n        self.always_apply = always_apply\n\n    def __call__(self, force_apply, targets, **data):\n        if force_apply or self.always_apply or random.random() < self.p:\n            params = self.get_params(**data)\n\n            if VERBOSE:\n                print('RUN', self.__class__.__name__, params)\n\n            for k, v in data.items():\n                if k in targets[0]:\n                    data[k] = self.apply(v, **params)\n                else:\n                    data[k] = v\n\n        return data\n\n    def get_params(self, **data):\n        \"\"\"Return parameters shared by every target of a single apply call (usually random values).\"\"\"\n        return {}\n\n    def apply(self, volume, **params):\n        raise NotImplementedError\n\n\nclass DualTransform(Transform):\n    \"\"\"Transform applied to both the image and its mask.\"\"\"\n\n    def __call__(self, force_apply, targets, **data):\n        if force_apply or self.always_apply or random.random() < self.p:\n            params = self.get_params(**data)\n\n            if VERBOSE:\n                print('RUN', self.__class__.__name__, params)\n\n            for k, v in data.items():\n                if k in targets[0]:\n                    data[k] = self.apply(v, **params)\n                elif k in targets[1]:\n                    data[k] = self.apply_to_mask(v, **params)\n                else:\n                    data[k] = v\n\n        return data\n\n    def apply_to_mask(self, mask, **params):\n        return self.apply(mask, **params)\n\n\nclass ImageOnlyTransform(Transform):\n    \"\"\"Transform applied to the image only.\"\"\"\n\n    @property\n    def targets(self):\n        return {\"image\": self.apply}\n"
  },
  {
    "path": "volumentations/random_utils.py",
    "content": "#=================================================================================#\n#  Author:       Pavel Iakubovskii, ZFTurbo, ashawkey, Dominik Müller             #\n#  Copyright:    albumentations:    : https://github.com/albumentations-team      #\n#                Pavel Iakubovskii  : https://github.com/qubvel                   #\n#                ZFTurbo            : https://github.com/ZFTurbo                  #\n#                ashawkey           : https://github.com/ashawkey                 #\n#                Dominik Müller     : https://github.com/muellerdo                #\n#                                                                                 #\n#  Volumentations History:                                                        #\n#       - Original:                 https://github.com/albumentations-team/album  #\n#                                   entations                                     #\n#       - 3D Conversion:            https://github.com/ashawkey/volumentations    #\n#       - Continued Development:    https://github.com/ZFTurbo/volumentations     #\n#       - Enhancements:             https://github.com/qubvel/volumentations      #\n#       - Further Enhancements:     https://github.com/muellerdo/volumentations   #\n#                                                                                 #\n#  MIT License.                                                                   
#\n#                                                                                 #\n#  Permission is hereby granted, free of charge, to any person obtaining a copy   #\n#  of this software and associated documentation files (the \"Software\"), to deal  #\n#  in the Software without restriction, including without limitation the rights   #\n#  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell      #\n#  copies of the Software, and to permit persons to whom the Software is          #\n#  furnished to do so, subject to the following conditions:                       #\n#                                                                                 #\n#  The above copyright notice and this permission notice shall be included in all #\n#  copies or substantial portions of the Software.                                #\n#                                                                                 #\n#  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR     #\n#  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,       #\n#  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE    #\n#  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER         #\n#  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,  #\n#  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE  #\n#  SOFTWARE.                                                                      
#\n#=================================================================================#\nimport numpy as np\nfrom typing import Optional, Sequence, Union, Type, Any\nimport random as py_random\n\nNumType = Union[int, float, np.ndarray]\nIntNumType = Union[int, np.ndarray]\nSize = Union[int, Sequence[int]]\n\n\ndef get_random_state() -> np.random.RandomState:\n    \"\"\"Return a NumPy RandomState seeded from Python's global random generator,\n    so seeding the random module also makes NumPy-based sampling reproducible.\"\"\"\n    return np.random.RandomState(py_random.randint(0, (1 << 32) - 1))\n\n\ndef randint(\n    low: IntNumType,\n    high: Optional[IntNumType] = None,\n    size: Optional[Size] = None,\n    dtype: Type = np.int32,\n    random_state: Optional[np.random.RandomState] = None,\n) -> Any:\n    \"\"\"Draw random integers from [low, high); if high is None, from [0, low).\"\"\"\n    if random_state is None:\n        random_state = get_random_state()\n    return random_state.randint(low, high, size, dtype)\n\n\ndef uniform(\n    low: NumType = 0.0,\n    high: NumType = 1.0,\n    size: Optional[Size] = None,\n    random_state: Optional[np.random.RandomState] = None,\n) -> Any:\n    \"\"\"Draw samples from a uniform distribution over [low, high).\"\"\"\n    if random_state is None:\n        random_state = get_random_state()\n    return random_state.uniform(low, high, size)\n\n\ndef normal(\n    loc: NumType = 0.0,\n    scale: NumType = 1.0,\n    size: Optional[Size] = None,\n    random_state: Optional[np.random.RandomState] = None,\n) -> Any:\n    \"\"\"Draw samples from a normal distribution with mean loc and std scale.\"\"\"\n    if random_state is None:\n        random_state = get_random_state()\n    return random_state.normal(loc, scale, size)\n"
  }
]