[
  {
    "path": ".github/workflows/build_nnef.yml",
    "content": "name: Build, test and publish nnef\n\non:\n  push:\n    tags:\n      - 'nnef-v[0-9]+.[0-9]+.[0-9]+'\n\n\njobs:\n  build_wheels:\n    name: Build nnef wheels on ${{ matrix.os }}\n    runs-on: ${{ matrix.os }}\n    strategy:\n      matrix:\n        os:\n          - ubuntu-latest\n          - windows-latest\n          - macos-latest\n          - macos-14\n\n    steps:\n      - uses: actions/checkout@v4\n\n      - name: Build wheels for nnef\n        uses: pypa/cibuildwheel@v3.4.0\n        with:\n          package-dir: nnef-pyproject\n          output-dir: dist/\n          config-file: nnef-pyproject/pyproject.toml\n        env:\n          CIBW_BUILD: \"cp38-* cp39-* cp310-* cp311-* cp312-* cp313-* cp314-*\"\n\n      - uses: actions/upload-artifact@v4\n        with:\n          name: dist-${{ matrix.os }}-${{ github.ref_name }}\n          path: ./dist/*.whl\n\n\n  build_sdist:\n    name: Build nnef sdist\n    runs-on: ubuntu-22.04\n    steps:\n      - uses: actions/checkout@v4\n      - name: Set up Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: \"3.7\"\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          pip install build\n      - name: Build package\n        run: python -m build ./nnef-pyproject/ --sdist --outdir ./dist\n\n      - uses: actions/upload-artifact@v4\n        with:\n          name: dist-${{ github.ref_name }}\n          path: ./dist/*.tar.gz\n\n  publish:\n    name: Publish nnef\n    runs-on: ubuntu-latest\n    needs: [build_wheels, build_sdist]\n    steps:\n      - name: Download dist/\n        uses: actions/download-artifact@v4\n        with:\n          path: dist\n          merge-multiple: true\n\n      - name: publish to PyPI\n        uses: pypa/gh-action-pypi-publish@v1.13.0\n        with:\n          user: __token__\n          password: ${{ secrets.PYPI_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/build_nnef_tools.yml",
    "content": "name: Build, test and publish nnef_tools\n\non:\n  push:\n    tags:\n      - 'nnef_tools-v[0-9]+.[0-9]+.[0-9]+'\n\n\njobs:\n  build_nnef_tools:\n    name: Build and publish nnef_tools\n    runs-on: \"ubuntu-latest\"\n    steps:\n      - uses: actions/checkout@v4\n      - name: Set up Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: \"3.10\"\n\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          pip install build pytest pytest-xdist\n\n      - name: Build package\n        run: python -m build ./nnef_tools-pyproject/ --outdir ./dist/\n\n      - name: Publish artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: dist-${{ github.ref_name }}\n          path: ./dist/*\n\n      - name: Install\n        run: python -m pip install ./nnef-pyproject/ ./nnef_tools-pyproject[full]\n\n      - name: Test\n        run: python -m pytest ./nnef_tools-pyproject/tests/ -n auto\n\n      - name: Publish to PyPI\n        uses: pypa/gh-action-pypi-publish@v1.13.0\n        with:\n          user: __token__\n          password: ${{ secrets.PYPI_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "*.pyc\n__pycache__\n.idea\n/_models\n/out\n/*/build\n/*/dist\n*.egg-info\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "A reminder that this issue tracker is managed by the Khronos Group. Interactions here should follow the Khronos Code of Conduct (https://www.khronos.org/developers/code-of-conduct), which prohibits aggressive or derogatory language. Please keep the discussion friendly and civil.\n"
  },
  {
    "path": "README.md",
    "content": "[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n\n<p align=\"center\"><img src=\"https://www.khronos.org/images/jcogs_img/cache/nnef_500px_apr17_-_28de80_-_3c2b17797282ce265889b88b2035b24403f2d049.png\" /></p>\n\n**Development of the latest tools related to version 2.0 of the NNEF specification draft can be found on branch [v2.0](https://github.com/KhronosGroup/NNEF-Tools/tree/v2.0).**\n\n# NNEF-Tools\n\nNNEF reduces machine learning deployment fragmentation by enabling a rich mix of neural network training tools and inference engines to be used by applications across a diverse range of devices and platforms.\n\nThis repository contains tools to generate and consume NNEF documents, such as a parser (C++ and Python) that can be included in consumer applications and converters for deep learning frameworks.\n\n* [NNEF Model Zoo](models#nnef-model-zoo)\n* [NNEF Tools](nnef_tools-pyproject#nnef-tools)\n* [NNEF Parser](nnef-pyproject#nnef-parser---repository)\n\n\n## NNEF Model Zoo\nA **Model Zoo** is now available; the 'models' folder contains a variety of [NNEF models](models#nnef-model-zoo) converted from various sources.\n\n## NNEF Tools\n[NNEF Tools](nnef_tools-pyproject#nnef-tools) folder contains tools to convert pre-trained models in `tensorFlow`/`caffe`/`caffe2`/`ONNX` to NNEF format.\n\n## NNEF Parser\n[NNEF Parser](nnef-pyproject#nnef-parser---repository) folder contains `C++` and `Python` source code for a sample NNEF graph parser.\n\n## Release Notes\n\n### Added new operators in spec version 1.0.4 (06.15.2021)\n\nFollowing the update of the NNEF specification to version 1.0.4, conversion for the corresponding operators has been added. 
Furthermore, error handling of non-convertible models has been greatly enhanced, with error messages detailing the exact cause of failure for all non-convertible operations, listed before conversion is started.\n\n### Reworked NNEF Tools (10.21.2020)\n\nThe tools for converting models to NNEF and transforming NNEF models have been thoroughly reworked to make them more robust, more unified, and easier to maintain. The basic functionality of the main scripts has been kept; however, their parameterization has been simplified and unified in some places. Please refer to the readme and the help (`-h` option) of the respective scripts for more details. The scripts cover the following major areas of functionality: model conversion, optimization, execution and visualization. A GMAC calculator is also provided, and further utility scripts may be added in the future.\n\n### Change in quantization information in binary files (06.12.2020)\n\nAccording to the change in version 1.0.3 of the NNEF specification, quantization algorithm information has been deprecated in the tensor binary file format. The tensor binary only stores the item-type of the tensor data, and the binary reader does not return quantization information (formerly also called 'compression' info). Furthermore, the mapping between stored item-types and data-types in the structural description has been clarified, so that the reader of a tensor binary can tell what the data-type of the read tensor is. This enhances the reader, as it can now properly map the binary data to C++ or Python NumPy types upon reading. The C++ code has been updated to perform such a mapping, and is now able to return a typed array instead of just plain bytes.\n\n### Change in shape inference compared to previous version (04.10.2019)\n\nAccording to a change in version 1.0.1 of the NNEF specification, the `shape_of` operator in NNEF syntax is deprecated, and the parser does not support it. 
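\n\nThe decoupled parse-then-infer flow described in this note can be illustrated with a minimal, self-contained sketch. This is not the actual parser API; the op list, shape table, and rule names below are invented for illustration only:\n\n

```python
# Toy illustration of decoupled shape inference (not the real parser API):
# operations are already parsed into a list; shapes are filled in a second
# pass, with a registry hook for custom operations -- analogous to the
# function pointers mentioned above.

def infer_shapes(ops, shapes, custom=None):
    rules = {
        'relu': lambda inputs: shapes[inputs[0]],                      # elementwise: shape unchanged
        'concat': lambda inputs: [sum(shapes[i][0] for i in inputs)],  # rank-1 concat only
    }
    rules.update(custom or {})
    for name, inputs, output in ops:
        shapes[output] = rules[name](inputs)
    return shapes

# parsing succeeded even though 'my_op' has no built-in shape rule;
# its rule is supplied only when inference is run
shapes = {'a': [2], 'b': [3]}
ops = [('concat', ['a', 'b'], 'c'), ('my_op', ['c'], 'd')]
infer_shapes(ops, shapes, custom={'my_op': lambda inputs: shapes[inputs[0]]})
print(shapes['c'], shapes['d'])  # [5] [5]
```

\n\nIn the real C++ API, `nnef::infer_shapes` plays the role of this second pass.\n\n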
Dropping `shape_of` enables the decoupling of parsing from shape inference, allowing parsing to succeed even if shape information is not available for all operations, such as custom-defined operations declared before the graph definition. Shape inference can still be run after parsing, and it can furthermore be customized (via function pointers) for custom-defined operations.\n\n### TENSOR BINARY BUG FIX (10.19.2018)\n\nThere was a bug in the Python code that reads/writes the tensor binary files (the header contained 4 extra padding bytes, therefore not conforming to the spec). The code has been updated to read/write and _check_ the proper header size. As a consequence, any files written out with the code that contained the bug cannot be read back with the updated code. To aid the use of such existing files, a script called `fix_nnef_binary_size.py` was created to remove the excess 4 bytes from existing NNEF files. The script is located in the root folder of this repo and has no dependencies (not even the NNEF parser). It can be run on the main folder of an NNEF model, and it fixes all binary files in the folder. If it is run on an NNEF model that does not contain the bug, it does nothing. It can be used as follows:\n```\npython fix_nnef_binary_size.py my_nnef_model_folder\n```\nSuch an invocation fixes the files in place. Optionally, a second argument can be supplied to the script to write the fixed files to a different output path. In this case, the script copies all non-binary files (such as graph.nnef) to the target folder, so the resulting folder contains the whole valid model.\n"
  },
  {
    "path": "_config.yml",
    "content": "theme: jekyll-theme-slate"
  },
  {
    "path": "fix_nnef_binary_size.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nimport os\nimport struct\n\n\ndef fix_nnef_binary(in_fn, out_fn):\n    header_size = 128\n\n    with open(in_fn, 'rb') as file:\n        file_size = os.fstat(file.fileno()).st_size\n        header = file.read(header_size)\n        excess = file.read(4)\n        data = file.read()\n\n    [magic1, magic2, major, minor] = bytearray(header[:4])\n    if magic1 != 0x4E or magic2 != 0xEF or major != 1 or minor != 0:\n        return False\n\n    data_length, = struct.unpack('i', header[4:8])\n\n    if file_size != header_size + data_length + 4:\n        return False\n\n    with open(out_fn, 'wb') as file:\n        file.write(header)\n        file.write(data)\n\n    return True\n\n\ndef fix_nnef_binaries(in_path, out_path):\n    for root, dirs, files in os.walk(in_path):\n        for filename in files:\n            if not filename.startswith('.'):\n                in_fn = os.path.join(root, filename)\n                out_fn = os.path.join(out_path, os.path.relpath(in_fn, in_path))\n                if os.path.splitext(filename)[1] == '.dat':\n                    if fix_nnef_binary(in_fn, out_fn):\n                        print('Fixed file: ' + in_fn)\n                elif out_fn != in_fn:\n                    with open(in_fn, 'rb') as in_file, open(out_fn, 'wb') as out_file:\n                        out_file.write(in_file.read())\n\n\nif __name__ == 
\"__main__\":\n    if len(sys.argv) < 2:\n        print('input path must be provided')\n        exit(-1)\n    elif len(sys.argv) > 3:\n        print('too many arguments provided')\n        exit(-1)\n\n    fix_nnef_binaries(in_path=sys.argv[1], out_path=sys.argv[2] if len(sys.argv) == 3 else sys.argv[1])\n"
  },
  {
    "path": "models/README.md",
    "content": "NNEF model zoo\n==============\n\nThe following collection of models were compiled by running the converter tools in this repository on publicly available models. Each entry provides a link to the original and the converted model.\n\n* TensorFlow models have been acquired from [https://www.tensorflow.org/lite/guide/hosted_models]\n* ONNX models have been acquired from [https://github.com/onnx/models]\n* Caffe models have been acquired from [https://github.com/BVLC/caffe/wiki/Model-Zoo]\n* Caffe2 models have been acquired from [https://github.com/caffe2/models]\n\n\nAlexNet\n-------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nBVLC AlexNet | 244 Mb | [Caffe](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/bvlc_alexnet.caffemodel.nnef.tgz)\nBVLC AlexNet | 244 Mb | [ONNX](https://s3.amazonaws.com/download.onnx/models/opset_9/bvlc_alexnet.tar.gz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/bvlc_alexnet.onnx.nnef.tgz)\n\n\nVGG\n---\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nVGG-16 | 553.6 MB Mb | [Caffe](https://gist.github.com/ksimonyan/211839e770f7b538e2d8) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.caffemodel.nnef.tgz)\nVGG-19 | 574.8 MB Mb | [Caffe](https://gist.github.com/ksimonyan/3785162f95cd2d5fee77) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/vgg19.caffemodel.nnef.tgz)\nVGG-16 | 527.8 MB Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg16/vgg16.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz)\nVGG-19 | 548.1 MB Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg19/vgg19.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/vgg19.onnx.nnef.tgz)\n\n\nGoogleNet\n---------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nInception v1 | 28 Mb | 
[Caffe2](https://github.com/caffe2/models/tree/master/inception_v1) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v1.caffe2.nnef.tgz)\nInception v1 | 28 Mb | [ONNX](https://s3.amazonaws.com/download.onnx/models/opset_9/inception_v1.tar.gz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v1.onnx.nnef.tgz)\nInception v2 | 45 Mb | [Caffe2](https://github.com/caffe2/models/tree/master/inception_v2) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v2.caffe2.nnef.tgz)\nInception v2 | 45 Mb | [ONNX](https://s3.amazonaws.com/download.onnx/models/opset_9/inception_v2.tar.gz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v2.onnx.nnef.tgz)\nInception v3 | 95.3 Mb | [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v3.tfpb.nnef.tgz)\nInception v4 | 170.7 Mb | [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v4_2018_04_27.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v4.tfpb.nnef.tgz)\nBVLC GoogleNet | 28 Mb | [Caffe](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/bvlc_googlenet.caffemodel.nnef.tgz)\nBVLC GoogleNet | 28 Mb | [ONNX](https://s3.amazonaws.com/download.onnx/models/opset_9/bvlc_googlenet.tar.gz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/bvlc_googlenet.onnx.nnef.tgz)\n\n\n_Quantized models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nInception v1 | 6.4 Mb | [TensorFlow-Lite](http://download.tensorflow.org/models/inception_v1_224_quant_20181026.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v1_quant.tflite.nnef.tgz)\nInception v2 | 11 Mb | 
[TensorFlow-Lite](http://download.tensorflow.org/models/inception_v2_224_quant_20181026.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v2_quant.tflite.nnef.tgz)\nInception v3 | 23 Mb | [TensorFlow-Lite](http://download.tensorflow.org/models/tflite_11_05_08/inception_v3_quant.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v3_quant.tflite.nnef.tgz)\nInception v4 | 41 Mb | [TensorFlow-Lite](http://download.tensorflow.org/models/inception_v4_299_quant_20181026.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_v4_quant.tflite.nnef.tgz)\n\n\nResNet\n------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nResnet v1-18 | 44.7 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet18v1/resnet18v1.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v1_18.onnx.nnef.tgz)\nResnet v1-34 | 83.3 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet34v1/resnet34v1.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v1_34.onnx.nnef.tgz)\nResnet v1-50 | 97.8 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v1/resnet50v1.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v1_50.onnx.nnef.tgz)\nResnet v1-101 | 170.6 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet101v1/resnet101v1.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v1_101.onnx.nnef.tgz)\nResnet v1-152 | 242.3 Mb | [Caffe](https://github.com/KaimingHe/deep-residual-networks) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v1_152.caffemodel.nnef.tgz)\nResnet v2-18 | 44.6 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet18v2/resnet18v2.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v2_18.onnx.nnef.tgz)\nResnet v2-34 | 83.2 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet34v2/resnet34v2.onnx) | 
[NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v2_34.onnx.nnef.tgz)\nResnet v2-50 | 97.7 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v2_50.onnx.nnef.tgz)\nResnet v2-101 | 170.4 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet101v2/resnet101v2.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/resnet_v2_101.onnx.nnef.tgz)\nInception-Resnet v2 | 121 Mb | [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_resnet_v2_2018_04_27.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/inception_resnet_v2.tfpb.nnef.tgz)\n\n\nMobileNet\n---------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nMobileNet v1-1.0 | 16.9 Mb | [TensorFlow](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v1_1.0.tfpb.nnef.tgz)\nMobileNet v1-1.0 | 17.2 Mb | [Caffe](https://github.com/shicai/MobileNet-Caffe) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v1_1.0.caffemodel.nnef.tgz)\nMobileNet v2-1.0 | 14.0 Mb | [TensorFlow](http://download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v2_1.0.tfpb.nnef.tgz)\nMobileNet v2-1.0 | 14.4 Mb | [Caffe](https://github.com/shicai/MobileNet-Caffe) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v2_1.0.caffemodel.nnef.tgz)\nMobileNet v2-1.0 | 13.6 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/mobilenet/mobilenetv2-1.0/mobilenetv2-1.0.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v2_1.0.onnx.nnef.tgz)\n\n\n\n_Quantized models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nMobileNet v1-1.0 | 4.3 Mb | 
[TensorFlow-Lite](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v1_1.0_quant.tflite.nnef.tgz)\nMobileNet v2-1.0 | 3.4 Mb | [TensorFlow-Lite](http://download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/mobilenet_v2_1.0_quant.tflite.nnef.tgz)\n\n\nSqueezeNet\n----------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nSqueezeNet | 5.0 Mb | [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/squeezenet_2018_04_27.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/squeezenet.tfpb.nnef.tgz)\nSqueezeNet 1.0 | 4.7 Mb | [ONNX](https://s3.amazonaws.com/download.onnx/models/opset_9/squeezenet.tar.gz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/squeezenet_v1.0.onnx.nnef.tgz)\nSqueezeNet 1.1 | 4.7 Mb | [ONNX](https://s3.amazonaws.com/onnx-model-zoo/squeezenet/squeezenet1.1/squeezenet1.1.onnx) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/squeezenet_v1.1.onnx.nnef.tgz)\nSqueezeNet 1.0 | 4.7 Mb | [Caffe](https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.0) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/squeezenet_v1.0.caffemodel.nnef.tgz)\nSqueezeNet 1.1 | 4.7 Mb | [Caffe](https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/squeezenet_v1.1.caffemodel.nnef.tgz)\n\n\nShuffleNet\n----------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | ---\nShuffleNet | 5.3 Mb | [ONNX](https://s3.amazonaws.com/download.onnx/models/opset_9/shufflenet.tar.gz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/shufflenet.onnx.nnef.tgz)\n\n\nNASNet\n------\n\n_Floating point models_\n\nName | Size | Original | Converted\n--- | --- | --- | 
---\nNasNet mobile | 21.4 Mb | [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/nasnet_mobile_2018_04_27.tgz) | [NNEF](https://sfo2.digitaloceanspaces.com/nnef-public/nasnet_mobile.tfpb.nnef.tgz)\n"
  },
  {
    "path": "nnef-pyproject/README.md",
    "content": "NNEF Parser - repository\n===================\n\n\nIntroduction\n------------\n\nThe code consists of a C++ library that contains two example parsers (one for\nflat and one for compositional NNEF syntax). This library can be used to build tools\nthat require parsing NNEF files. It requires a C++11 compatible compiler. \n\nThe Python code wraps the C++ parser and adds some further utilities to load and save NNEF documents easily. It also contains a script to validate NNEF documents (`validate.py`) and optionally print a lowered version of the graph. If the tool encounters an invalid document, it prints the first error and stops parsing. Type `python validate.py -h` to show the usage help.\n\nC++ Library\n-----------\n\nDocumentation of the library: [cpp_api.md](cpp_api.md)\n\n\nPython Package\n--------------\n\nDocumentation of the Python package: [package_info.md](package_info.md)\n\n"
  },
  {
    "path": "nnef-pyproject/cpp_api.md",
    "content": "\nBuilding the C++ library\n------------------------\n\nThe C++ library can be compiled with cmake.\nThe `examples/samples/sample.cpp` contains a minimal example that showcases the use of the parser.\n\nExample of build commands under Linux:\n````\n$ cd nnef/cpp\n$ mkdir build && cd build\n$ cmake ..\n$ make\n````\n\n\nUsing the C++ library\n---------------------\n\nUsing the C++ parser is as simple as follows:\n\n```\n#include \"nnef.h\"\n\nnnef::Graph graph;\nstd::string error;\nbool success = nnef::load_graph(\"path/to/NNEF/folder\", graph, error);\n```\n\nUpon succeess, the graph structure is filled, while in case of an error, the error string is filled. The fields inside the graph structure, and further parameters to the `load_graph` function are documented in `nnef.h`. After the graph is successfully loaded, shape inference can be performed in a subsequent call if required:\n\n```\nsuccess = nnef::infer_shapes(graph, error);\n```\n\nUpon success, the shape fields of tensors are filled in.\n"
  },
  {
    "path": "nnef-pyproject/examples/alexnet.txt",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nversion 1.0;\n\ngraph alexnet( input ) -> ( output )\n{\n    input = external(shape = [1, 3, 224, 224]);\n    kernel1 = variable(shape = [64, 3, 11, 11], label = 'alexnet_v2/conv1/kernel');\n    bias1 = variable(shape = [1, 64], label = 'alexnet_v2/conv1/bias');\n    conv1 = conv(input, kernel1, bias1, padding = [(0, 0), (0, 0)], border = 'constant', stride = [4, 4], dilation = [1, 1]);\n    relu1 = relu(conv1);\n    pool1 = max_pool(relu1, size = [1, 1, 3, 3], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n    kernel2 = variable(shape = [192, 64, 5, 5], label = 'alexnet_v2/conv2/kernel');\n    bias2 = variable(shape = [1, 192], label = 'alexnet_v2/conv2/bias');\n    conv2 = conv(pool1, kernel2, bias2, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n    relu2 = relu(conv2);\n    pool2 = max_pool(relu2, size = [1, 1, 3, 3], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n    kernel3 = variable(shape = [384, 192, 3, 3], label = 'alexnet_v2/conv3/kernel');\n    bias3 = variable(shape = [1, 384], label = 'alexnet_v2/conv3/bias');\n    conv3 = conv(pool2, kernel3, bias3, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n    relu3 = relu(conv3);\n    kernel4 = variable(shape = [384, 384, 3, 3], label = 
'alexnet_v2/conv4/kernel');\n    bias4 = variable(shape = [1, 384], label = 'alexnet_v2/conv4/bias');\n    conv4 = conv(relu3, kernel4, bias4, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n    relu4 = relu(conv4);\n    kernel5 = variable(shape = [256, 384, 3, 3], label = 'alexnet_v2/conv5/kernel');\n    bias5 = variable(shape = [1, 256], label = 'alexnet_v2/conv5/bias');\n    conv5 = conv(relu4, kernel5, bias5, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n    relu5 = relu(conv5);\n    pool3 = max_pool(relu5, size = [1, 1, 3, 3], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n    kernel6 = variable(shape = [4096, 256, 5, 5], label = 'alexnet_v2/fc6/kernel');\n    bias6 = variable(shape = [1, 4096], label = 'alexnet_v2/fc6/bias');\n    conv6 = conv(pool3, kernel6, bias6, padding = [(0, 0), (0, 0)], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n    relu6 = relu(conv6);\n    kernel7 = variable(shape = [4096, 4096, 1, 1], label = 'alexnet_v2/fc7/kernel');\n    bias7 = variable(shape = [1, 4096], label = 'alexnet_v2/fc7/bias');\n    conv7 = conv(relu6, kernel7, bias7, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n    relu7 = relu(conv7);\n    kernel8 = variable(shape = [1000, 4096, 1, 1], label = 'alexnet_v2/fc8/kernel');\n    bias8 = variable(shape = [1, 1000], label = 'alexnet_v2/fc8/bias');\n    output = conv(relu7, kernel8, bias8, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n}\n"
  },
  {
    "path": "nnef-pyproject/examples/googlenet.txt",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nversion 1.0;\n\ngraph googlenet( input ) -> ( output )\n{\n\tinput = external(shape = [1, 3, 224, 224]);\n\tkernel1 = variable(shape = [64, 3, 7, 7], label = 'InceptionV1/Conv2d_1a_7x7/kernel');\n\tbias1 = variable(shape = [1, 64], label = 'InceptionV1/Conv2d_1a_7x7/bias');\n\tconv1 = conv(input, kernel1, bias1, padding = [], border = 'constant', stride = [2, 2], dilation = [1, 1]);\n\trelu1 = relu(conv1);\n\tpool1 = max_pool(relu1, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel2 = variable(shape = [64, 64, 1, 1], label = 'InceptionV1/Conv2d_2b_1x1/kernel');\n\tbias2 = variable(shape = [1, 64], label = 'InceptionV1/Conv2d_2b_1x1/bias');\n\tconv2 = conv(pool1, kernel2, bias2, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu2 = relu(conv2);\n\tkernel3 = variable(shape = [192, 64, 3, 3], label = 'InceptionV1/Conv2d_2c_3x3/kernel');\n\tbias3 = variable(shape = [1, 192], label = 'InceptionV1/Conv2d_2c_3x3/bias');\n\tconv3 = conv(relu2, kernel3, bias3, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu3 = relu(conv3);\n\tpool2 = max_pool(relu3, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel4 = variable(shape = [64, 192, 1, 1], label = 'InceptionV1/Mixed_3b/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias4 = variable(shape = 
[1, 64], label = 'InceptionV1/Mixed_3b/Branch_0/Conv2d_0a_1x1/bias');\n\tconv4 = conv(pool2, kernel4, bias4, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu4 = relu(conv4);\n\tkernel5 = variable(shape = [96, 192, 1, 1], label = 'InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias5 = variable(shape = [1, 96], label = 'InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/bias');\n\tconv5 = conv(pool2, kernel5, bias5, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu5 = relu(conv5);\n\tkernel6 = variable(shape = [128, 96, 3, 3], label = 'InceptionV1/Mixed_3b/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias6 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_3b/Branch_1/Conv2d_0b_3x3/bias');\n\tconv6 = conv(relu5, kernel6, bias6, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu6 = relu(conv6);\n\tkernel7 = variable(shape = [16, 192, 1, 1], label = 'InceptionV1/Mixed_3b/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias7 = variable(shape = [1, 16], label = 'InceptionV1/Mixed_3b/Branch_2/Conv2d_0a_1x1/bias');\n\tconv7 = conv(pool2, kernel7, bias7, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu7 = relu(conv7);\n\tkernel8 = variable(shape = [32, 16, 3, 3], label = 'InceptionV1/Mixed_3b/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias8 = variable(shape = [1, 32], label = 'InceptionV1/Mixed_3b/Branch_2/Conv2d_0b_3x3/bias');\n\tconv8 = conv(relu7, kernel8, bias8, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu8 = relu(conv8);\n\tpool3 = max_pool(pool2, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel9 = variable(shape = [32, 192, 1, 1], label = 'InceptionV1/Mixed_3b/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias9 = variable(shape = [1, 32], label = 'InceptionV1/Mixed_3b/Branch_3/Conv2d_0b_1x1/bias');\n\tconv9 = conv(pool3, kernel9, bias9, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 
1]);\n\trelu9 = relu(conv9);\n\tconcat1 = concat([relu4,relu6,relu8,relu9], axis = 1);\n\tkernel10 = variable(shape = [128, 256, 1, 1], label = 'InceptionV1/Mixed_3c/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias10 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_3c/Branch_0/Conv2d_0a_1x1/bias');\n\tconv10 = conv(concat1, kernel10, bias10, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu10 = relu(conv10);\n\tkernel11 = variable(shape = [128, 256, 1, 1], label = 'InceptionV1/Mixed_3c/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias11 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_3c/Branch_1/Conv2d_0a_1x1/bias');\n\tconv11 = conv(concat1, kernel11, bias11, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu11 = relu(conv11);\n\tkernel12 = variable(shape = [192, 128, 3, 3], label = 'InceptionV1/Mixed_3c/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias12 = variable(shape = [1, 192], label = 'InceptionV1/Mixed_3c/Branch_1/Conv2d_0b_3x3/bias');\n\tconv12 = conv(relu11, kernel12, bias12, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu12 = relu(conv12);\n\tkernel13 = variable(shape = [32, 256, 1, 1], label = 'InceptionV1/Mixed_3c/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias13 = variable(shape = [1, 32], label = 'InceptionV1/Mixed_3c/Branch_2/Conv2d_0a_1x1/bias');\n\tconv13 = conv(concat1, kernel13, bias13, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu13 = relu(conv13);\n\tkernel14 = variable(shape = [96, 32, 3, 3], label = 'InceptionV1/Mixed_3c/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias14 = variable(shape = [1, 96], label = 'InceptionV1/Mixed_3c/Branch_2/Conv2d_0b_3x3/bias');\n\tconv14 = conv(relu13, kernel14, bias14, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu14 = relu(conv14);\n\tpool4 = max_pool(concat1, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel15 = variable(shape = [64, 256, 
1, 1], label = 'InceptionV1/Mixed_3c/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias15 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_3c/Branch_3/Conv2d_0b_1x1/bias');\n\tconv15 = conv(pool4, kernel15, bias15, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu15 = relu(conv15);\n\tconcat2 = concat([relu10,relu12,relu14,relu15], axis = 1);\n\tpool5 = max_pool(concat2, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel16 = variable(shape = [192, 480, 1, 1], label = 'InceptionV1/Mixed_4b/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias16 = variable(shape = [1, 192], label = 'InceptionV1/Mixed_4b/Branch_0/Conv2d_0a_1x1/bias');\n\tconv16 = conv(pool5, kernel16, bias16, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu16 = relu(conv16);\n\tkernel17 = variable(shape = [96, 480, 1, 1], label = 'InceptionV1/Mixed_4b/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias17 = variable(shape = [1, 96], label = 'InceptionV1/Mixed_4b/Branch_1/Conv2d_0a_1x1/bias');\n\tconv17 = conv(pool5, kernel17, bias17, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu17 = relu(conv17);\n\tkernel18 = variable(shape = [208, 96, 3, 3], label = 'InceptionV1/Mixed_4b/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias18 = variable(shape = [1, 208], label = 'InceptionV1/Mixed_4b/Branch_1/Conv2d_0b_3x3/bias');\n\tconv18 = conv(relu17, kernel18, bias18, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu18 = relu(conv18);\n\tkernel19 = variable(shape = [16, 480, 1, 1], label = 'InceptionV1/Mixed_4b/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias19 = variable(shape = [1, 16], label = 'InceptionV1/Mixed_4b/Branch_2/Conv2d_0a_1x1/bias');\n\tconv19 = conv(pool5, kernel19, bias19, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu19 = relu(conv19);\n\tkernel20 = variable(shape = [48, 16, 3, 3], label = 
'InceptionV1/Mixed_4b/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias20 = variable(shape = [1, 48], label = 'InceptionV1/Mixed_4b/Branch_2/Conv2d_0b_3x3/bias');\n\tconv20 = conv(relu19, kernel20, bias20, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu20 = relu(conv20);\n\tpool6 = max_pool(pool5, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel21 = variable(shape = [64, 480, 1, 1], label = 'InceptionV1/Mixed_4b/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias21 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4b/Branch_3/Conv2d_0b_1x1/bias');\n\tconv21 = conv(pool6, kernel21, bias21, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu21 = relu(conv21);\n\tconcat3 = concat([relu16,relu18,relu20,relu21], axis = 1);\n\tkernel22 = variable(shape = [160, 512, 1, 1], label = 'InceptionV1/Mixed_4c/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias22 = variable(shape = [1, 160], label = 'InceptionV1/Mixed_4c/Branch_0/Conv2d_0a_1x1/bias');\n\tconv22 = conv(concat3, kernel22, bias22, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu22 = relu(conv22);\n\tkernel23 = variable(shape = [112, 512, 1, 1], label = 'InceptionV1/Mixed_4c/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias23 = variable(shape = [1, 112], label = 'InceptionV1/Mixed_4c/Branch_1/Conv2d_0a_1x1/bias');\n\tconv23 = conv(concat3, kernel23, bias23, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu23 = relu(conv23);\n\tkernel24 = variable(shape = [224, 112, 3, 3], label = 'InceptionV1/Mixed_4c/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias24 = variable(shape = [1, 224], label = 'InceptionV1/Mixed_4c/Branch_1/Conv2d_0b_3x3/bias');\n\tconv24 = conv(relu23, kernel24, bias24, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu24 = relu(conv24);\n\tkernel25 = variable(shape = [24, 512, 1, 1], label = 'InceptionV1/Mixed_4c/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias25 = 
variable(shape = [1, 24], label = 'InceptionV1/Mixed_4c/Branch_2/Conv2d_0a_1x1/bias');\n\tconv25 = conv(concat3, kernel25, bias25, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu25 = relu(conv25);\n\tkernel26 = variable(shape = [64, 24, 3, 3], label = 'InceptionV1/Mixed_4c/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias26 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4c/Branch_2/Conv2d_0b_3x3/bias');\n\tconv26 = conv(relu25, kernel26, bias26, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu26 = relu(conv26);\n\tpool7 = max_pool(concat3, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel27 = variable(shape = [64, 512, 1, 1], label = 'InceptionV1/Mixed_4c/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias27 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4c/Branch_3/Conv2d_0b_1x1/bias');\n\tconv27 = conv(pool7, kernel27, bias27, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu27 = relu(conv27);\n\tconcat4 = concat([relu22,relu24,relu26,relu27], axis = 1);\n\tkernel28 = variable(shape = [128, 512, 1, 1], label = 'InceptionV1/Mixed_4d/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias28 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_4d/Branch_0/Conv2d_0a_1x1/bias');\n\tconv28 = conv(concat4, kernel28, bias28, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu28 = relu(conv28);\n\tkernel29 = variable(shape = [128, 512, 1, 1], label = 'InceptionV1/Mixed_4d/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias29 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_4d/Branch_1/Conv2d_0a_1x1/bias');\n\tconv29 = conv(concat4, kernel29, bias29, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu29 = relu(conv29);\n\tkernel30 = variable(shape = [256, 128, 3, 3], label = 'InceptionV1/Mixed_4d/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias30 = variable(shape = [1, 256], label = 
'InceptionV1/Mixed_4d/Branch_1/Conv2d_0b_3x3/bias');\n\tconv30 = conv(relu29, kernel30, bias30, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu30 = relu(conv30);\n\tkernel31 = variable(shape = [24, 512, 1, 1], label = 'InceptionV1/Mixed_4d/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias31 = variable(shape = [1, 24], label = 'InceptionV1/Mixed_4d/Branch_2/Conv2d_0a_1x1/bias');\n\tconv31 = conv(concat4, kernel31, bias31, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu31 = relu(conv31);\n\tkernel32 = variable(shape = [64, 24, 3, 3], label = 'InceptionV1/Mixed_4d/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias32 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4d/Branch_2/Conv2d_0b_3x3/bias');\n\tconv32 = conv(relu31, kernel32, bias32, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu32 = relu(conv32);\n\tpool8 = max_pool(concat4, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel33 = variable(shape = [64, 512, 1, 1], label = 'InceptionV1/Mixed_4d/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias33 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4d/Branch_3/Conv2d_0b_1x1/bias');\n\tconv33 = conv(pool8, kernel33, bias33, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu33 = relu(conv33);\n\tconcat5 = concat([relu28,relu30,relu32,relu33], axis = 1);\n\tkernel34 = variable(shape = [112, 512, 1, 1], label = 'InceptionV1/Mixed_4e/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias34 = variable(shape = [1, 112], label = 'InceptionV1/Mixed_4e/Branch_0/Conv2d_0a_1x1/bias');\n\tconv34 = conv(concat5, kernel34, bias34, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu34 = relu(conv34);\n\tkernel35 = variable(shape = [144, 512, 1, 1], label = 'InceptionV1/Mixed_4e/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias35 = variable(shape = [1, 144], label = 'InceptionV1/Mixed_4e/Branch_1/Conv2d_0a_1x1/bias');\n\tconv35 = 
conv(concat5, kernel35, bias35, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu35 = relu(conv35);\n\tkernel36 = variable(shape = [288, 144, 3, 3], label = 'InceptionV1/Mixed_4e/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias36 = variable(shape = [1, 288], label = 'InceptionV1/Mixed_4e/Branch_1/Conv2d_0b_3x3/bias');\n\tconv36 = conv(relu35, kernel36, bias36, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu36 = relu(conv36);\n\tkernel37 = variable(shape = [32, 512, 1, 1], label = 'InceptionV1/Mixed_4e/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias37 = variable(shape = [1, 32], label = 'InceptionV1/Mixed_4e/Branch_2/Conv2d_0a_1x1/bias');\n\tconv37 = conv(concat5, kernel37, bias37, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu37 = relu(conv37);\n\tkernel38 = variable(shape = [64, 32, 3, 3], label = 'InceptionV1/Mixed_4e/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias38 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4e/Branch_2/Conv2d_0b_3x3/bias');\n\tconv38 = conv(relu37, kernel38, bias38, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu38 = relu(conv38);\n\tpool9 = max_pool(concat5, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel39 = variable(shape = [64, 512, 1, 1], label = 'InceptionV1/Mixed_4e/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias39 = variable(shape = [1, 64], label = 'InceptionV1/Mixed_4e/Branch_3/Conv2d_0b_1x1/bias');\n\tconv39 = conv(pool9, kernel39, bias39, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu39 = relu(conv39);\n\tconcat6 = concat([relu34,relu36,relu38,relu39], axis = 1);\n\tkernel40 = variable(shape = [256, 528, 1, 1], label = 'InceptionV1/Mixed_4f/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias40 = variable(shape = [1, 256], label = 'InceptionV1/Mixed_4f/Branch_0/Conv2d_0a_1x1/bias');\n\tconv40 = conv(concat6, kernel40, bias40, padding = [], border = 'constant', 
stride = [1, 1], dilation = [1, 1]);\n\trelu40 = relu(conv40);\n\tkernel41 = variable(shape = [160, 528, 1, 1], label = 'InceptionV1/Mixed_4f/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias41 = variable(shape = [1, 160], label = 'InceptionV1/Mixed_4f/Branch_1/Conv2d_0a_1x1/bias');\n\tconv41 = conv(concat6, kernel41, bias41, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu41 = relu(conv41);\n\tkernel42 = variable(shape = [320, 160, 3, 3], label = 'InceptionV1/Mixed_4f/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias42 = variable(shape = [1, 320], label = 'InceptionV1/Mixed_4f/Branch_1/Conv2d_0b_3x3/bias');\n\tconv42 = conv(relu41, kernel42, bias42, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu42 = relu(conv42);\n\tkernel43 = variable(shape = [32, 528, 1, 1], label = 'InceptionV1/Mixed_4f/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias43 = variable(shape = [1, 32], label = 'InceptionV1/Mixed_4f/Branch_2/Conv2d_0a_1x1/bias');\n\tconv43 = conv(concat6, kernel43, bias43, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu43 = relu(conv43);\n\tkernel44 = variable(shape = [128, 32, 3, 3], label = 'InceptionV1/Mixed_4f/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias44 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_4f/Branch_2/Conv2d_0b_3x3/bias');\n\tconv44 = conv(relu43, kernel44, bias44, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu44 = relu(conv44);\n\tpool10 = max_pool(concat6, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel45 = variable(shape = [128, 528, 1, 1], label = 'InceptionV1/Mixed_4f/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias45 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_4f/Branch_3/Conv2d_0b_1x1/bias');\n\tconv45 = conv(pool10, kernel45, bias45, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu45 = relu(conv45);\n\tconcat7 = concat([relu40,relu42,relu44,relu45], axis = 
1);\n\tpool11 = max_pool(concat7, size = [1, 1, 2, 2], padding = [], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel46 = variable(shape = [256, 832, 1, 1], label = 'InceptionV1/Mixed_5b/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias46 = variable(shape = [1, 256], label = 'InceptionV1/Mixed_5b/Branch_0/Conv2d_0a_1x1/bias');\n\tconv46 = conv(pool11, kernel46, bias46, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu46 = relu(conv46);\n\tkernel47 = variable(shape = [160, 832, 1, 1], label = 'InceptionV1/Mixed_5b/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias47 = variable(shape = [1, 160], label = 'InceptionV1/Mixed_5b/Branch_1/Conv2d_0a_1x1/bias');\n\tconv47 = conv(pool11, kernel47, bias47, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu47 = relu(conv47);\n\tkernel48 = variable(shape = [320, 160, 3, 3], label = 'InceptionV1/Mixed_5b/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias48 = variable(shape = [1, 320], label = 'InceptionV1/Mixed_5b/Branch_1/Conv2d_0b_3x3/bias');\n\tconv48 = conv(relu47, kernel48, bias48, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu48 = relu(conv48);\n\tkernel49 = variable(shape = [32, 832, 1, 1], label = 'InceptionV1/Mixed_5b/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias49 = variable(shape = [1, 32], label = 'InceptionV1/Mixed_5b/Branch_2/Conv2d_0a_1x1/bias');\n\tconv49 = conv(pool11, kernel49, bias49, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu49 = relu(conv49);\n\tkernel50 = variable(shape = [128, 32, 3, 3], label = 'InceptionV1/Mixed_5b/Branch_2/Conv2d_0a_3x3/kernel');\n\tbias50 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_5b/Branch_2/Conv2d_0a_3x3/bias');\n\tconv50 = conv(relu49, kernel50, bias50, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu50 = relu(conv50);\n\tpool12 = max_pool(pool11, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel51 = 
variable(shape = [128, 832, 1, 1], label = 'InceptionV1/Mixed_5b/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias51 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_5b/Branch_3/Conv2d_0b_1x1/bias');\n\tconv51 = conv(pool12, kernel51, bias51, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu51 = relu(conv51);\n\tconcat8 = concat([relu46,relu48,relu50,relu51], axis = 1);\n\tkernel52 = variable(shape = [384, 832, 1, 1], label = 'InceptionV1/Mixed_5c/Branch_0/Conv2d_0a_1x1/kernel');\n\tbias52 = variable(shape = [1, 384], label = 'InceptionV1/Mixed_5c/Branch_0/Conv2d_0a_1x1/bias');\n\tconv52 = conv(concat8, kernel52, bias52, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu52 = relu(conv52);\n\tkernel53 = variable(shape = [192, 832, 1, 1], label = 'InceptionV1/Mixed_5c/Branch_1/Conv2d_0a_1x1/kernel');\n\tbias53 = variable(shape = [1, 192], label = 'InceptionV1/Mixed_5c/Branch_1/Conv2d_0a_1x1/bias');\n\tconv53 = conv(concat8, kernel53, bias53, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu53 = relu(conv53);\n\tkernel54 = variable(shape = [384, 192, 3, 3], label = 'InceptionV1/Mixed_5c/Branch_1/Conv2d_0b_3x3/kernel');\n\tbias54 = variable(shape = [1, 384], label = 'InceptionV1/Mixed_5c/Branch_1/Conv2d_0b_3x3/bias');\n\tconv54 = conv(relu53, kernel54, bias54, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu54 = relu(conv54);\n\tkernel55 = variable(shape = [48, 832, 1, 1], label = 'InceptionV1/Mixed_5c/Branch_2/Conv2d_0a_1x1/kernel');\n\tbias55 = variable(shape = [1, 48], label = 'InceptionV1/Mixed_5c/Branch_2/Conv2d_0a_1x1/bias');\n\tconv55 = conv(concat8, kernel55, bias55, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu55 = relu(conv55);\n\tkernel56 = variable(shape = [128, 48, 3, 3], label = 'InceptionV1/Mixed_5c/Branch_2/Conv2d_0b_3x3/kernel');\n\tbias56 = variable(shape = [1, 128], label = 
'InceptionV1/Mixed_5c/Branch_2/Conv2d_0b_3x3/bias');\n\tconv56 = conv(relu55, kernel56, bias56, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu56 = relu(conv56);\n\tpool13 = max_pool(concat8, size = [1, 1, 3, 3], padding = [], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel57 = variable(shape = [128, 832, 1, 1], label = 'InceptionV1/Mixed_5c/Branch_3/Conv2d_0b_1x1/kernel');\n\tbias57 = variable(shape = [1, 128], label = 'InceptionV1/Mixed_5c/Branch_3/Conv2d_0b_1x1/bias');\n\tconv57 = conv(pool13, kernel57, bias57, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu57 = relu(conv57);\n\tconcat9 = concat([relu52,relu54,relu56,relu57], axis = 1);\n\tpool14 = avg_pool(concat9, size = [1, 1, 7, 7], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 1, 1]);\n\tkernel58 = variable(shape = [1000, 1024, 1, 1], label = 'InceptionV1/Logits/Conv2d_0c_1x1/kernel');\n\tbias58 = variable(shape = [1, 1000], label = 'InceptionV1/Logits/Conv2d_0c_1x1/bias');\n\toutput = conv(pool14, kernel58, bias58, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n}\n"
  },
  {
    "path": "nnef-pyproject/examples/resnet.txt",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nversion 1.0;\n\ngraph resnet_v2_50( input ) -> ( output )\n{\n\tinput = external(shape = [1, 3, 224, 224]);\n\tkernel1 = variable(shape = [64, 3, 7, 7], label = 'resnet_v2_50/conv1/kernel');\n\tbias1 = variable(shape = [1, 64], label = 'resnet_v2_50/conv1/bias');\n\tconv1 = conv(input, kernel1, bias1, padding = [(3, 3), (3, 3)], border = 'constant', stride = [2, 2], dilation = [1, 1]);\n\tpool1 = max_pool(conv1, size = [1, 1, 3, 3], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tbeta1 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/preact/beta');\n\tmoving_mean1 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/preact/moving_mean');\n\tmoving_variance1 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/preact/moving_variance');\n\tnorm1 = batch_normalization(pool1, mean = moving_mean1, variance = moving_variance1, offset = beta1, scale = 1.0, epsilon = 0.001);\n\trelu1 = relu(norm1);\n\tkernel2 = variable(shape = [256, 64, 1, 1], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/shortcut/kernel');\n\tbias2 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/shortcut/bias');\n\tconv2 = conv(relu1, kernel2, bias2, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tkernel3 = 
variable(shape = [64, 64, 1, 1], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/conv1/kernel');\n\tbias3 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/conv1/bias');\n\tconv3 = conv(relu1, kernel3, bias3, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu2 = relu(conv3);\n\tkernel4 = variable(shape = [64, 64, 3, 3], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/conv2/kernel');\n\tbias4 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/conv2/bias');\n\tconv4 = conv(relu2, kernel4, bias4, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu3 = relu(conv4);\n\tkernel5 = variable(shape = [256, 64, 1, 1], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/conv3/kernel');\n\tbias5 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_1/bottleneck_v2/conv3/bias');\n\tconv5 = conv(relu3, kernel5, bias5, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd1 = add(conv2, conv5);\n\tbeta2 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/preact/beta');\n\tmoving_mean2 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/preact/moving_mean');\n\tmoving_variance2 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/preact/moving_variance');\n\tnorm2 = batch_normalization(add1, mean = moving_mean2, variance = moving_variance2, offset = beta2, scale = 1.0, epsilon = 0.001);\n\trelu4 = relu(norm2);\n\tkernel6 = variable(shape = [64, 256, 1, 1], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/conv1/kernel');\n\tbias6 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/conv1/bias');\n\tconv6 = conv(relu4, kernel6, bias6, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu5 = relu(conv6);\n\tkernel7 = variable(shape = [64, 64, 3, 3], label = 
'resnet_v2_50/block1/unit_2/bottleneck_v2/conv2/kernel');\n\tbias7 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/conv2/bias');\n\tconv7 = conv(relu5, kernel7, bias7, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu6 = relu(conv7);\n\tkernel8 = variable(shape = [256, 64, 1, 1], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/conv3/kernel');\n\tbias8 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_2/bottleneck_v2/conv3/bias');\n\tconv8 = conv(relu6, kernel8, bias8, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd2 = add(add1, conv8);\n\tbeta3 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/preact/beta');\n\tmoving_mean3 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/preact/moving_mean');\n\tmoving_variance3 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/preact/moving_variance');\n\tnorm3 = batch_normalization(add2, mean = moving_mean3, variance = moving_variance3, offset = beta3, scale = 1.0, epsilon = 0.001);\n\trelu7 = relu(norm3);\n\tpool2 = max_pool(add2, size = [1, 1, 1, 1], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel9 = variable(shape = [64, 256, 1, 1], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/conv1/kernel');\n\tbias9 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/conv1/bias');\n\tconv9 = conv(relu7, kernel9, bias9, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu8 = relu(conv9);\n\tkernel10 = variable(shape = [64, 64, 3, 3], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/conv2/kernel');\n\tbias10 = variable(shape = [1, 64], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/conv2/bias');\n\tconv10 = conv(relu8, kernel10, bias10, padding = [(1, 1), (1, 1)], border = 'constant', stride = [2, 2], dilation = [1, 1]);\n\trelu9 = 
relu(conv10);\n\tkernel11 = variable(shape = [256, 64, 1, 1], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/conv3/kernel');\n\tbias11 = variable(shape = [1, 256], label = 'resnet_v2_50/block1/unit_3/bottleneck_v2/conv3/bias');\n\tconv11 = conv(relu9, kernel11, bias11, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd3 = add(pool2, conv11);\n\tbeta4 = variable(shape = [1, 256], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/preact/beta');\n\tmoving_mean4 = variable(shape = [1, 256], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/preact/moving_mean');\n\tmoving_variance4 = variable(shape = [1, 256], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/preact/moving_variance');\n\tnorm4 = batch_normalization(add3, mean = moving_mean4, variance = moving_variance4, offset = beta4, scale = 1.0, epsilon = 0.001);\n\trelu10 = relu(norm4);\n\tkernel12 = variable(shape = [512, 256, 1, 1], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/shortcut/kernel');\n\tbias12 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/shortcut/bias');\n\tconv12 = conv(relu10, kernel12, bias12, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tkernel13 = variable(shape = [128, 256, 1, 1], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/conv1/kernel');\n\tbias13 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/conv1/bias');\n\tconv13 = conv(relu10, kernel13, bias13, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu11 = relu(conv13);\n\tkernel14 = variable(shape = [128, 128, 3, 3], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/conv2/kernel');\n\tbias14 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/conv2/bias');\n\tconv14 = conv(relu11, kernel14, bias14, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu12 = relu(conv14);\n\tkernel15 = variable(shape = [512, 128, 1, 1], label = 
'resnet_v2_50/block2/unit_1/bottleneck_v2/conv3/kernel');\n\tbias15 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_1/bottleneck_v2/conv3/bias');\n\tconv15 = conv(relu12, kernel15, bias15, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd4 = add(conv12, conv15);\n\tbeta5 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/preact/beta');\n\tmoving_mean5 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/preact/moving_mean');\n\tmoving_variance5 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/preact/moving_variance');\n\tnorm5 = batch_normalization(add4, mean = moving_mean5, variance = moving_variance5, offset = beta5, scale = 1.0, epsilon = 0.001);\n\trelu13 = relu(norm5);\n\tkernel16 = variable(shape = [128, 512, 1, 1], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/conv1/kernel');\n\tbias16 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/conv1/bias');\n\tconv16 = conv(relu13, kernel16, bias16, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu14 = relu(conv16);\n\tkernel17 = variable(shape = [128, 128, 3, 3], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/conv2/kernel');\n\tbias17 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/conv2/bias');\n\tconv17 = conv(relu14, kernel17, bias17, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu15 = relu(conv17);\n\tkernel18 = variable(shape = [512, 128, 1, 1], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/conv3/kernel');\n\tbias18 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_2/bottleneck_v2/conv3/bias');\n\tconv18 = conv(relu15, kernel18, bias18, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd5 = add(add4, conv18);\n\tbeta6 = variable(shape = [1, 512], label = 
'resnet_v2_50/block2/unit_3/bottleneck_v2/preact/beta');\n\tmoving_mean6 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/preact/moving_mean');\n\tmoving_variance6 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/preact/moving_variance');\n\tnorm6 = batch_normalization(add5, mean = moving_mean6, variance = moving_variance6, offset = beta6, scale = 1.0, epsilon = 0.001);\n\trelu16 = relu(norm6);\n\tkernel19 = variable(shape = [128, 512, 1, 1], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/conv1/kernel');\n\tbias19 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/conv1/bias');\n\tconv19 = conv(relu16, kernel19, bias19, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu17 = relu(conv19);\n\tkernel20 = variable(shape = [128, 128, 3, 3], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/conv2/kernel');\n\tbias20 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/conv2/bias');\n\tconv20 = conv(relu17, kernel20, bias20, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu18 = relu(conv20);\n\tkernel21 = variable(shape = [512, 128, 1, 1], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/conv3/kernel');\n\tbias21 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_3/bottleneck_v2/conv3/bias');\n\tconv21 = conv(relu18, kernel21, bias21, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd6 = add(add5, conv21);\n\tbeta7 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/preact/beta');\n\tmoving_mean7 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/preact/moving_mean');\n\tmoving_variance7 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/preact/moving_variance');\n\tnorm7 = batch_normalization(add6, mean = moving_mean7, variance = moving_variance7, offset = beta7, scale = 1.0, 
epsilon = 0.001);\n\trelu19 = relu(norm7);\n\tpool3 = max_pool(add6, size = [1, 1, 1, 1], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel22 = variable(shape = [128, 512, 1, 1], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/conv1/kernel');\n\tbias22 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/conv1/bias');\n\tconv22 = conv(relu19, kernel22, bias22, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu20 = relu(conv22);\n\tkernel23 = variable(shape = [128, 128, 3, 3], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/conv2/kernel');\n\tbias23 = variable(shape = [1, 128], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/conv2/bias');\n\tconv23 = conv(relu20, kernel23, bias23, padding = [(1, 1), (1, 1)], border = 'constant', stride = [2, 2], dilation = [1, 1]);\n\trelu21 = relu(conv23);\n\tkernel24 = variable(shape = [512, 128, 1, 1], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/conv3/kernel');\n\tbias24 = variable(shape = [1, 512], label = 'resnet_v2_50/block2/unit_4/bottleneck_v2/conv3/bias');\n\tconv24 = conv(relu21, kernel24, bias24, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd7 = add(pool3, conv24);\n\tbeta8 = variable(shape = [1, 512], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/preact/beta');\n\tmoving_mean8 = variable(shape = [1, 512], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/preact/moving_mean');\n\tmoving_variance8 = variable(shape = [1, 512], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/preact/moving_variance');\n\tnorm8 = batch_normalization(add7, mean = moving_mean8, variance = moving_variance8, offset = beta8, scale = 1.0, epsilon = 0.001);\n\trelu22 = relu(norm8);\n\tkernel25 = variable(shape = [1024, 512, 1, 1], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/shortcut/kernel');\n\tbias25 = variable(shape = [1, 1024], label = 
'resnet_v2_50/block3/unit_1/bottleneck_v2/shortcut/bias');\n\tconv25 = conv(relu22, kernel25, bias25, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tkernel26 = variable(shape = [256, 512, 1, 1], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/conv1/kernel');\n\tbias26 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/conv1/bias');\n\tconv26 = conv(relu22, kernel26, bias26, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu23 = relu(conv26);\n\tkernel27 = variable(shape = [256, 256, 3, 3], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/conv2/kernel');\n\tbias27 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/conv2/bias');\n\tconv27 = conv(relu23, kernel27, bias27, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu24 = relu(conv27);\n\tkernel28 = variable(shape = [1024, 256, 1, 1], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/conv3/kernel');\n\tbias28 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_1/bottleneck_v2/conv3/bias');\n\tconv28 = conv(relu24, kernel28, bias28, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd8 = add(conv25, conv28);\n\tbeta9 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/preact/beta');\n\tmoving_mean9 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/preact/moving_mean');\n\tmoving_variance9 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/preact/moving_variance');\n\tnorm9 = batch_normalization(add8, mean = moving_mean9, variance = moving_variance9, offset = beta9, scale = 1.0, epsilon = 0.001);\n\trelu25 = relu(norm9);\n\tkernel29 = variable(shape = [256, 1024, 1, 1], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/conv1/kernel');\n\tbias29 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/conv1/bias');\n\tconv29 = 
conv(relu25, kernel29, bias29, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu26 = relu(conv29);\n\tkernel30 = variable(shape = [256, 256, 3, 3], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/conv2/kernel');\n\tbias30 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/conv2/bias');\n\tconv30 = conv(relu26, kernel30, bias30, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu27 = relu(conv30);\n\tkernel31 = variable(shape = [1024, 256, 1, 1], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/conv3/kernel');\n\tbias31 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_2/bottleneck_v2/conv3/bias');\n\tconv31 = conv(relu27, kernel31, bias31, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd9 = add(add8, conv31);\n\tbeta10 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/preact/beta');\n\tmoving_mean10 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/preact/moving_mean');\n\tmoving_variance10 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/preact/moving_variance');\n\tnorm10 = batch_normalization(add9, mean = moving_mean10, variance = moving_variance10, offset = beta10, scale = 1.0, epsilon = 0.001);\n\trelu28 = relu(norm10);\n\tkernel32 = variable(shape = [256, 1024, 1, 1], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/conv1/kernel');\n\tbias32 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/conv1/bias');\n\tconv32 = conv(relu28, kernel32, bias32, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu29 = relu(conv32);\n\tkernel33 = variable(shape = [256, 256, 3, 3], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/conv2/kernel');\n\tbias33 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/conv2/bias');\n\tconv33 = conv(relu29, kernel33, bias33, padding = 
[], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu30 = relu(conv33);\n\tkernel34 = variable(shape = [1024, 256, 1, 1], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/conv3/kernel');\n\tbias34 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_3/bottleneck_v2/conv3/bias');\n\tconv34 = conv(relu30, kernel34, bias34, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd10 = add(add9, conv34);\n\tbeta11 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/preact/beta');\n\tmoving_mean11 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/preact/moving_mean');\n\tmoving_variance11 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/preact/moving_variance');\n\tnorm11 = batch_normalization(add10, mean = moving_mean11, variance = moving_variance11, offset = beta11, scale = 1.0, epsilon = 0.001);\n\trelu31 = relu(norm11);\n\tkernel35 = variable(shape = [256, 1024, 1, 1], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/conv1/kernel');\n\tbias35 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/conv1/bias');\n\tconv35 = conv(relu31, kernel35, bias35, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu32 = relu(conv35);\n\tkernel36 = variable(shape = [256, 256, 3, 3], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/conv2/kernel');\n\tbias36 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/conv2/bias');\n\tconv36 = conv(relu32, kernel36, bias36, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu33 = relu(conv36);\n\tkernel37 = variable(shape = [1024, 256, 1, 1], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/conv3/kernel');\n\tbias37 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_4/bottleneck_v2/conv3/bias');\n\tconv37 = conv(relu33, kernel37, bias37, padding = [], border = 'constant', stride = [1, 
1], dilation = [1, 1]);\n\tadd11 = add(add10, conv37);\n\tbeta12 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/preact/beta');\n\tmoving_mean12 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/preact/moving_mean');\n\tmoving_variance12 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/preact/moving_variance');\n\tnorm12 = batch_normalization(add11, mean = moving_mean12, variance = moving_variance12, offset = beta12, scale = 1.0, epsilon = 0.001);\n\trelu34 = relu(norm12);\n\tkernel38 = variable(shape = [256, 1024, 1, 1], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/conv1/kernel');\n\tbias38 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/conv1/bias');\n\tconv38 = conv(relu34, kernel38, bias38, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu35 = relu(conv38);\n\tkernel39 = variable(shape = [256, 256, 3, 3], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/conv2/kernel');\n\tbias39 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/conv2/bias');\n\tconv39 = conv(relu35, kernel39, bias39, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu36 = relu(conv39);\n\tkernel40 = variable(shape = [1024, 256, 1, 1], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/conv3/kernel');\n\tbias40 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_5/bottleneck_v2/conv3/bias');\n\tconv40 = conv(relu36, kernel40, bias40, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd12 = add(add11, conv40);\n\tbeta13 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/preact/beta');\n\tmoving_mean13 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/preact/moving_mean');\n\tmoving_variance13 = variable(shape = [1, 1024], label = 
'resnet_v2_50/block3/unit_6/bottleneck_v2/preact/moving_variance');\n\tnorm13 = batch_normalization(add12, mean = moving_mean13, variance = moving_variance13, offset = beta13, scale = 1.0, epsilon = 0.001);\n\trelu37 = relu(norm13);\n\tpool4 = max_pool(add12, size = [1, 1, 1, 1], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel41 = variable(shape = [256, 1024, 1, 1], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/conv1/kernel');\n\tbias41 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/conv1/bias');\n\tconv41 = conv(relu37, kernel41, bias41, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu38 = relu(conv41);\n\tkernel42 = variable(shape = [256, 256, 3, 3], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/conv2/kernel');\n\tbias42 = variable(shape = [1, 256], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/conv2/bias');\n\tconv42 = conv(relu38, kernel42, bias42, padding = [(1, 1), (1, 1)], border = 'constant', stride = [2, 2], dilation = [1, 1]);\n\trelu39 = relu(conv42);\n\tkernel43 = variable(shape = [1024, 256, 1, 1], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/conv3/kernel');\n\tbias43 = variable(shape = [1, 1024], label = 'resnet_v2_50/block3/unit_6/bottleneck_v2/conv3/bias');\n\tconv43 = conv(relu39, kernel43, bias43, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd13 = add(pool4, conv43);\n\tbeta14 = variable(shape = [1, 1024], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/preact/beta');\n\tmoving_mean14 = variable(shape = [1, 1024], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/preact/moving_mean');\n\tmoving_variance14 = variable(shape = [1, 1024], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/preact/moving_variance');\n\tnorm14 = batch_normalization(add13, mean = moving_mean14, variance = moving_variance14, offset = beta14, scale = 1.0, epsilon = 0.001);\n\trelu40 = relu(norm14);\n\tkernel44 = 
variable(shape = [2048, 1024, 1, 1], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/shortcut/kernel');\n\tbias44 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/shortcut/bias');\n\tconv44 = conv(relu40, kernel44, bias44, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tkernel45 = variable(shape = [512, 1024, 1, 1], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/conv1/kernel');\n\tbias45 = variable(shape = [1, 512], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/conv1/bias');\n\tconv45 = conv(relu40, kernel45, bias45, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu41 = relu(conv45);\n\tkernel46 = variable(shape = [512, 512, 3, 3], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/conv2/kernel');\n\tbias46 = variable(shape = [1, 512], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/conv2/bias');\n\tconv46 = conv(relu41, kernel46, bias46, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu42 = relu(conv46);\n\tkernel47 = variable(shape = [2048, 512, 1, 1], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/conv3/kernel');\n\tbias47 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_1/bottleneck_v2/conv3/bias');\n\tconv47 = conv(relu42, kernel47, bias47, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd14 = add(conv44, conv47);\n\tbeta15 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/preact/beta');\n\tmoving_mean15 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/preact/moving_mean');\n\tmoving_variance15 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/preact/moving_variance');\n\tnorm15 = batch_normalization(add14, mean = moving_mean15, variance = moving_variance15, offset = beta15, scale = 1.0, epsilon = 0.001);\n\trelu43 = relu(norm15);\n\tkernel48 = variable(shape = [512, 2048, 1, 1], label = 
'resnet_v2_50/block4/unit_2/bottleneck_v2/conv1/kernel');\n\tbias48 = variable(shape = [1, 512], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/conv1/bias');\n\tconv48 = conv(relu43, kernel48, bias48, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu44 = relu(conv48);\n\tkernel49 = variable(shape = [512, 512, 3, 3], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/conv2/kernel');\n\tbias49 = variable(shape = [1, 512], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/conv2/bias');\n\tconv49 = conv(relu44, kernel49, bias49, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu45 = relu(conv49);\n\tkernel50 = variable(shape = [2048, 512, 1, 1], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/conv3/kernel');\n\tbias50 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_2/bottleneck_v2/conv3/bias');\n\tconv50 = conv(relu45, kernel50, bias50, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd15 = add(add14, conv50);\n\tbeta16 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/preact/beta');\n\tmoving_mean16 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/preact/moving_mean');\n\tmoving_variance16 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/preact/moving_variance');\n\tnorm16 = batch_normalization(add15, mean = moving_mean16, variance = moving_variance16, offset = beta16, scale = 1.0, epsilon = 0.001);\n\trelu46 = relu(norm16);\n\tkernel51 = variable(shape = [512, 2048, 1, 1], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/conv1/kernel');\n\tbias51 = variable(shape = [1, 512], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/conv1/bias');\n\tconv51 = conv(relu46, kernel51, bias51, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu47 = relu(conv51);\n\tkernel52 = variable(shape = [512, 512, 3, 3], label = 
'resnet_v2_50/block4/unit_3/bottleneck_v2/conv2/kernel');\n\tbias52 = variable(shape = [1, 512], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/conv2/bias');\n\tconv52 = conv(relu47, kernel52, bias52, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu48 = relu(conv52);\n\tkernel53 = variable(shape = [2048, 512, 1, 1], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/conv3/kernel');\n\tbias53 = variable(shape = [1, 2048], label = 'resnet_v2_50/block4/unit_3/bottleneck_v2/conv3/bias');\n\tconv53 = conv(relu48, kernel53, bias53, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\tadd16 = add(add15, conv53);\n\tbeta17 = variable(shape = [1, 2048], label = 'resnet_v2_50/postnorm/beta');\n\tmoving_mean17 = variable(shape = [1, 2048], label = 'resnet_v2_50/postnorm/moving_mean');\n\tmoving_variance17 = variable(shape = [1, 2048], label = 'resnet_v2_50/postnorm/moving_variance');\n\tnorm17 = batch_normalization(add16, mean = moving_mean17, variance = moving_variance17, offset = beta17, scale = 1.0, epsilon = 0.001);\n\trelu49 = relu(norm17);\n\treduce1 = mean_reduce(relu49, axes = [2, 3]);\n\tkernel54 = variable(shape = [1000, 2048, 1, 1], label = 'resnet_v2_50/logits/kernel');\n\tbias54 = variable(shape = [1, 1000], label = 'resnet_v2_50/logits/bias');\n\toutput = conv(reduce1, kernel54, bias54, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n}\n"
  },
  {
    "path": "nnef-pyproject/examples/samples/sample.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nnef\n\n\ngraph = nnef.parse_string(\n    \"\"\"\n    version 1.0;\n    graph Net( input ) -> ( output )\n    {\n        input = external(shape = [1,3,224,224]);\n        filter = variable(shape = [32,3,5,5], label = 'conv/filter');\n        output = conv(input, filter);\n    }\n    \"\"\"\n)\n\nprint(nnef.format_graph(graph.name, graph.inputs, graph.outputs, graph.operations, graph.tensors))\n"
  },
  {
    "path": "nnef-pyproject/examples/samples/sample_ext.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nnef\n\n\ndef shuffle_shape(input, groups):\n    assert input[1] % groups == 0, \"input channels ({}) is not divisible by groups ({})\".format(input[1], groups)\n    return input\n\n\ngraph = nnef.parse_string(\n    \"\"\"\n    version 1.0;\n    extension KHR_enable_fragment_definitions;\n\n    fragment shuffle<?>( input: tensor<?>, groups: integer ) -> ( output: tensor<?> );\n\n    graph Net( input ) -> ( output )\n    {\n        input = external(shape = [1,3,224,224]);\n        filter = variable(shape = [32,3,5,5], label = 'conv/filter');\n        conv = conv(input, filter);\n        output = shuffle(conv, groups = 4);\n    }\n    \"\"\"\n)\n\nnnef.infer_shapes(graph, custom_shapes={'shuffle': shuffle_shape})\n\nprint(nnef.format_graph(graph.name, graph.inputs, graph.outputs, graph.operations, graph.tensors))\n"
  },
  {
    "path": "nnef-pyproject/examples/samples/sample_gen.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nnef\nimport numpy as np\nfrom collections import OrderedDict\n\n\ninput = nnef.Tensor('input', dtype='scalar')\nfilter = nnef.Tensor('filter', dtype='scalar', data=np.random.randn(32,3,5,5))\noutput = nnef.Tensor('output', dtype='scalar')\n\nexternal = nnef.Operation('external', attribs={'shape': [1,3,224,224]},\n                          inputs=OrderedDict(),\n                          outputs=OrderedDict([('output', nnef.Identifier('input'))]))\nvariable = nnef.Operation('variable', attribs={'shape': [32,3,5,5], 'label': 'conv/filter'},\n                          inputs=OrderedDict(),\n                          outputs=OrderedDict([('output', nnef.Identifier('filter'))]))\nconv = nnef.Operation('conv', attribs={},\n                      inputs=OrderedDict([('input', nnef.Identifier('input')), ('filter', nnef.Identifier('filter'))]),\n                      outputs=OrderedDict([('output', nnef.Identifier('output'))]))\n\ngraph = nnef.Graph('G', inputs=['input'], outputs=['output'], operations=[external, variable, conv],\n                   tensors={'input': input, 'filter': filter, 'output': output})\n\nnnef.save_graph(graph, 'G', annotate_shapes=True)\n"
  },
  {
    "path": "nnef-pyproject/examples/vgg.txt",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nversion 1.0;\n\ngraph vgg_19( input ) -> ( output )\n{\n\tinput = external(shape = [1, 3, 224, 224]);\n\tkernel1 = variable(shape = [64, 3, 3, 3], label = 'vgg_19/conv1/conv1_1/kernel');\n\tbias1 = variable(shape = [1, 64], label = 'vgg_19/conv1/conv1_1/bias');\n\tconv1 = conv(input, kernel1, bias1, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu1 = relu(conv1);\n\tkernel2 = variable(shape = [64, 64, 3, 3], label = 'vgg_19/conv1/conv1_2/kernel');\n\tbias2 = variable(shape = [1, 64], label = 'vgg_19/conv1/conv1_2/bias');\n\tconv2 = conv(relu1, kernel2, bias2, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu2 = relu(conv2);\n\tpool1 = max_pool(relu2, size = [1, 1, 2, 2], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel3 = variable(shape = [128, 64, 3, 3], label = 'vgg_19/conv2/conv2_1/kernel');\n\tbias3 = variable(shape = [1, 128], label = 'vgg_19/conv2/conv2_1/bias');\n\tconv3 = conv(pool1, kernel3, bias3, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu3 = relu(conv3);\n\tkernel4 = variable(shape = [128, 128, 3, 3], label = 'vgg_19/conv2/conv2_2/kernel');\n\tbias4 = variable(shape = [1, 128], label = 'vgg_19/conv2/conv2_2/bias');\n\tconv4 = conv(relu3, kernel4, bias4, padding = [], border = 'constant', stride = 
[1, 1], dilation = [1, 1]);\n\trelu4 = relu(conv4);\n\tpool2 = max_pool(relu4, size = [1, 1, 2, 2], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel5 = variable(shape = [256, 128, 3, 3], label = 'vgg_19/conv3/conv3_1/kernel');\n\tbias5 = variable(shape = [1, 256], label = 'vgg_19/conv3/conv3_1/bias');\n\tconv5 = conv(pool2, kernel5, bias5, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu5 = relu(conv5);\n\tkernel6 = variable(shape = [256, 256, 3, 3], label = 'vgg_19/conv3/conv3_2/kernel');\n\tbias6 = variable(shape = [1, 256], label = 'vgg_19/conv3/conv3_2/bias');\n\tconv6 = conv(relu5, kernel6, bias6, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu6 = relu(conv6);\n\tkernel7 = variable(shape = [256, 256, 3, 3], label = 'vgg_19/conv3/conv3_3/kernel');\n\tbias7 = variable(shape = [1, 256], label = 'vgg_19/conv3/conv3_3/bias');\n\tconv7 = conv(relu6, kernel7, bias7, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu7 = relu(conv7);\n\tkernel8 = variable(shape = [256, 256, 3, 3], label = 'vgg_19/conv3/conv3_4/kernel');\n\tbias8 = variable(shape = [1, 256], label = 'vgg_19/conv3/conv3_4/bias');\n\tconv8 = conv(relu7, kernel8, bias8, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu8 = relu(conv8);\n\tpool3 = max_pool(relu8, size = [1, 1, 2, 2], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel9 = variable(shape = [512, 256, 3, 3], label = 'vgg_19/conv4/conv4_1/kernel');\n\tbias9 = variable(shape = [1, 512], label = 'vgg_19/conv4/conv4_1/bias');\n\tconv9 = conv(pool3, kernel9, bias9, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu9 = relu(conv9);\n\tkernel10 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv4/conv4_2/kernel');\n\tbias10 = variable(shape = [1, 512], label = 'vgg_19/conv4/conv4_2/bias');\n\tconv10 = 
conv(relu9, kernel10, bias10, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu10 = relu(conv10);\n\tkernel11 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv4/conv4_3/kernel');\n\tbias11 = variable(shape = [1, 512], label = 'vgg_19/conv4/conv4_3/bias');\n\tconv11 = conv(relu10, kernel11, bias11, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu11 = relu(conv11);\n\tkernel12 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv4/conv4_4/kernel');\n\tbias12 = variable(shape = [1, 512], label = 'vgg_19/conv4/conv4_4/bias');\n\tconv12 = conv(relu11, kernel12, bias12, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu12 = relu(conv12);\n\tpool4 = max_pool(relu12, size = [1, 1, 2, 2], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel13 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv5/conv5_1/kernel');\n\tbias13 = variable(shape = [1, 512], label = 'vgg_19/conv5/conv5_1/bias');\n\tconv13 = conv(pool4, kernel13, bias13, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu13 = relu(conv13);\n\tkernel14 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv5/conv5_2/kernel');\n\tbias14 = variable(shape = [1, 512], label = 'vgg_19/conv5/conv5_2/bias');\n\tconv14 = conv(relu13, kernel14, bias14, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu14 = relu(conv14);\n\tkernel15 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv5/conv5_3/kernel');\n\tbias15 = variable(shape = [1, 512], label = 'vgg_19/conv5/conv5_3/bias');\n\tconv15 = conv(relu14, kernel15, bias15, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu15 = relu(conv15);\n\tkernel16 = variable(shape = [512, 512, 3, 3], label = 'vgg_19/conv5/conv5_4/kernel');\n\tbias16 = variable(shape = [1, 512], label = 'vgg_19/conv5/conv5_4/bias');\n\tconv16 = conv(relu15, kernel16, 
bias16, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu16 = relu(conv16);\n\tpool5 = max_pool(relu16, size = [1, 1, 2, 2], padding = [(0, 0), (0, 0), (0, 0), (0, 0)], border = 'ignore', stride = [1, 1, 2, 2]);\n\tkernel17 = variable(shape = [4096, 512, 7, 7], label = 'vgg_19/fc6/kernel');\n\tbias17 = variable(shape = [1, 4096], label = 'vgg_19/fc6/bias');\n\tconv17 = conv(pool5, kernel17, bias17, padding = [(0, 0), (0, 0)], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu17 = relu(conv17);\n\tkernel18 = variable(shape = [4096, 4096, 1, 1], label = 'vgg_19/fc7/kernel');\n\tbias18 = variable(shape = [1, 4096], label = 'vgg_19/fc7/bias');\n\tconv18 = conv(relu17, kernel18, bias18, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n\trelu18 = relu(conv18);\n\tkernel19 = variable(shape = [1000, 4096, 1, 1], label = 'vgg_19/fc8/kernel');\n\tbias19 = variable(shape = [1, 1000], label = 'vgg_19/fc8/bias');\n\toutput = conv(relu18, kernel19, bias19, padding = [], border = 'constant', stride = [1, 1], dilation = [1, 1]);\n}\n"
  },
  {
    "path": "nnef-pyproject/nnef/__init__.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport _nnef\nfrom .parser import *\nfrom .printer import *\nfrom .binary import read_tensor, write_tensor\nfrom .shapes import infer_shapes, _StandardShapeFuncs\nimport os\n\n\nIdentifier = _nnef.Identifier   # subclass of str\nError = _nnef.Error             # subclass of exception\n\nGraph = _nnef.Graph             # namedtuple('Graph', ['name': str, 'tensors': typing.Dict[str, Tensor], 'operations': typing.List[Operation],\n                                #                       'inputs': typing.List[str], 'outputs': typing.List['str']])\nTensor = _nnef.Tensor           # namedtuple('Tensor', ['name': str, 'dtype': str, 'shape': typing.List[int], 'data': numpy.ndarray,\n                                #                       'quantization': Dict[str, object]])\nOperation = _nnef.Operation     # namedtuple('Operation', ['name': str, 'attribs': OrderedDict[str, object], 'inputs': OrderedDict[str, object],\n                                #                           'outputs': OrderedDict[str, object], 'dtype': str])\n\n\nTensor.__new__.__defaults__ = (None, None, None)\nOperation.__new__.__defaults__ = (None,)\n\n\nStandardOperations = set(_StandardShapeFuncs.keys())\n\n\ndef load_graph(path, stdlib=None, lowered=None, load_variables=True):\n    if os.path.isfile(path):\n        return parse_file(path, stdlib=stdlib, lowered=lowered)\n\n    graph_fn 
= os.path.join(path, 'graph.nnef')\n    quant_fn = os.path.join(path, 'graph.quant')\n\n    graph = parse_file(graph_fn, quant_fn if os.path.isfile(quant_fn) else None, stdlib=stdlib, lowered=lowered)\n\n    if load_variables:\n        for operation in graph.operations:\n            if operation.name == 'variable':\n                variable_filename = operation.attribs['label'] + '.dat'\n                if variable_filename.startswith('/'):\n                    variable_filename = variable_filename[1:]\n                variable_filename = os.path.join(path, variable_filename)\n                tensor_name = operation.outputs['output']\n                with open(variable_filename, 'rb') as variable_file:\n                    data = read_tensor(variable_file)\n\n                data_shape = list(data.shape)\n                shape = operation.attribs['shape']\n                if data_shape != shape:\n                    raise _nnef.Error('shape {} in variable file does not match shape {} defined in network structure'\n                                      .format(data_shape, shape))\n\n                tensor = graph.tensors[tensor_name]\n                graph.tensors[tensor_name] = _nnef.Tensor(tensor.name, tensor.dtype, data_shape, data, tensor.quantization)\n\n    return graph\n\n\ndef save_graph(graph, path, annotate_shapes=False):\n    if os.path.exists(path):\n        raise RuntimeError(\"folder already exists: '{}'\".format(path))\n\n    os.makedirs(path)\n\n    text = format_graph(graph.name, graph.inputs, graph.outputs, graph.operations, graph.tensors, annotate_shapes=annotate_shapes)\n\n    with open(os.path.join(path, 'graph.nnef'), mode='w') as file:\n        file.write('version 1.0;\\n\\n')\n        file.write(text)\n\n    for operation in graph.operations:\n        if operation.name == 'variable':\n            variable_filename = operation.attribs['label'] + '.dat'\n            if variable_filename.startswith('/'):\n                variable_filename = 
variable_filename[1:]\n            variable_filename = os.path.join(path, variable_filename)\n            os.makedirs(os.path.split(variable_filename)[0], exist_ok=True)\n\n            tensor_name = operation.outputs['output']\n            tensor = graph.tensors[tensor_name]\n            if tensor.data is not None:\n                with open(variable_filename, 'wb') as variable_file:\n                    write_tensor(variable_file, tensor.data, quantized=bool(tensor.quantization))\n\n\nclass Session:\n\n    def __init__(self, path, stdlib=None, lowered=None):\n        self._handle = _nnef.create_session(path, stdlib=stdlib, lowered=lowered)\n\n    def __enter__(self):\n        return self\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        _nnef.cleanup_session(self._handle)\n\n    def __call__(self, *inputs):\n        return _nnef.execute_session(self._handle, tuple(inputs))\n"
  },
  {
    "path": "nnef-pyproject/nnef/binary.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\n\nimport numpy as np\n\n\nclass ItemType:\n    FLOAT = 0\n    UINT = 1\n    QUINT = 2\n    QINT = 3\n    INT = 4\n    BOOL = 5\n\n\ndef _numpy_dtype_split(dtype):\n    splits = {\n        np.float16: (ItemType.FLOAT, 16),\n        np.float32: (ItemType.FLOAT, 32),\n        np.float64: (ItemType.FLOAT, 64),\n        np.int8: (ItemType.INT, 8),\n        np.uint8: (ItemType.UINT, 8),\n        np.int16: (ItemType.INT, 16),\n        np.uint16: (ItemType.UINT, 16),\n        np.int32: (ItemType.INT, 32),\n        np.uint32: (ItemType.UINT, 32),\n        np.int64: (ItemType.INT, 64),\n        np.uint64: (ItemType.UINT, 64),\n        np.bool_: (ItemType.BOOL, 1),\n    }\n    split = splits.get(dtype.type)\n    if split is None:\n        raise TypeError('unsupported tensor dtype: ' + str(dtype))\n    return split\n\n\ndef _numpy_dtype_make(item_type, bits):\n    dtypes = {\n        (ItemType.FLOAT, 16): np.float16,\n        (ItemType.FLOAT, 32): np.float32,\n        (ItemType.FLOAT, 64): np.float64,\n        (ItemType.INT, 8): np.int8,\n        (ItemType.INT, 16): np.int16,\n        (ItemType.INT, 32): np.int32,\n        (ItemType.INT, 64): np.int64,\n        (ItemType.UINT, 8): np.uint8,\n        (ItemType.UINT, 16): np.uint16,\n        (ItemType.UINT, 32): np.uint32,\n        (ItemType.UINT, 64): np.uint64,\n        (ItemType.QINT, 8): 
np.int8,\n        (ItemType.QINT, 16): np.int16,\n        (ItemType.QINT, 32): np.int32,\n        (ItemType.QINT, 64): np.int64,\n        (ItemType.QUINT, 8): np.uint8,\n        (ItemType.QUINT, 16): np.uint16,\n        (ItemType.QUINT, 32): np.uint32,\n        (ItemType.QUINT, 64): np.uint64,\n        (ItemType.BOOL, 1): np.bool_,\n    }\n    dtype = dtypes.get((item_type, bits))\n    if dtype is None:\n        raise ValueError('unsupported combination of item type ({}) and bits per item ({})'.format(item_type, bits))\n    return dtype\n\n\nMaxTensorRank = 8\n\n\ndef _rank_of(shape):\n    rank = len(shape)\n    while rank > 1 and shape[rank - 1] == 1:\n        rank -= 1\n    return rank\n\n\n_is_little_endian = sys.byteorder == 'little'\n\n\ndef _tofile(data, file):\n    if not _is_little_endian and data.dtype != np.uint8 and data.dtype != np.int8:\n        data = data.byteswap()\n    if file.seekable():\n        data.tofile(file)\n    else:\n        file.write(data.tobytes())\n\n\ndef _fromfile(file, dtype, count):\n    if file.seekable():\n        data = np.fromfile(file, dtype, count)\n    else:\n        data = np.frombuffer(file.read(count * np.dtype(dtype).itemsize), dtype, count)\n\n    if not _is_little_endian and data.dtype != np.uint8 and data.dtype != np.int8:\n        data = data.byteswap()\n    return data\n\n\ndef write_tensor(file, tensor, quantized=False, version=(1, 0)):\n    if isinstance(file, str):\n        raise ValueError('file parameter must be a file object not a file name')\n\n    _tofile(np.asarray([0x4E, 0xEF, version[0], version[1]], dtype=np.uint8), file)\n\n    item_type, bits = _numpy_dtype_split(tensor.dtype)\n    if quantized:\n        if item_type == ItemType.INT:\n            item_type = ItemType.QINT\n        elif item_type == ItemType.UINT:\n            item_type = ItemType.QUINT\n        else:\n            raise ValueError(\"invalid tensor dtype '{}' for quantized tensor\".format(tensor.dtype))\n\n    count = 
int(np.prod(tensor.shape))\n    data_length = (count + 7) // 8 if bits == 1 else count * (bits // 8)\n    _tofile(np.asarray([data_length, tensor.ndim], dtype=np.uint32), file)\n\n    if tensor.ndim > MaxTensorRank:\n        raise ValueError('tensor rank exceeds maximum possible value of {}'.format(MaxTensorRank))\n\n    _tofile(np.asarray(tensor.shape, dtype=np.uint32), file)\n    _tofile(np.asarray([0] * (MaxTensorRank - tensor.ndim), dtype=np.uint32), file)\n\n    _tofile(np.asarray([bits, item_type], dtype=np.uint32), file)\n    _tofile(np.asarray([0] * 19, dtype=np.uint32), file)\n\n    data = np.packbits(tensor) if bits == 1 else tensor\n    _tofile(data, file)\n\n\ndef read_tensor(file, return_quantization=False):\n    if isinstance(file, str):\n        raise ValueError('file parameter must be a file object not a file name')\n\n    [magic1, magic2, major, minor] = _fromfile(file, dtype=np.uint8, count=4)\n    if magic1 != 0x4E or magic2 != 0xEF:\n        raise ValueError('not a valid NNEF file')\n\n    if major > 1 or minor > 0:\n        raise ValueError('unsupported file version')\n\n    [data_length, rank] = _fromfile(file, dtype=np.uint32, count=2)\n\n    if file.seekable():\n        header_size = 128\n        file_size = os.fstat(file.fileno()).st_size\n        if file_size != header_size + data_length:\n            raise ValueError('invalid tensor file; size does not match header info')\n\n    if rank > MaxTensorRank:\n        raise ValueError('tensor rank exceeds maximum possible value of {}'.format(MaxTensorRank))\n\n    shape = _fromfile(file, dtype=np.uint32, count=MaxTensorRank)\n    shape = shape[:rank]\n\n    [bits, item_type] = _fromfile(file, dtype=np.uint32, count=2)\n    _reserved = _fromfile(file, dtype=np.uint32, count=19)\n    if item_type == ItemType.UINT and _reserved[0] != 0:\n        item_type = ItemType.INT\n\n    quantized = item_type == ItemType.QINT or item_type == ItemType.QUINT\n    count = int(np.prod(shape))\n    if bits == 
1:\n        byte_count = int((count + 7) // 8)\n        data = _fromfile(file, dtype=np.uint8, count=byte_count)\n        if len(data) != byte_count:\n            raise ValueError('could not read tensor data')\n        data = np.unpackbits(data).astype(bool)[:count]\n    else:\n        data = _fromfile(file, dtype=_numpy_dtype_make(item_type, bits), count=count)\n        if len(data) != count:\n            raise ValueError('could not read tensor data')\n\n    tensor = data.reshape(shape)\n\n    return (tensor, quantized) if return_quantization else tensor\n\n\ndef _write_tensor_provisional(file, tensor, version=(1, 0)):\n    _tofile(np.asarray([0x4E, 0xEF, version[0], version[1]], dtype=np.uint8), file)\n\n    header_length = 4 + 4 + (tensor.ndim + 1) * 4 + 4\n    _tofile(np.asarray([header_length], dtype=np.uint32), file)\n\n    _tofile(np.asarray([tensor.ndim], dtype=np.uint32), file)\n    _tofile(np.asarray(tensor.shape, dtype=np.uint32), file)\n\n    dtype, bits = _numpy_dtype_split(tensor.dtype)\n    _tofile(np.asarray([dtype, bits], dtype=np.uint8), file)\n\n    _tofile(np.asarray([0], dtype=np.uint16), file)\n\n    _tofile(tensor, file)\n\n\ndef _read_tensor_provisional(file):\n    [magic1, magic2, major, minor] = _fromfile(file, dtype=np.uint8, count=4)\n    if magic1 != 0x4E or magic2 != 0xEF:\n        raise ValueError('not a valid NNEF file')\n    if major > 1 or minor > 0:\n        raise ValueError('unsupported file version')\n\n    [_header_length] = _fromfile(file, dtype=np.uint32, count=1)\n\n    [rank] = _fromfile(file, dtype=np.uint32, count=1)\n    shape = _fromfile(file, dtype=np.uint32, count=rank)\n\n    [code, bits] = _fromfile(file, dtype=np.uint8, count=2)\n    [qlen] = _fromfile(file, dtype=np.uint16, count=1)\n\n    assert (code == 0)\n    assert (bits == 32)\n    assert (qlen == 0)\n\n    return _fromfile(file, dtype=np.float32, count=int(np.prod(shape))).reshape(shape)\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/CMakeLists.txt",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ncmake_minimum_required(VERSION 3.0)\n\nproject(nnef CXX)\n\n# build information\nmessage(STATUS \"Build Configuration: ${CMAKE_BUILD_TYPE}\")\nmessage(STATUS \"Build executables in: ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}\")\n\n# nnef library\nadd_library(${PROJECT_NAME}\n        include/cnnef.h\n        include/nnef.h\n        include/nnef/common/binary.h\n        include/nnef/common/dictionary.h\n        include/nnef/common/error.h\n        include/nnef/common/lexer.h\n        include/nnef/common/parser.h\n        include/nnef/common/prototype.h\n        include/nnef/common/shapes.h\n        include/nnef/common/typespec.h\n        include/nnef/common/typeutils.h\n        include/nnef/common/value.h\n        include/nnef/comp/comp_parser.h\n        include/nnef/comp/evaluation.h\n        include/nnef/comp/expression.h\n        include/nnef/comp/fragment.h\n        include/nnef/comp/stdlib_source.h\n        include/nnef/flat/flat_parser.h\n        include/nnef/flat/quant_parser.h\n        include/nnef/flat/stdlib_protos.h\n        src/nnef.cpp\n        src/cnnef.cpp\n        )\n\n# build interface include dir is used when this cmake is included into\n# a larger project\n# install interface include dir will be put into the generated cmake config file\n# during install step\ntarget_include_directories(${PROJECT_NAME}\n        PUBLIC 
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>\n        PUBLIC $<INSTALL_INTERFACE:include>)\n\nset_target_properties(${PROJECT_NAME} PROPERTIES CXX_STANDARD 11)\nset_target_properties(${PROJECT_NAME} PROPERTIES DEBUG_POSTFIX _d)\n\ntarget_link_libraries(${PROJECT_NAME})\n\n# install the library\ninstall(TARGETS ${PROJECT_NAME} EXPORT ${PROJECT_NAME}\n        ARCHIVE DESTINATION lib\n        LIBRARY DESTINATION lib\n        RUNTIME DESTINATION bin)\n\n# then the headers\ninstall(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/include DESTINATION .)\n\n# generate and install cmake config file for find_package\ninstall(EXPORT ${PROJECT_NAME} DESTINATION lib/cmake/${PROJECT_NAME})\n\n# generate an auxiliary config file also needed by find_package\n# it just includes the previously generated nnef.cmake\nfile(WRITE ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake \"include(\\${CMAKE_CURRENT_LIST_DIR}/${PROJECT_NAME}.cmake)\")\ninstall(FILES ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake DESTINATION lib/cmake/${PROJECT_NAME})\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/cnnef.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _CNNEF_H_\n#define _CNNEF_H_\n\n#include <stddef.h>\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifdef    __cplusplus\n#if _WIN32\n#define EXPORTDLL extern \"C\" __declspec(dllexport)\n#else\n#define EXPORTDLL extern \"C\"\n#endif\n#else  // __cplusplus\n#if _WIN32\n#define EXPORTDLL __declspec(dllexport)\n#else\n#define EXPORTDLL\n#endif\n#endif // __cplusplus\n\n    \n    typedef void* nnef_graph_t;\n    typedef void* nnef_tensor_t;\n\n    /*\n     * Load NNEF graph from file\n     *\n     * @param path: the path to the NNEF model folder\n     * @param error: the string to store the error message if any\n     *\n     * @return NNEF graph\n     */\n    EXPORTDLL nnef_graph_t nnef_graph_load( const char* path, char *error );\n\n    /*\n     * Copy an NNEF graph\n     *\n     * @param graph: NNEF graph\n     *\n     * @return the copy of NNEF graph\n     */\n    EXPORTDLL nnef_graph_t nnef_graph_copy( nnef_graph_t graph );\n    \n    /*\n     * Release NNEF graph\n     *\n     * @param graph: NNEF graph\n     */\n    EXPORTDLL void nnef_graph_release( nnef_graph_t graph );\n    \n    /*\n     * Perform shape inference on the graph\n     *\n     * @param graph: the graph object\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     
*/\n    EXPORTDLL int nnef_graph_infer_shapes( nnef_graph_t graph, char *error );\n    \n    /*\n     * Allocate tensor buffers in the graph\n     *\n     * @param graph: the graph object\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    EXPORTDLL int nnef_graph_allocate_buffers( nnef_graph_t graph, char *error );\n    \n    /*\n     * Execute a graph\n     *\n     * @param graph: the graph object\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    EXPORTDLL int nnef_graph_execute( nnef_graph_t graph, char *error );\n    \n    /*\n     * Query input names from NNEF graph\n     *\n     * @param graph: NNEF graph\n     * @param inputs: input names\n     *\n     * @return input count\n     */\n    EXPORTDLL size_t nnef_graph_input_names( nnef_graph_t graph, const char** inputs );\n\n    /*\n     * Query output names from NNEF graph\n     *\n     * @param graph: NNEF graph\n     * @param outputs: output names\n     *\n     * @return output count\n     */\n    EXPORTDLL size_t nnef_graph_output_names( nnef_graph_t graph, const char** outputs );\n\n    /*\n     * Find tensor in NNEF graph by name\n     *\n     * @param graph: NNEF graph\n     * @param tensor_name: tensor name\n     *\n     * @return tensor\n     */\n    EXPORTDLL nnef_tensor_t nnef_graph_find_tensor( nnef_graph_t graph, const char* tensor_name );\n    \n    /*\n     * Query name of an NNEF graph\n     *\n     * @param graph: NNEF graph\n     *\n     * @return graph name\n     */\n    EXPORTDLL const char* nnef_graph_name( nnef_graph_t graph );\n\n    \n    \n    /*\n     * Create a new tensor\n     *\n     * @return tensor\n     */\n    EXPORTDLL nnef_tensor_t nnef_tensor_create(void);\n\n    /*\n     * Release a tensor\n     */\n    EXPORTDLL void nnef_tensor_release( nnef_tensor_t tensor );\n    \n    /*\n  
   * Query tensor name\n     *\n     * @param tensor: tensor\n     *\n     * @return tensor name\n     */\n    EXPORTDLL const char* nnef_tensor_name( nnef_tensor_t tensor );\n    \n    /*\n     * Query tensor data-type\n     *\n     * @param tensor: tensor\n     *\n     * @return data-type name\n     */\n    EXPORTDLL const char* nnef_tensor_dtype( nnef_tensor_t tensor );\n    \n    /*\n     * Query tensor rank\n     *\n     * @param tensor: tensor\n     *\n     * @return tensor rank\n     */\n    EXPORTDLL size_t nnef_tensor_rank( nnef_tensor_t tensor );\n\n    /*\n     * Query tensor dims\n     *\n     * @param tensor: tensor\n     *\n     * @return tensor dims\n     */\n    EXPORTDLL const int* nnef_tensor_dims( nnef_tensor_t tensor );\n    \n    /*\n     * Query tensor data\n     *\n     * @param tensor: tensor\n     *\n     * @return tensor data\n     */\n    EXPORTDLL void* nnef_tensor_data( nnef_tensor_t tensor );\n\n    /*\n     * Read tensor from binary file\n     *\n     * @param path: the name of the file to read from\n     * @param tensor: tensor\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    EXPORTDLL int nnef_tensor_read( const char* path, nnef_tensor_t tensor, char *error );\n\n    /*\n     * Write tensor to binary file\n     *\n     * @param path: the name of the file to write to\n     * @param tensor: tensor\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    EXPORTDLL int nnef_tensor_write( const char* path, nnef_tensor_t tensor, char *error );\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/binary.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_BINARY_H_\n#define _NNEF_BINARY_H_\n\n#include \"error.h\"\n#include <cstdint>\n#include <functional>\n#include <algorithm>\n#include <numeric>\n#include <iostream>\n#include <string>\n\n\nnamespace nnef\n{\n\n    struct TensorHeader\n    {\n        enum { MaxRank = 8 };\n\n        enum ItemType { Float, Uint, Quint, Qint, Int, Bool };\n\n        uint8_t magic[2];\n        uint8_t version[2];\n        uint32_t data_length;\n        uint32_t rank;\n        uint32_t extents[MaxRank];\n        uint32_t bits_per_item;\n        uint32_t item_type;\n        uint32_t reserved[19];\n    };\n\n\n    template<typename In, typename Out>\n    void copy_and_cast_n( In* input, size_t n, Out* output )\n    {\n        for ( size_t i = 0; i < n; ++i )\n        {\n            *output++ = (Out)*input++;\n        }\n    }\n\n\n    template<typename T>\n    inline void fill_tensor_header( TensorHeader& header, const size_t version[2], const size_t rank, const T* extents,\n                                   const size_t bits_per_item, const TensorHeader::ItemType item_type )\n    {\n        const char* magic = \"N\\xEF\";\n\n        std::fill_n((uint8_t*)&header, sizeof(header), (uint8_t)0);\n\n        header.magic[0] = (uint8_t)magic[0];\n        header.magic[1] = (uint8_t)magic[1];\n\n        header.version[0] = (uint8_t)version[0];\n   
     header.version[1] = (uint8_t)version[1];\n\n        if ( rank > TensorHeader::MaxRank )\n        {\n            throw Error(\"tensor rank %d exceeds maximum possible value (%d)\", (int)rank, (int)TensorHeader::MaxRank);\n        }\n\n        const uint32_t item_count = std::accumulate(extents, extents + rank, (uint32_t)1, std::multiplies<uint32_t>());\n        header.data_length = (uint32_t)((item_count * bits_per_item + 7) / 8);\n        header.bits_per_item = (uint32_t)bits_per_item;\n        header.rank = (uint32_t)rank;\n        header.item_type = item_type;\n\n        std::copy_n(extents, rank, header.extents);\n    }\n\n    inline void validate_tensor_header( const TensorHeader& header )\n    {\n        if ( header.magic[0] != 'N' || header.magic[1] != 0xEF )\n        {\n            throw Error(\"invalid magic number in tensor binary\");\n        }\n        if ( header.version[0] != 1 || header.version[1] != 0 )\n        {\n            throw Error(\"unknown version number %d.%d\", (int)header.version[0], (int)header.version[1]);\n        }\n        if ( header.rank > TensorHeader::MaxRank )\n        {\n            throw Error(\"tensor rank %d exceeds maximum allowed rank (%d)\", (int)header.rank, (int)TensorHeader::MaxRank);\n        }\n\n        const size_t item_count = std::accumulate(header.extents, header.extents + header.rank, (size_t)1, std::multiplies<size_t>());\n        if ( (size_t)header.data_length != (item_count * header.bits_per_item + 7) / 8 )\n        {\n            throw Error(\"data length is not compatible with extents and bits per item\");\n        }\n\n        if ( (header.item_type & 0xffff0000) == 0 )     // Khronos-defined item type\n        {\n            const uint32_t code = (header.item_type & 0x0000ffff);\n\n            switch ( code )\n            {\n                case TensorHeader::Float:\n                {\n                    if ( header.bits_per_item != 16 && header.bits_per_item != 32 && header.bits_per_item != 64 
)\n                    {\n                        throw Error(\"invalid bits per item for float item type: %d\", (int)header.bits_per_item);\n                    }\n                    break;\n                }\n                case TensorHeader::Int:\n                case TensorHeader::Uint:\n                case TensorHeader::Quint:\n                case TensorHeader::Qint:\n                {\n                    if ( header.bits_per_item > 64 )\n                    {\n                        throw Error(\"invalid bits per item for integer item type: %d\", (int)header.bits_per_item);\n                    }\n                    break;\n                }\n                case TensorHeader::Bool:\n                {\n                    if ( header.bits_per_item != 1 && header.bits_per_item != 8 )\n                    {\n                        throw Error(\"invalid bits per item for bool item type: %d\", (int)header.bits_per_item);\n                    }\n                    break;\n                }\n                default:\n                {\n                    throw Error(\"unknown Khronos-defined item type code: %x\", (int)code);\n                }\n            }\n        }\n    }\n\n    inline void pack_bits( const size_t n, const bool* data, char* bytes )\n    {\n        for ( size_t i = 0; i < n; ++i )\n        {\n            bytes[i / 8] |= (data[i] << (7 - (i % 8)));\n        }\n    }\n\n    inline void unpack_bits( const size_t n, const char* bytes, bool* data )\n    {\n        for ( size_t i = 0; i < n; ++i )\n        {\n            data[i] = (bytes[i / 8] >> (7 - (i % 8))) & 0x01;\n        }\n    }\n    \n    inline void from_bytes( const char* bytes, const size_t count, const size_t bits_per_item, float* data )\n    {\n        if ( bits_per_item == 32 )\n        {\n            copy_and_cast_n((const float*)bytes, count, data);\n        }\n        else if ( bits_per_item == 64 )\n        {\n            copy_and_cast_n((const double*)bytes, count, 
data);\n        }\n        else\n        {\n            throw std::runtime_error(\"cannot load float data of \" + std::to_string(bits_per_item) + \" bits per item\");\n        }\n    }\n\n    inline void from_bytes( const char* bytes, const size_t count, const size_t bits_per_item, int* data, const bool is_signed )\n    {\n        if ( bits_per_item == 8 )\n        {\n            if ( is_signed )\n            {\n                copy_and_cast_n((const int8_t*)bytes, count, data);\n            }\n            else\n            {\n                copy_and_cast_n((const uint8_t*)bytes, count, data);\n            }\n        }\n        else if ( bits_per_item == 16 )\n        {\n            if ( is_signed )\n            {\n                copy_and_cast_n((const int16_t*)bytes, count, data);\n            }\n            else\n            {\n                copy_and_cast_n((const uint16_t*)bytes, count, data);\n            }\n        }\n        else if ( bits_per_item == 32 )\n        {\n            if ( is_signed )\n            {\n                copy_and_cast_n((const int32_t*)bytes, count, data);\n            }\n            else\n            {\n                copy_and_cast_n((const uint32_t*)bytes, count, data);\n            }\n        }\n        else if ( bits_per_item == 64 )\n        {\n            if ( is_signed )\n            {\n                copy_and_cast_n((const int64_t*)bytes, count, data);\n            }\n            else\n            {\n                copy_and_cast_n((const uint64_t*)bytes, count, data);\n            }\n        }\n        else\n        {\n            throw std::runtime_error(\"cannot load int data of \" + std::to_string(bits_per_item) + \" bits per item\");\n        }\n    }\n\n    inline void from_bytes( const char* bytes, const size_t count, const size_t bits_per_item, bool* data )\n    {\n        if ( bits_per_item == 1 )\n        {\n            unpack_bits(count, bytes, data);\n        }\n        else if ( bits_per_item == 8 )\n        
{\n            copy_and_cast_n((const int8_t*)bytes, count, data);\n        }\n        else\n        {\n            throw std::runtime_error(\"cannot load bool data of \" + std::to_string(bits_per_item) + \" bits per item\");\n        }\n    }\n\n    inline void to_bytes( const float* data, const size_t count, char* bytes )\n    {\n        copy_and_cast_n(data, count, (float*)bytes);\n    }\n\n    inline void to_bytes( const int* data, const size_t count, char* bytes, const bool as_signed )\n    {\n        if ( as_signed )\n        {\n            copy_and_cast_n(data, count, (int32_t*)bytes);\n        }\n        else\n        {\n            copy_and_cast_n(data, count, (uint32_t*)bytes);\n        }\n    }\n\n    inline void to_bytes( const bool* data, const size_t count, char* bytes )\n    {\n        pack_bits(count, data, bytes);\n    }\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/dictionary.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_DICTIONARY_H_\n#define _NNEF_DICTIONARY_H_\n\n#include <string>\n#include <map>\n\n\nnamespace nnef\n{\n\n    template<typename T>\n    using Dictionary = std::map<std::string,T>;\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/error.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_ERROR_H_\n#define _NNEF_ERROR_H_\n\n#include <exception>\n#include <cstdarg>\n#include <string>\n\n\nnamespace nnef\n{\n    \n    class Error : public std::exception\n    {\n    public:\n        \n        struct Position\n        {\n            unsigned line;\n            unsigned column;\n            const char* filename;\n            const Position* origin;\n        };\n        \n    public:\n        \n        template<class... Args>\n        Error( const Position& position, const char* format, Args&&... args )\n        : _position(position), _message(formatString(format, std::forward<Args>(args)...))\n        {\n        }\n        \n        template<class... Args>\n        Error( const char* format, Args&&... args )\n        : _position({0,0,nullptr,nullptr}), _message(formatString(format, std::forward<Args>(args)...))\n        {\n        }\n        \n        virtual const char* what() const noexcept\n        {\n            return _message.c_str();\n        }\n        \n        const Position& position() const\n        {\n            return _position;\n        }\n        \n    public:\n        \n        static std::string formatString( const char* fmt, ... 
)\n        {\n            va_list args;\n            \n            va_start(args, fmt);\n            auto length = vsnprintf(nullptr, 0, fmt, args);\n            va_end(args);\n            \n            if ( length < 0 )\n            {\n                throw std::logic_error(\"string formatting error\");\n            }\n            \n            std::string str(length, '\\0');\n            \n            va_start(args, fmt);\n            vsnprintf((char*)str.data(), length + 1, fmt, args);\n            va_end(args);\n            \n            return str;\n        }\n        \n    private:\n        \n        Position _position;\n        std::string _message;\n    };\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/lexer.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_LEXER_H_\n#define _NNEF_LEXER_H_\n\n#include \"error.h\"\n#include <iostream>\n#include <cctype>\n#include <string>\n#include <map>\n\n\nnamespace nnef\n{\n    \n    class Lexer\n    {\n    public:\n        \n        typedef Error::Position Position;\n        \n    public:\n        \n        enum Token\n        {\n            Eof,\n            Version,\n            Extension,\n            Identifier,\n            Characters,\n            Decimal,\n            Fractional,\n            Graph,\n            Fragment,\n            Tensor,\n            Integer,\n            Scalar,\n            Logical,\n            String,\n            True,\n            False,\n            For,\n            In,\n            If,\n            Else,\n            Yield,\n            LengthOf,\n            ShapeOf,\n            RangeOf,\n            Arrow,\n            And,\n            Or,\n            Le,\n            Ge,\n            Eq,\n            Ne,\n        };\n        \n        static std::string tokenString( int token )\n        {\n            static const std::string strings[] =\n            {\n                \"eof\",\n                \"version\",\n                \"extension\",\n                \"identifier\",\n                \"literal\",\n                \"decimal\",\n                \"fractional\",\n                
\"graph\",\n                \"fragment\",\n                \"tensor\",\n                \"integer\",\n                \"scalar\",\n                \"logical\",\n                \"string\",\n                \"true\",\n                \"false\",\n                \"for\",\n                \"in\",\n                \"if\",\n                \"else\",\n                \"yield\",\n                \"length_of\",\n                \"shape_of\",\n                \"range_of\",\n                \"->\",\n                \"&&\",\n                \"||\",\n                \"<=\",\n                \">=\",\n                \"==\",\n                \"!=\",\n            };\n            \n            char ch = (char)token;\n            return token <= Ne ? strings[token] : std::string(&ch, 1);\n        }\n        \n        static bool isType( int token )\n        {\n            return token >= Tensor && token <= String;\n        }\n\n        static bool isKeyword( int token )\n        {\n            return token >= Fragment && token <= False;\n        }\n        \n        static bool isOperator( int token )\n        {\n            return token >= LengthOf;\n        }\n        \n    public:\n        \n        Lexer( std::istream& input, const char* filename )\n        : _input(input), _position({1,1,filename,nullptr}), _token(Eof)\n        {\n        }\n        \n        void next()\n        {\n            _position.column += (unsigned)_string.length() + 2 * (_token == Characters);\n            \n            skipSpace();\n            skipComment();\n\n            _string.clear();\n\n            if ( _input.peek() == EOF )\n            {\n                _token = Eof;\n            }\n            else if ( _input.peek() == '\\'' || _input.peek() == '\\\"' )\n            {\n                _token = getCharacters();\n            }\n            else if ( std::isalpha(_input.peek()) || _input.peek() == '_' )\n            {\n                _token = getIdentifier();\n            }\n            
else if ( std::isdigit(_input.peek()) )\n            {\n                _token = getNumber();\n            }\n            else\n            {\n                _token = getOperator();\n            }\n        }\n        \n        int token() const\n        {\n            return _token;\n        }\n        \n        const std::string& string() const\n        {\n            return _string;\n        }\n        \n        const Position& position() const\n        {\n            return _position;\n        }\n\n        void readToken( int token )\n        {\n            if ( _token != token )\n            {\n                throw Error(_position, \"expected token '%s', found '%s'\", tokenString(token).c_str(), tokenString(_token).c_str());\n            }\n            next();\n        }\n\n        bool readIfToken( int token )\n        {\n            if ( _token == token )\n            {\n                next();\n                return true;\n            }\n            return false;\n        }\n        \n    private:\n        \n        Token getCharacters()\n        {\n            char delim = _input.get();\n            while ( _input.peek() != delim && _input.peek() != EOF )\n            {\n                _string += (char)_input.get();\n            }\n            if ( _input.peek() == EOF )\n            {\n                const Position position = { _position.line, _position.column + (unsigned)_string.length() + 1, _position.filename, nullptr };\n                throw Error(position, \"expected %c\", delim);\n            }\n            _input.get();\n            return Token::Characters;\n        }\n        \n        Token getIdentifier()\n        {\n            static const std::map<std::string,Token> keywords =\n            {\n                std::make_pair(\"version\", Token::Version),\n                std::make_pair(\"extension\", Token::Extension),\n                std::make_pair(\"graph\", Token::Graph),\n                std::make_pair(\"fragment\", 
Token::Fragment),\n                std::make_pair(\"tensor\", Token::Tensor),\n                std::make_pair(\"integer\", Token::Integer),\n                std::make_pair(\"scalar\", Token::Scalar),\n                std::make_pair(\"logical\", Token::Logical),\n                std::make_pair(\"string\", Token::String),\n                std::make_pair(\"true\", Token::True),\n                std::make_pair(\"false\", Token::False),\n                std::make_pair(\"for\", Token::For),\n                std::make_pair(\"in\", Token::In),\n                std::make_pair(\"if\", Token::If),\n                std::make_pair(\"else\", Token::Else),\n                std::make_pair(\"yield\", Token::Yield),\n                std::make_pair(\"length_of\", Token::LengthOf),\n                std::make_pair(\"shape_of\", Token::ShapeOf),\n                std::make_pair(\"range_of\", Token::RangeOf),\n            };\n            \n            do\n            {\n                _string += _input.get();\n            }\n            while ( std::isalnum(_input.peek()) || _input.peek() == '_' );\n            \n            auto it = keywords.find(_string);\n            return it == keywords.end() ? Token::Identifier : it->second;\n        }\n        \n        Token getNumber()\n        {\n            bool real = false;\n            \n            do\n            {\n                _string += _input.get();\n                \n                if ( _input.peek() == '.' 
&& !real )\n                {\n                    _string += _input.get();\n                    real = true;\n                }\n            }\n            while ( std::isdigit(_input.peek()) );\n            \n            if ( _input.peek() == 'e' || _input.peek() == 'E' )\n            {\n                _string += _input.get();\n                if ( _input.peek() == '+' || _input.peek() == '-' )\n                {\n                    _string += _input.get();\n                }\n                if ( !std::isdigit(_input.peek()) )\n                {\n                    const Position position = { _position.line, _position.column + (unsigned)_string.length(), _position.filename, nullptr };\n                    throw Error(position, \"expected digit\");\n                }\n                while ( std::isdigit(_input.peek()) )\n                {\n                    _string += _input.get();\n                }\n                real = true;\n            }\n            \n            return real ? Token::Fractional : Token::Decimal;\n        }\n        \n        int getOperator()\n        {\n            int token = _input.get();\n            _string += (char)token;\n            \n            if ( _input.peek() == '=' )\n            {\n                if ( token == '<' )\n                {\n                    _string += (char)_input.get();\n                    token = Le;\n                }\n                else if ( token == '>' )\n                {\n                    _string += (char)_input.get();\n                    token = Ge;\n                }\n                else if ( token == '=' )\n                {\n                    _string += (char)_input.get();\n                    token = Eq;\n                }\n                else if ( token == '!' 
)\n                {\n                    _string += (char)_input.get();\n                    token = Ne;\n                }\n            }\n            if ( token == '&' && _input.peek() == '&' )\n            {\n                _string += (char)_input.get();\n                token = And;\n            }\n            else if ( token == '|' && _input.peek() == '|' )\n            {\n                _string += (char)_input.get();\n                token = Or;\n            }\n            else if ( token == '-' && _input.peek() == '>' )\n            {\n                _string += (char)_input.get();\n                token = Arrow;\n            }\n            \n            return token;\n        }\n        \n        void skipSpace()\n        {\n            while ( std::isspace(_input.peek()) )\n            {\n                ++_position.column;\n                \n                char ch = _input.get();\n                if ( ch == '\\r' || ch == '\\n' )\n                {\n                    ++_position.line;\n                    _position.column = 1;\n                }\n                if ( ch == '\\r' && _input.peek() == '\\n' )\n                {\n                    _input.get();\n                }\n            }\n        }\n        \n        void skipComment()\n        {\n            while ( _input.peek() == '#' )\n            {\n                while ( _input.peek() != '\\n' && _input.peek() != '\\r' && _input.peek() != EOF )\n                {\n                    _input.get();\n                    ++_position.column;\n                }\n                \n                skipSpace();\n            }\n        }\n        \n    private:\n        \n        std::istream& _input;\n        std::string _string;\n        Position _position;\n        int _token;\n    };\n\n\n    inline float getScalarValue( Lexer& lexer )\n    {\n        return (float)std::atof(lexer.string().c_str());\n    }\n\n    inline int getIntegerValue( Lexer& lexer )\n    {\n        return 
std::atoi(lexer.string().c_str());\n    }\n    \n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/parser.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_PARSER_H_\n#define _NNEF_PARSER_H_\n\n#include \"value.h\"\n#include \"lexer.h\"\n#include \"prototype.h\"\n#include \"dictionary.h\"\n#include <functional>\n\n\nnamespace nnef\n{\n\n    class Parser\n    {\n    public:\n\n        typedef std::pair<int,int> version_t;\n        typedef std::vector<std::string> extensions_t;\n\n        enum Flags { KHR_ENABLE_FRAGMENT_DEFINITIONS = 0x1, KHR_ENABLE_OPERATOR_EXPRESSIONS = 0x2 };\n\n    public:\n\n        struct Callback\n        {\n            virtual ~Callback() {}\n            \n            virtual void beginDocument( const std::string& filename, const version_t& version ) {}\n            virtual void endDocument( const std::string& filename ) {}\n\n            virtual bool handleExtension( const std::string& extension ) { return false; }\n\n            virtual void beginGraph( const Prototype& proto, const Dictionary<Prototype>& fragments ) {}\n            virtual void endGraph( const Prototype& proto, const Dictionary<Typename>& dtypes ) {}\n\n            virtual void operation( const Prototype& proto, const Dictionary<Value>& args, const Dictionary<Typename>& dtypes ) = 0;\n        };\n\n    public:\n        \n        virtual ~Parser() {}\n\n        virtual void parse( std::istream& is, const char* filename, Callback& callback ) = 0;\n\n    protected:\n\n        
static Typename getTypename( Lexer& lexer )\n        {\n            switch ( lexer.token() )\n            {\n                case Lexer::Integer:\n                    return Typename::Integer;\n                case Lexer::Scalar:\n                    return Typename::Scalar;\n                case Lexer::Logical:\n                    return Typename::Logical;\n                case Lexer::String:\n                    return Typename::String;\n                case '?':\n                    return Typename::Generic;\n                default:\n                    throw Error(lexer.position(), \"expected type name, found '%s'\", Lexer::tokenString(lexer.token()).c_str());\n            }\n        }\n\n        static version_t readVersion( Lexer& lexer )\n        {\n            lexer.readToken(Lexer::Version);\n\n            if ( lexer.token() != Lexer::Fractional )\n            {\n                throw Error(lexer.position(), \"expected version number\");\n            }\n\n            auto str = lexer.string();\n\n            const size_t dots = std::count(str.begin(), str.end(), '.');\n            bool isdigits = std::all_of(str.begin(), str.end(), []( char ch ){ return std::isdigit(ch) || ch == '.'; });\n\n            if ( !isdigits || dots != 1 )\n            {\n                throw Error(lexer.position(), \"invalid version number format: %s\", str.c_str());\n            }\n\n            lexer.next();\n\n            auto dot = str.find('.');\n            auto major = std::atoi(str.substr(0,dot).c_str());\n            auto minor = std::atoi(str.substr(dot+1).c_str());\n\n            static const version_t MaxSupportedVersion(1,0);\n\n            auto version = version_t(major,minor);\n            if ( version > MaxSupportedVersion )\n            {\n                throw Error(lexer.position(), \"unsupported version %d.%d; maximum supported version is %d.%d\",\n                            (int)major, (int)minor, (int)MaxSupportedVersion.first, 
(int)MaxSupportedVersion.second);\n            }\n\n            lexer.readToken(';');\n\n            return version;\n        }\n\n        static extensions_t readExtensions( Lexer& lexer, std::function<bool( const std::string& )> handler )\n        {\n            extensions_t extensions;\n\n            while ( lexer.readIfToken(Lexer::Extension) )\n            {\n                do\n                {\n                    auto position = lexer.position();\n\n                    extensions.push_back(lexer.string());\n                    lexer.readToken(Lexer::Identifier);\n\n                    if ( !handler(extensions.back()) )\n                    {\n                        throw Error(position, \"could not handle extension '%s'\", extensions.back().c_str());\n                    }\n                }\n                while ( lexer.readIfToken(',') );\n\n                lexer.readToken(';');\n            }\n\n            return extensions;\n        }\n    };\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/prototype.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_PROTOTYPE_H_\n#define _NNEF_PROTOTYPE_H_\n\n#include \"typespec.h\"\n#include \"value.h\"\n#include <vector>\n#include <string>\n#include <initializer_list>\n\n\nnamespace nnef\n{\n\n    class Typed\n    {\n    public:\n\n        Typed( const std::string& name, const Type* type )\n        : _name(name), _type(type)\n        {\n        }\n\n        const std::string& name() const\n        {\n            return _name;\n        }\n\n        const Type* type() const\n        {\n            return _type;\n        }\n\n    private:\n\n        std::string _name;\n        const Type* _type;\n    };\n\n\n    class Param : public Typed\n    {\n    public:\n\n        Param( const std::string& name, const Type* type, const Value& defaultValue = Value::none() )\n        : Typed(name,type), _default(defaultValue)\n        {\n        }\n\n        const Value& defaultValue() const\n        {\n            return _default;\n        }\n\n    private:\n\n        Value _default;\n    };\n\n\n    typedef Typed Result;\n\n\n    class Prototype\n    {\n    private:\n\n        void initGeneric()\n        {\n            auto isGeneric = []( const Typed& typed ){ return typed.type()->isGeneric(); };\n            _hasGenericParams = std::any_of(_params.begin(), _params.end(), isGeneric);\n            _hasGenericResults = 
std::any_of(_results.begin(), _results.end(), isGeneric);\n        }\n\n    public:\n\n        Prototype( const std::string& name, std::initializer_list<Param> params, std::initializer_list<Result> results,\n                  const PrimitiveType* genericParamDefault = nullptr )\n        : _name(name), _params(params), _results(results), _genericParamDefault(genericParamDefault)\n        {\n            initGeneric();\n        }\n        \n        Prototype( const std::string& name, std::vector<Param>& params, std::vector<Result>& results,\n                  const PrimitiveType* genericParamDefault = nullptr )\n        : _name(name), _params(std::move(params)), _results(std::move(results)), _genericParamDefault(genericParamDefault)\n        {\n            initGeneric();\n        }\n\n        const std::string& name() const\n        {\n            return _name;\n        }\n        \n        const PrimitiveType* genericParamDefault() const\n        {\n            return _genericParamDefault;\n        }\n\n        size_t paramCount() const\n        {\n            return _params.size();\n        }\n\n        const Param& param( const size_t i ) const\n        {\n            return _params[i];\n        }\n\n        const Param* param( const std::string& name ) const\n        {\n            for ( auto& param : _params )\n            {\n                if ( param.name() == name )\n                {\n                    return &param;\n                }\n            }\n            return nullptr;\n        }\n\n        size_t resultCount() const\n        {\n            return _results.size();\n        }\n\n        const Result& result( const size_t i ) const\n        {\n            return _results[i];\n        }\n\n        const Result* result( const std::string& name ) const\n        {\n            for ( auto& result : _results )\n            {\n                if ( result.name() == name )\n                {\n                    return &result;\n                }\n           
 }\n            return nullptr;\n        }\n\n        bool hasGenericParams() const\n        {\n            return _hasGenericParams;\n        }\n\n        bool hasGenericResults() const\n        {\n            return _hasGenericResults;\n        }\n\n        bool isGeneric() const\n        {\n            return _hasGenericParams || _hasGenericResults;\n        }\n\n    private:\n\n        std::string _name;\n        std::vector<Param> _params;\n        std::vector<Result> _results;\n\n        bool _hasGenericParams;\n        bool _hasGenericResults;\n        const PrimitiveType* _genericParamDefault;\n    };\n    \n    \n    \n    inline std::ostream& operator<<( std::ostream& os, const Typed& typed )\n    {\n        os << typed.name() << \": \" << typed.type()->toString();\n        return os;\n    }\n    \n    inline std::ostream& operator<<( std::ostream& os, const Prototype& proto )\n    {\n        os << proto.name();\n        \n        if ( proto.isGeneric() )\n        {\n            os << \"<?\";\n            if ( proto.genericParamDefault() )\n            {\n                os << \" = \" << proto.genericParamDefault()->toString();\n            }\n            os << \">\";\n        }\n        \n        os << \"( \";\n        for ( size_t i = 0; i < proto.paramCount(); ++i )\n        {\n            if ( i )\n            {\n                os << \", \";\n            }\n            os << proto.param(i);\n        }\n        os << \" )\";\n        \n        os << \" -> \";\n        \n        os << \"( \";\n        for ( size_t i = 0; i < proto.resultCount(); ++i )\n        {\n            if ( i )\n            {\n                os << \", \";\n            }\n            os << proto.result(i);\n        }\n        os << \" )\";\n        \n        return os;\n    }\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/shapes.h",
    "content": "/*\n* Copyright (c) 2017 The Khronos Group Inc.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\n#ifndef _NNEF_SHAPES_H_\n#define _NNEF_SHAPES_H_\n\n#include <vector>\n#include <string>\n#include <numeric>\n#include <iostream>\n#include <functional>\n#include <algorithm>\n#include <limits>\n#include \"value.h\"\n#include \"error.h\"\n\n\nnamespace nnef\n{\n\n    typedef std::vector<int> Shape;\n\n\n    inline std::string to_string( const Shape& shape )\n    {\n        std::string str;\n        \n        str += '[';\n        for ( size_t i = 0; i < shape.size(); ++i )\n        {\n            if ( i )\n            {\n                str += ',';\n            }\n            str += std::to_string(shape[i]);\n        }\n        str += ']';\n        \n        return str;\n    }\n\n    inline Shape make_shape( const Value& arg, const size_t offset = 0 )\n    {\n        Shape shape = Shape(offset + arg.size(), 1);\n        for ( size_t i = 0; i < arg.size(); ++i )\n        {\n            shape[i + offset] = arg[i].integer();\n        }\n        return shape;\n    }\n\n    inline Shape make_padding_shape( const Value& arg, const size_t offset = 0 )\n    {\n        Shape padding(offset + arg.size(), 0);\n        for ( size_t i = 0; i < arg.size(); ++i )\n        {\n            padding[i + offset] = arg[i][0].integer() + arg[i][1].integer();\n        }\n        return padding;\n    }\n\n    inline size_t volume_of( const Shape& shape )\n    {\n        
return std::accumulate(shape.begin(), shape.end(), (size_t)1, std::multiplies<size_t>());\n    }\n    \n    inline size_t volume_of( const Shape& shape, const size_t offset, const size_t length )\n    {\n        return std::accumulate(shape.begin() + offset, shape.begin() + offset + length, (size_t)1, std::multiplies<size_t>());\n    }\n    \n    inline bool broadcastable( const Shape& xShape, const Shape& yShape, const size_t n )\n    {\n        for ( size_t i = 0; i < n; ++i )\n        {\n            auto xi = i < xShape.size() ? xShape[i] : 1;\n            auto yi = i < yShape.size() ? yShape[i] : 1;\n            if ( !(xi == yi || xi == 1) )\n            {\n                return false;\n            }\n        }\n        return true;\n    }\n    \n    inline bool broadcastable( const Shape& xShape, const Shape& yShape )\n    {\n        const size_t rank = std::max(xShape.size(), yShape.size());\n        return broadcastable(xShape, yShape, rank);\n    }\n\n    inline bool broadcast_compatible( const Shape& xShape, const Shape& yShape, const size_t n )\n    {\n        for ( size_t i = 0; i < n; ++i )\n        {\n            auto xi = i < xShape.size() ? xShape[i] : 1;\n            auto yi = i < yShape.size() ? 
yShape[i] : 1;\n            if ( !(xi == yi || xi == 1 || yi == 1) )\n            {\n                return false;\n            }\n        }\n        return true;\n    }\n\n    inline bool broadcast_compatible( const Shape& xShape, const Shape& yShape )\n    {\n        const size_t rank = std::max(xShape.size(), yShape.size());\n        return broadcast_compatible(xShape, yShape, rank);\n    }\n\n    inline bool axes_compatible_with_rank( const Value& axes, const size_t rank )\n    {\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            auto axis = axes[i].integer();\n            if ( axis < 0 || axis >= (Value::integer_t)rank )\n            {\n                return false;\n            }\n        }\n        return true;\n    }\n\n    inline bool contains_axis( const Value& axes, const size_t axis )\n    {\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            if ( axes[i].integer() == (Value::integer_t)axis )\n            {\n                return true;\n            }\n        }\n        return false;\n    }\n    \n    template <typename T>\n    inline int sign( T val )\n    {\n        return (T(0) < val) - (val < T(0));\n    }\n\n    inline int ceil_div( int x, int y )\n    {\n        return y > 0 ? 
(x + y - 1) / y : (x + y + 1) / y;\n    }\n    \n    template<typename T>\n    inline T downsize( const T input, const T size, const T padding, const T stride, const T dilation )\n    {\n        const T window = 1 + (size - 1) * dilation;\n        return sign(input) * ((std::abs(input) + padding - window) / stride + 1);\n    }\n    \n    template<typename T>\n    inline T downsize( const T input, const T stride )\n    {\n        return sign(input) * ((std::abs(input) + stride - 1) / stride);\n    }\n    \n    template<typename T>\n    inline T upsize( const T input, const T size, const T padding, const T stride, const T dilation )\n    {\n        const T window = 1 + (size - 1) * dilation;\n        return sign(input) * ((std::abs(input) - 1) * stride + window - padding);\n    }\n    \n    template<typename T>\n    inline T upsize( const T input, const T stride )\n    {\n        return input * stride;\n    }\n    \n    \n    template<typename... Args>\n    inline void check( bool condition, const char* message, Args&&... 
args )\n    {\n        if ( !condition )\n        {\n            throw std::logic_error(Error::formatString(message, std::forward<Args>(args)...));\n        }\n    }\n\n    inline void check_axis_compatible_with_rank( const Value& axis, const size_t rank )\n    {\n        check(axis.integer() >= 0 && axis.integer() < (Value::integer_t)rank,\n                \"axis must be in range [0,%d); found %d\", (int)rank, (int)axis.integer());\n    }\n\n    inline void check_axes_compatible_with_rank( const Value& axes, const size_t rank )\n    {\n        check(axes_compatible_with_rank(axes, rank),\n                \"axes must be in range [0,%d); found %s\", (int)rank, axes.toString().c_str());\n    }\n\n    inline void check_range( const char* name, const Value& value, const Value::integer_t min )\n    {\n        if ( value.kind() == Value::Array || value.kind() == Value::Tuple )\n        {\n            for ( size_t i = 0; i < value.size(); ++i )\n            {\n                check_range(name, value[i], min);\n            }\n        }\n        else if ( value.kind() == Value::Integer )\n        {\n            check(value.integer() >= min, \"'%s' must be >= %d (found %d)\", name, min, (int)value.integer());\n        }\n    }\n\n    inline void check_rank( const char* name, const Value& value, const size_t rank )\n    {\n        check(value.size() == rank, \"length of array '%s' must be %d to match rank of operation (found %d)\",\n                name, (int)rank, (int)value.size());\n    }\n\n    \n    inline Shape broadcast_shape( const Shape& xShape, const Shape& yShape, const size_t n )\n    {\n        const size_t rank = std::max(xShape.size(), yShape.size());\n        Shape zShape(rank);\n        \n        for ( size_t i = 0; i < n; ++i )\n        {\n            auto xi = i < xShape.size() ? xShape[i] : 1;\n            auto yi = i < yShape.size() ? 
yShape[i] : 1;\n            zShape[i] = std::max(xi, yi);\n        }\n        return zShape;\n    }\n    \n    inline Shape broadcast_shape( const Shape& xShape, const Shape& yShape )\n    {\n        const size_t rank = std::max(xShape.size(), yShape.size());\n        return broadcast_shape(xShape, yShape, rank);\n    }\n\n    inline Shape nullary_shape( const Value& shape )\n    {\n        return make_shape(shape);\n    }\n\n    inline Shape constant_shape( const Value& shape, const Value& value )\n    {\n        auto result = nullary_shape(shape);\n        check(value.size() == volume_of(result) || value.size() == 1,\n                \"shape volume (%d) does not match number of values (%d)\", (int)volume_of(result), (int)value.size());\n        \n        return result;\n    }\n\n    inline Shape unary_shape( const Shape& shape )\n    {\n        return shape;\n    }\n\n    inline Shape binary_shape( const Shape& shape1, const Shape& shape2 )\n    {\n        check(broadcast_compatible(shape1, shape2),\n              \"incompatible tensor shapes for broadcasting (%s vs %s)\",\n              to_string(shape1).c_str(), to_string(shape2).c_str());\n        \n        return broadcast_shape(shape1, shape2);\n    }\n    \n    inline Shape asymmetric_binary_shape( const Shape& shape1, const Shape& shape2 )\n    {\n        check(broadcastable(shape2, shape1),\n              \"cannot broadcast second argument shape (%s) to first argument shape (%s)\",\n              to_string(shape2).c_str(), to_string(shape1).c_str());\n        \n        return shape1;\n    }\n\n    inline Shape ternary_shape( const Shape& shape1, const Shape& shape2, const Shape& shape3 )\n    {\n        return binary_shape(binary_shape(shape1, shape2), shape3);\n    }\n\n    inline Shape reduce_shape( const Shape& input, const Value& axes )\n    {\n        check_axes_compatible_with_rank(axes, input.size());\n        \n        Shape output = input;\n        for ( size_t i = 0; i < axes.size(); ++i )\n     
   {\n            auto axis = axes[i].integer();\n            output[axis] = 1;\n        }\n        \n        return output;\n    }\n    \n    inline Shape downsample_shape( const Shape& input, const Value& factor )\n    {\n        for ( size_t i = 0; i < factor.size(); ++i )\n        {\n            auto scale = factor[i].integer();\n            check(input[i+2] % scale == 0, \"input extent (%d) must be divisible by factor (%d)\", (int)input[i+2], (int)scale);\n        }\n        \n        Shape output = input;\n        for ( size_t i = 0; i < factor.size(); ++i )\n        {\n            output[i+2] /= factor[i].integer();\n        }\n        return output;\n    }\n    \n    inline Shape upsample_shape( const Shape& input, const Value& factor )\n    {\n        check_rank(\"factor\", factor, input.size() - 2);\n        \n        Shape output = input;\n        for ( size_t i = 0; i < factor.size(); ++i )\n        {\n            output[i+2] *= factor[i].integer();\n        }\n        return output;\n    }\n    \n    inline Shape downsize_shape( const Shape& input, const Shape& kernel, const Shape& padding, const Shape& stride, const Shape& dilation,\n                                const size_t offset )\n    {\n        Shape output(input.size());\n        for ( size_t i = offset; i < output.size(); ++i )\n        {\n            output[i] = padding.size() ? downsize(input[i], kernel[i], padding[i], stride[i], dilation[i]) : downsize(input[i], stride[i]);\n        }\n        return output;\n    }\n    \n    inline Shape upsize_shape( const Shape& input, const Shape& kernel, const Shape& padding, const Shape& stride, const Shape& dilation,\n                              const size_t offset )\n    {\n        Shape output(input.size());\n        for ( size_t i = offset; i < output.size(); ++i )\n        {\n            output[i] = padding.size() ? 
upsize(input[i], kernel[i], padding[i], stride[i], dilation[i]) : upsize(input[i], stride[i]);\n        }\n        return output;\n    }\n\n    inline Shape conv_like_shape( const Shape& input, const Shape& filter, const Shape& bias,\n                                 const Value& /*border*/, const Value& padding, const Value& stride, const Value& dilation,\n                                 const Value& groups, const Value& output_shape, const bool transposed )\n    {\n        auto rank = input.size();\n        \n        if ( padding.size() )\n        {\n            check_rank(\"padding\", padding, rank - 2);\n        }\n        if ( stride.size() )\n        {\n            check_rank(\"stride\", stride, rank - 2);\n        }\n        if ( dilation.size() )\n        {\n            check_rank(\"dilation\", dilation, rank - 2);\n        }\n        \n        check_range(\"stride\", stride, 1);\n        check_range(\"dilation\", dilation, 1);\n        check_range(\"groups\", groups, 0);\n        \n        auto groupCount = groups.integer() != 0 ? groups.integer() : transposed && output_shape && output_shape.size() ? 
output_shape[1].integer() : input[1];\n        \n        if ( transposed )\n        {\n            check(input[1] == filter[0], \"filter batch (%d) does not match input channels (%d)\",\n                    (int)filter[0], (int)input[1]);\n        }\n        else\n        {\n            check(input[1] == filter[1] * groupCount, \"filter channels (%d) does not match input channels (%d) times groups (%d)\",\n                    (int)filter[1], (int)input[1], (int)groupCount);\n        }\n        \n        check(filter[0] % groupCount == 0, \"filter batch (%d) must be divisible by groups (%d)\", (int)filter[0], (int)groupCount);\n        check(bias.size() <= 2, \"bias shape must be of rank at most 2, found %d\", (int)bias.size());\n        \n        if ( bias.size() == 2 )\n        {\n            check(bias[0] == 1, \"bias shape must be singular for the batch dimension\");\n        }\n        if ( bias.size() > 0 )\n        {\n            auto channels = transposed ? filter[1] * groupCount : filter[0];\n            check(bias.back() == channels || bias.back() == 1, \"bias channels (%d) does not match output channels (%d)\",\n                    (int)bias.back(), (int)channels);\n        }\n        \n        const Shape strideShape = make_shape(stride, stride.size() ? 2 : rank);\n        const Shape dilationShape = make_shape(dilation, dilation.size() ? 2 : rank);\n        const Shape paddingShape = padding.size() ? 
make_padding_shape(padding, 2) : Shape();\n        \n        if ( output_shape && output_shape.size() )\n        {\n            const Shape outputShape = make_shape(output_shape);\n            \n            check_rank(\"output_shape\", output_shape, rank);\n            check_range(\"output_shape\", output_shape, 1);\n            \n            check(outputShape[0] == input[0], \"output batch (%d) does not match input batch (%d)\", (int)outputShape[0], (int)input[0]);\n            check(outputShape[1] == filter[1] * groupCount, \"output channels (%d) does not match filter channels (%d) times groups (%d)\",\n                    (int)outputShape[1], (int)filter[1], (int)groupCount);\n            \n            Shape expected = downsize_shape(outputShape, filter, paddingShape, strideShape, dilationShape, 2);\n            std::copy_n(input.begin(), 2, expected.begin());\n            \n            check(input == expected, \"expected input shape %s derived from output shape is incompatible with actual input shape %s\",\n                    to_string(expected).c_str(), to_string(input).c_str());\n            \n            return outputShape;\n        }\n        \n        if ( transposed )\n        {\n            auto output = upsize_shape(input, filter, paddingShape, strideShape, dilationShape, 2);\n            output[0] = input[0];\n            output[1] = filter[1] * groupCount;\n            return output;\n        }\n        else\n        {\n            auto output = downsize_shape(input, filter, paddingShape, strideShape, dilationShape, 2);\n            output[0] = input[0];\n            output[1] = filter[0];\n            return output;\n        }\n    }\n    \n    inline Shape separable_conv_like_shape( const Shape& input, const Shape& plane_filter, const Shape& point_filter, const Shape& bias,\n                                           const Value& border, const Value& padding, const Value& stride, const Value& dilation,\n                                           
const Value& groups, const Value& output_shape, const bool transposed )\n    {\n        for ( size_t i = 2; i < point_filter.size(); ++i )\n        {\n            check(point_filter[i] == 1, \"point filter must have singular extents in spatial dimensions\");\n        }\n        check(point_filter[1] == plane_filter[0], \"channel dimension of point filter must equal batch dimension of plane filter\");\n        check(plane_filter[1] == 1, \"channel dimension of plane filter must be singular\");\n        \n        Shape filter = plane_filter;\n        filter[0] = point_filter[0];\n        filter[1] = transposed ? point_filter[1] : input[1];\n        \n        return conv_like_shape(input, filter, bias, border, padding, stride, dilation, groups, output_shape, transposed);\n    }\n    \n    inline Shape conv_shape( const Shape& input, const Shape& filter, const Shape& bias,\n                            const Value& border, const Value& padding, const Value& stride, const Value& dilation,\n                            const Value& groups )\n    {\n        return conv_like_shape(input, filter, bias, border, padding, stride, dilation, groups, Value::none(), false);\n    }\n\n    inline Shape deconv_shape( const Shape& input, const Shape& filter, const Shape& bias,\n                              const Value& border, const Value& padding, const Value& stride, const Value& dilation,\n                              const Value& output_shape, const Value& groups )\n    {\n        return conv_like_shape(input, filter, bias, border, padding, stride, dilation, groups, output_shape, true);\n    }\n    \n    inline Shape separable_conv_shape( const Shape& input, const Shape& plane_filter, const Shape& point_filter, const Shape& bias,\n                                      const Value& border, const Value& padding, const Value& stride, const Value& dilation,\n                                      const Value& groups )\n    {\n        return separable_conv_like_shape(input, 
plane_filter, point_filter, bias, border, padding, stride, dilation, groups, Value::none(), false);\n    }\n    \n    inline Shape separable_deconv_shape( const Shape& input, const Shape& plane_filter, const Shape& point_filter, const Shape& bias,\n                                        const Value& border, const Value& padding, const Value& stride, const Value& dilation,\n                                        const Value& output_shape, const Value& groups )\n    {\n        return separable_conv_like_shape(input, plane_filter, point_filter, bias, border, padding, stride, dilation, groups, output_shape, true);\n    }\n    \n    inline Shape pool_like_shape( const Shape& input, const Value& size, const Value& /*border*/, const Value& padding,\n                                 const Value& stride, const Value& dilation, const Value& output_shape, const bool transposed )\n    {\n        auto rank = input.size();\n        \n        check_rank(\"size\", size, rank);\n        if ( padding.size() )\n        {\n            check_rank(\"padding\", padding, rank);\n        }\n        if ( stride.size() )\n        {\n            check_rank(\"stride\", stride, rank);\n        }\n        if ( dilation.size() )\n        {\n            check_rank(\"dilation\", dilation, rank);\n        }\n        \n        check_range(\"size\", size, 1);\n        check_range(\"stride\", stride, 1);\n        check_range(\"dilation\", dilation, 1);\n        \n        auto kernelShape = make_shape(size);\n        auto strideShape = make_shape(stride, stride.size() ? 0 : rank);\n        auto dilationShape = make_shape(dilation, dilation.size() ? 0 : rank);\n        auto paddingShape = padding.size() ? 
make_padding_shape(padding) : Shape();\n        \n        if ( output_shape && output_shape.size() )\n        {\n            const Shape outputShape = make_shape(output_shape);\n            \n            check_rank(\"output_shape\", output_shape, rank);\n            check_range(\"output_shape\", output_shape, 1);\n            \n            const Shape expected = downsize_shape(outputShape, kernelShape, paddingShape, strideShape, dilationShape, 0);\n            check(input == expected, \"expected input shape %s derived from output shape is incompatible with actual input shape %s\",\n                    to_string(expected).c_str(), to_string(input).c_str());\n            \n            return outputShape;\n        }\n        \n        if ( transposed )\n        {\n            return upsize_shape(input, kernelShape, paddingShape, strideShape, dilationShape, 0);\n        }\n        else\n        {\n            return downsize_shape(input, kernelShape, paddingShape, strideShape, dilationShape, 0);\n        }\n    }\n    \n    inline Shape sample_like_shape( const Shape& input, const Shape& index, const Value& size, const Value& border, const Value& padding,\n                                   const Value& stride, const Value& dilation, const Value& output_shape, const bool transposed )\n    {\n        check(index == input, \"index shape incompatible with input shape (%s vs %s)\",\n                to_string(index).c_str(), to_string(input).c_str());\n        return pool_like_shape(input, size, border, padding, stride, dilation, output_shape, transposed);\n    }\n\n    inline Shape pool_shape( const Shape& input, const Value& size, const Value& border, const Value& padding,\n                            const Value& stride, const Value& dilation )\n    {\n        return pool_like_shape(input, size, border, padding, stride, dilation, Value::none(), false);\n    }\n\n    inline Shape unpool_shape( const Shape& input, const Value& size, const Value& border, const Value& 
padding,\n                              const Value& stride, const Value& dilation, const Value& output_shape )\n    {\n        return pool_like_shape(input, size, border, padding, stride, dilation, output_shape, true);\n    }\n    \n    inline Shape sample_shape( const Shape& input, const Shape& index, const Value& size, const Value& border, const Value& padding,\n                              const Value& stride, const Value& dilation )\n    {\n        return sample_like_shape(input, index, size, border, padding, stride, dilation, Value::none(), false);\n    }\n    \n    inline Shape desample_shape( const Shape& input, const Shape& index, const Value& size, const Value& border, const Value& padding,\n                                const Value& stride, const Value& dilation, const Value& output_shape )\n    {\n        return sample_like_shape(input, index, size, border, padding, stride, dilation, output_shape, true);\n    }\n\n    inline Shape normalize_shape_axes( const Shape& input, const Value& axes )\n    {\n        check_axes_compatible_with_rank(axes, input.size());\n        \n        return input;\n    }\n\n    inline Shape normalize_shape_size( const Shape& input, const Value& size )\n    {\n        check_rank(\"size\", size, input.size());\n        check_range(\"size\", size, 1);\n        \n        return input;\n    }\n\n    inline Shape batchnorm_shape( const Shape& input, const Shape& mean, const Shape& variance, const Shape& offset, const Shape& scale, const Value& /*epsilon*/ )\n    {\n        check(broadcastable(mean, input), \"cannot broadcast 'mean' shape (%s) to 'input' shape (%s)\",\n              to_string(mean).c_str(), to_string(input).c_str());\n        check(broadcastable(variance, input), \"cannot broadcast 'variance' shape (%s) to 'input' shape (%s)\",\n              to_string(variance).c_str(), to_string(input).c_str());\n        check(broadcastable(offset, input), \"cannot broadcast 'offset' shape (%s) to 'input' shape (%s)\",\n        
      to_string(offset).c_str(), to_string(input).c_str());\n        check(broadcastable(scale, input), \"cannot broadcast 'scale' shape (%s) to 'input' shape (%s)\",\n              to_string(scale).c_str(), to_string(input).c_str());\n        \n        return input;\n    }\n\n    inline Shape roi_shape( const Shape& input, const Shape& rois, const Shape& index, const Value& size )\n    {\n        check_rank(\"output_size\", size, input.size() - 2);\n        check_range(\"output_size\", size, 1);\n        \n        check(rois.size() == 2, \"'rois' must be a rank-2 tensor\");\n        check(index.size() == 1, \"'batch_index' must be a rank-1 tensor\");\n        check(rois[1] == 4, \"rois must be of extent 4 along dimension 1 (found %d)\", (int)rois[1]);\n        check(index[0] == rois[0], \"'batch_index' must be of same length as dimension 0 of rois; found (%d vs %d)\", (int)index[0], (int)rois[0]);\n        \n        Shape output(input.size());\n        output[0] = rois[0];\n        output[1] = input[1];\n        for ( size_t i = 0; i < size.size(); ++i )\n        {\n            output[i+2] = (Shape::value_type)size[i].integer();\n        }\n        return output;\n    }\n\n    inline Shape roi_shape_resample( const Shape& input, const Shape& rois, const Shape& index, const Value& size, const Value& rate )\n    {\n        check_rank(\"sampling_rate\", rate, input.size() - 2);\n        check_range(\"sampling_rate\", rate, 1);\n        \n        return roi_shape(input, rois, index, size);\n    }\n\n    inline Shape reshape_shape( const Shape& input, const Value& shape, const Value& axis_start, const Value& axis_count )\n    {\n        check_axis_compatible_with_rank(axis_start, input.size() + 1);\n        check_range(\"axis_count\", axis_count, -1);\n        \n        const size_t offset = axis_start.integer();\n        const size_t length = axis_count.integer() == -1 ? 
input.size() - axis_start.integer() : axis_count.integer();\n        \n        check(offset + length <= input.size(), \"'axis_start' + 'axis_count' must be in range [0,%d], found %d\",\n              (int)input.size(), (int)(offset + length));\n        \n        Shape output(input.begin(), input.begin() + offset);\n        \n        size_t autoAxis = std::numeric_limits<size_t>::max();\n        for ( size_t i = 0; i < shape.size(); ++i )\n        {\n            auto s = shape[i].integer();\n            if ( s == 0 )\n            {\n                s = input[i + offset];\n            }\n            else if ( s == -1 )\n            {\n                check(autoAxis == std::numeric_limits<size_t>::max(), \"shape may contain at most one -1 value\");\n                \n                s = 1;\n                autoAxis = i + offset;\n            }\n            output.push_back(s);\n        }\n        \n        output.insert(output.end(), input.begin() + offset + length, input.end());\n        \n        auto inputVolume = volume_of(input, offset, length);\n        auto outputVolume = volume_of(output, offset, shape.size());\n        \n        if ( autoAxis != std::numeric_limits<size_t>::max() )\n        {\n            check(inputVolume % outputVolume == 0, \"automatic output shape (%d) incompatible with input shape (%d)\", (int)outputVolume, (int)inputVolume);\n            \n            output[autoAxis] = (Shape::value_type)(inputVolume / outputVolume);\n        }\n        else\n        {\n            check(inputVolume == outputVolume, \"input volume (%d) does not equal output volume (%d)\", (int)inputVolume, (int)outputVolume);\n        }\n        \n        return output;\n    }\n\n    inline Shape transpose_shape( const Shape& input, const Value& axes )\n    {\n        std::vector<size_t> perm(axes.size());\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            perm[i] = axes[i].integer();\n        }\n        \n        std::sort(perm.begin(), 
perm.end());\n        for ( size_t i = 0; i < perm.size(); ++i )\n        {\n            check(perm[i] == i, \"'axes' array must contain a permutation of dimensions from 0 to %d-1\", (int)perm.size());\n        }\n        \n        Shape output = input;\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            auto j = axes[i].integer();\n            output[i] = input[j];\n        }\n        return output;\n    }\n\n    inline std::vector<Shape> split_shape( const Shape& value, const Value& axis, const Value& ratios )\n    {\n        check_axis_compatible_with_rank(axis, value.size());\n        check_range(\"ratios\", ratios, 1);\n        \n        auto idx = axis.integer();\n        \n        Value::integer_t total = 0;\n        for ( size_t i = 0; i < ratios.size(); ++i )\n        {\n            total += ratios[i].integer();\n        }\n        \n        check(value[idx] % total == 0, \"sum of split ratios (%d) does not divide whole extent (%d)\", (int)total, (int)value[idx]);\n        \n        const Value::integer_t unit = value[idx] / total;\n        \n        std::vector<Shape> values(ratios.size());\n        for ( size_t i = 0; i < values.size(); ++i )\n        {\n            Shape item = value;\n            item[idx] = unit * ratios[i].integer();\n            \n            values[i] = item;\n        }\n        return values;\n    }\n\n    inline Shape concat_shape( const std::vector<Shape>& valuesShape, const Value& axis )\n    {\n        check(valuesShape.size() != 0, \"input array must be non-empty\");\n        \n        Shape outputShape = valuesShape[0];\n        \n        check_axis_compatible_with_rank(axis, outputShape.size());\n        \n        const size_t idx = axis.integer();\n        \n        bool compatibleShape = true;\n        for ( size_t i = 1; i < valuesShape.size(); ++i )\n        {\n            auto& partShape = valuesShape[i];\n            if ( partShape.size() != outputShape.size() )\n            {\n                
compatibleShape = false;\n                break;\n            }\n            \n            for ( size_t j = 0; j < outputShape.size(); ++j )\n            {\n                if ( j == idx )\n                {\n                    outputShape[j] += partShape[j];\n                }\n                else\n                {\n                    compatibleShape &= outputShape[j] == partShape[j];\n                }\n            }\n        }\n        \n        check(compatibleShape, \"incompatible tensor shapes in input array\");\n        \n        return outputShape;\n    }\n\n    inline Shape slice_shape( const Shape& input, const Value& axes, const Value& begin, const Value& end, const Value& stride )\n    {\n        check(begin.size() == axes.size() && end.size() == axes.size(), \"'axes', 'begin' and 'end' arrays must have the same length\");\n        check(stride.size() == 0 || stride.size() == axes.size(), \"'stride' must have the same length as 'axes'\");\n        \n        check_axes_compatible_with_rank(axes, input.size());\n        \n        Shape output = input;\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            auto axis = axes[i].integer();\n            auto extent = input[axis];\n            auto str = stride.size() ? 
stride[i].integer() : 1;\n            \n            auto first = begin[i].integer();\n            if ( first < 0 )\n            {\n                first += extent;\n            }\n            \n            auto last = end[i].integer();\n            if ( last < 0 )\n            {\n                last += extent;\n            }\n            else if ( last == 0 && str == 1 )\n            {\n                last = extent;\n            }\n            \n            if ( first < 0 )\n            {\n                first = -1;\n            }\n            if ( first > extent )\n            {\n                first = extent;\n            }\n            if ( last < 0 )\n            {\n                last = -1;\n            }\n            if ( last > extent )\n            {\n                last = extent;\n            }\n            \n            check(str != 0, \"'stride' must be non-zero\");\n            \n            if ( str > 0 )\n            {\n                check(first >= 0 && last >= first, \"slice range (%d:%d:%d) is invalid for axis %d\",\n                      (int)first, (int)last, (int)str, (int)axis);\n            }\n            else\n            {\n                check(first < extent && last <= first, \"slice range (%d:%d:%d) is invalid for axis %d\",\n                      (int)first, (int)last, (int)str, (int)axis);\n            }\n            \n            output[axis] = ceil_div(last - first, str);\n        }\n        return output;\n    }\n\n    inline Shape stack_shape( const std::vector<Shape>& inputs, const Value& axis )\n    {\n        auto& input = inputs[0];\n        \n        bool compatibleShapes = std::all_of(inputs.begin() + 1, inputs.end(), [&]( const Shape& shape ){ return shape == input; });\n        check(compatibleShapes, \"incompatible tensor shapes in input array\");\n        \n        Shape output(input.size() + 1);\n        \n        check_axis_compatible_with_rank(axis, output.size());\n        \n        const size_t idx = 
axis.integer();\n        for ( size_t i = 0; i < idx; ++i )\n        {\n            output[i] = input[i];\n        }\n        output[idx] = (Shape::value_type)inputs.size();\n        for ( size_t i = idx + 1; i < output.size(); ++i )\n        {\n            output[i] = input[i-1];\n        }\n        return output;\n    }\n\n    inline std::vector<Shape> unstack_shape( const Shape& input, const Value& axis )\n    {\n        check_axis_compatible_with_rank(axis, input.size());\n        \n        const size_t idx = axis.integer();\n        \n        Shape output(input.size() - 1);\n        for ( size_t i = 0; i < idx; ++i )\n        {\n            output[i] = input[i];\n        }\n        for ( size_t i = idx; i < output.size(); ++i )\n        {\n            output[i] = input[i+1];\n        }\n        \n        return std::vector<Shape>(input[idx], output);\n    }\n\n    inline Shape squeeze_shape( const Shape& input, const Value& axes )\n    {\n        check_axes_compatible_with_rank(axes, input.size());\n        \n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            auto axis = axes[i].integer();\n            check(input[axis] == 1, \"squeezed dimension is not singleton (has extent %d)\", (int)input[axis]);\n        }\n        \n        Shape output(input.size() - axes.size());\n        for ( size_t i = 0, k = 0; i < input.size(); ++i )\n        {\n            if ( !contains_axis(axes, i) )\n            {\n                output[k++] = input[i];\n            }\n        }\n        return output;\n    }\n\n    inline Shape unsqueeze_shape( const Shape& input, const Value& axes )\n    {\n        Shape output(input.size() + axes.size());\n        \n        check_axes_compatible_with_rank(axes, output.size());\n        \n        for ( size_t i = 0, k = 0; i < output.size(); ++i )\n        {\n            output[i] = contains_axis(axes, i) ? 
(Shape::value_type)1 : input[k++];\n        }\n        return output;\n    }\n    \n    inline Shape tile_shape( const Shape& input, const Value& repeats )\n    {\n        check_rank(\"repeats\", repeats, input.size());\n        check_range(\"repeats\", repeats, 1);\n        \n        Shape output(input.size());\n        for ( size_t i = 0; i < output.size(); ++i )\n        {\n            output[i] = input[i] * repeats[i].integer();\n        }\n        return output;\n    }\n    \n    inline Shape pad_shape( const Shape& input, const Value& padding )\n    {\n        check_rank(\"padding\", padding, input.size());\n        \n        Shape output(input.size());\n        for ( size_t i = 0; i < output.size(); ++i )\n        {\n            output[i] = padding[i][0].integer() + input[i] + padding[i][1].integer();\n        }\n        return output;\n    }\n    \n    inline Shape gather_shape( const Shape& input, const Shape& indices, const Value& axis )\n    {\n        check_axis_compatible_with_rank(axis, input.size());\n        \n        const size_t idx = axis.integer();\n        \n        Shape output(input.size() + indices.size() - 1);\n        std::copy_n(input.begin(), idx, output.begin());\n        std::copy_n(indices.begin(), indices.size(), output.begin() + idx);\n        std::copy(input.begin() + idx + 1, input.end(), output.begin() + idx + indices.size());\n        \n        return output;\n    }\n\n    inline Shape matmul_shape( const Shape& A, const Shape& B, const Value& trA, const Value& trB )\n    {\n        check(A.size() == B.size(), \"rank mismatch for A and B (%d vs %d)\", (int)A.size(), (int)B.size());\n        \n        auto rank = A.size();\n        check(rank >= 2, \"rank of A and B must be at least 2, found %d\", (int)rank);\n        \n        auto batch_dims = rank - 2;\n        check(broadcast_compatible(A, B, batch_dims),\n              \"incompatible tensor shapes for broadcasting first %d dimensions (%s vs %s)\",\n              
(int)batch_dims, to_string(A).c_str(), to_string(B).c_str());\n        \n        auto i0 = batch_dims + 0;\n        auto i1 = batch_dims + 1;\n        \n        auto m = trA.logical() ? A[i1] : A[i0];\n        auto n = trB.logical() ? B[i0] : B[i1];\n        auto kA = trA.logical() ? A[i0] : A[i1];\n        auto kB = trB.logical() ? B[i1] : B[i0];\n        \n        check(kA == kB, \"inner dimensions must agree (%d vs %d)\", (int)kA, (int)kB);\n        \n        Shape C = broadcast_shape(A, B, batch_dims);\n        C[i0] = m;\n        C[i1] = n;\n        return C;\n    }\n\n    inline Shape linear_shape( const Shape& input, const Shape& filter, const Shape& bias )\n    {\n        check(input.size() == 2, \"input shape must be of rank 2 (found %d)\", (int)input.size());\n        check(filter.size() == 2, \"filter shape must be of rank 2 (found %d)\", (int)filter.size());\n        check(input[1] == filter[1], \"inner dimensions must agree (%d vs %d)\", (int)input[1], (int)filter[1]);\n        if ( bias.size() )\n        {\n            check(bias[1] == filter[0], \"bias channels (%d) does not match filter count (%d)\", (int)bias[1], (int)filter[0]);\n        }\n        \n        return Shape({ input[0], filter[0] });\n    }\n\n    inline Shape update_shape( const Shape& variable, const Shape& value )\n    {\n        check(value == variable, \"updated shape %s does not equal variable shape %s\", to_string(value).c_str(), to_string(variable).c_str());\n        return variable;\n    }\n\n    inline Shape softmax_shape( const Shape& inputShape, const Value& axes )\n    {\n        check_axes_compatible_with_rank(axes, inputShape.size());\n        return inputShape;\n    }\n\n    inline std::vector<Shape> copy_n_shape( const Shape& shape, const Value& times )\n    {\n        check_range(\"times\", times, 1);\n        return std::vector<Shape>(times.integer(), shape);\n    }\n\n    inline Shape add_n_shape( const std::vector<Shape>& inputs )\n    {\n        check(inputs.size() 
!= 0, \"input array must be non-empty\");\n        \n        auto& shape = inputs[0];\n        for ( size_t i = 1; i < inputs.size(); ++i )\n        {\n            check(inputs[i] == shape, \"incompatible item shapes in array (%s vs %s)\", to_string(shape).c_str(), to_string(inputs[i]).c_str());\n        }\n        return shape;\n    }\n    \n    inline Shape quantize_shape( const Shape& input, const Shape& min, const Shape& max, const Value& bits )\n    {\n        check(broadcastable(min, input), \"cannot broadcast 'min' shape (%s) to 'input' shape (%s)\",\n              to_string(min).c_str(), to_string(input).c_str());\n        check(broadcastable(max, input), \"cannot broadcast 'max' shape (%s) to 'input' shape (%s)\",\n              to_string(max).c_str(), to_string(input).c_str());\n        \n        check_range(\"bits\", bits, 0);\n        \n        return input;\n    }\n    \n    inline Shape linear_quantize_shape( const Shape& input, const Shape& min, const Shape& max, const Value& bits )\n    {\n        return quantize_shape(input, min, max, bits);\n    }\n    \n    inline Shape logarithmic_quantize_shape( const Shape& input, const Shape& max, const Value& bits )\n    {\n        return quantize_shape(input, Shape(), max, bits);\n    }\n    \n    inline Shape zero_point_linear_quantize_shape( const Shape& input, const Shape& zero_point, const Shape& scale, const Value& bits )\n    {\n        check(broadcastable(zero_point, input), \"cannot broadcast 'zero_point' shape (%s) to 'input' shape (%s)\",\n              to_string(zero_point).c_str(), to_string(input).c_str());\n        check(broadcastable(scale, input), \"cannot broadcast 'scale' shape (%s) to 'input' shape (%s)\",\n              to_string(scale).c_str(), to_string(input).c_str());\n        \n        check_range(\"bits\", bits, 0);\n        \n        return input;\n    }\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/typespec.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_TYPESPEC_H_\n#define _NNEF_TYPESPEC_H_\n\n#include <map>\n#include <memory>\n#include <vector>\n#include <utility>\n#include <iostream>\n#include <algorithm>\n#include <initializer_list>\n\n\nnamespace nnef\n{\n    \n    enum class Typename { Integer, Scalar, Logical, String, Generic };\n    \n    inline const char* toString( const Typename& name )\n    {\n        static const char* strings[] =\n        {\n            \"integer\", \"scalar\", \"logical\", \"string\", \"?\"\n        };\n        return strings[(size_t)name];\n    }\n\n    inline Typename fromString( const std::string& str )\n    {\n        static const std::map<std::string,Typename> typenames =\n        {\n            { \"integer\", Typename::Integer },\n            { \"scalar\", Typename::Scalar },\n            { \"logical\", Typename::Logical },\n            { \"string\", Typename::String },\n        };\n        return typenames.at(str);\n    }\n    \n\n    \n    class Type\n    {\n    public:\n\n        enum Kind { Primitive, Tensor, Array, Tuple };\n\n    public:\n        \n        virtual ~Type() {}\n\n        virtual Kind kind() const = 0;\n\n        virtual bool isAttribute() const = 0;\n        virtual bool isGeneric() const = 0;\n\n        virtual std::string toString() const = 0;\n    };\n\n\n    class PrimitiveType : public Type\n    {\n    
public:\n\n        PrimitiveType( const Typename name )\n        : _name(name)\n        {\n        }\n\n        Typename name() const\n        {\n            return _name;\n        }\n\n        virtual Kind kind() const\n        {\n            return Primitive;\n        }\n\n        virtual bool isAttribute() const\n        {\n            return true;\n        }\n\n        virtual bool isGeneric() const\n        {\n            return _name == Typename::Generic;\n        }\n\n        virtual std::string toString() const\n        {\n            return nnef::toString(_name);\n        }\n\n    private:\n\n        Typename _name;\n    };\n\n\n    class TensorType : public Type\n    {\n    public:\n\n        TensorType( const Type* dataType )\n        : _dataType(dataType)\n        {\n        }\n\n        const Type* dataType() const\n        {\n            return _dataType;\n        }\n\n        virtual Kind kind() const\n        {\n            return Tensor;\n        }\n\n        virtual std::string toString() const\n        {\n            return _dataType ? \"tensor<\" + _dataType->toString() + \">\" : \"tensor<>\";\n        }\n\n        virtual bool isAttribute() const\n        {\n            return false;\n        }\n        \n        virtual bool isGeneric() const\n        {\n            return _dataType && _dataType->isGeneric();\n        }\n\n    private:\n\n        const Type* _dataType;\n    };\n\n\n    class ArrayType : public Type\n    {\n    public:\n\n        ArrayType( const Type* itemType )\n        : _itemType(itemType)\n        {\n        }\n\n        const Type* itemType() const\n        {\n            return _itemType;\n        }\n\n        virtual Kind kind() const\n        {\n            return Array;\n        }\n\n        virtual std::string toString() const\n        {\n            return _itemType ? 
_itemType->toString() + \"[]\" : \"[]\";\n        }\n\n        virtual bool isAttribute() const\n        {\n            return _itemType && _itemType->isAttribute();\n        }\n\n        virtual bool isGeneric() const\n        {\n            return _itemType && _itemType->isGeneric();\n        }\n        \n    private:\n\n        const Type* _itemType;\n    };\n\n\n    class TupleType : public Type\n    {\n    public:\n\n        TupleType( const std::vector<const Type*>& itemTypes )\n        : _itemTypes(itemTypes)\n        {\n        }\n        \n        TupleType( const std::initializer_list<const Type*>& itemTypes )\n        : _itemTypes(itemTypes)\n        {\n        }\n\n        size_t size() const\n        {\n            return _itemTypes.size();\n        }\n\n        const Type* itemType( const size_t i ) const\n        {\n            return _itemTypes[i];\n        }\n\n        virtual Kind kind() const\n        {\n            return Tuple;\n        }\n\n        virtual bool isAttribute() const\n        {\n            return std::all_of(_itemTypes.begin(), _itemTypes.end(), []( const Type* type ){ return type->isAttribute(); });\n        }\n\n        virtual bool isGeneric() const\n        {\n            return std::any_of(_itemTypes.begin(), _itemTypes.end(), []( const Type* type ){ return type->isGeneric(); });\n        }\n\n        virtual std::string toString() const\n        {\n            std::string str;\n            str += '(';\n            for ( size_t i = 0; i < _itemTypes.size(); ++i )\n            {\n                if ( i )\n                {\n                    str += ',';\n                }\n                str += _itemTypes[i]->toString();\n            }\n            str += ')';\n            return str;\n        }\n\n    private:\n\n        std::vector<const Type*> _itemTypes;\n    };\n\n    \n    inline const PrimitiveType* primitiveType( const Typename name )\n    {\n        static const PrimitiveType types[] =\n        {\n            
PrimitiveType(Typename::Integer),\n            PrimitiveType(Typename::Scalar),\n            PrimitiveType(Typename::Logical),\n            PrimitiveType(Typename::String),\n            PrimitiveType(Typename::Generic),\n        };\n        return &types[(size_t)name];\n    }\n\n    inline const TensorType* tensorType( const Typename name )\n    {\n        static const TensorType types[] =\n        {\n            TensorType(primitiveType(Typename::Integer)),\n            TensorType(primitiveType(Typename::Scalar)),\n            TensorType(primitiveType(Typename::Logical)),\n            TensorType(primitiveType(Typename::String)),\n            TensorType(primitiveType(Typename::Generic)),\n        };\n        return &types[(size_t)name];\n    }\n\n    inline const TensorType* tensorType()\n    {\n        static const TensorType type(nullptr);\n        return &type;\n    }\n\n    inline const Type* arrayType( const Type* itemType )\n    {\n        static std::map<const Type*,ArrayType> types;\n        \n        auto it = types.lower_bound(itemType);\n        if ( it == types.end() || it->first != itemType )\n        {\n            it = types.emplace_hint(it, itemType, itemType);\n        }\n        return &it->second;\n    }\n\n    inline const Type* tupleType( const std::vector<const Type*>& itemTypes )\n    {\n        static std::map<std::vector<const Type*>,TupleType> types;\n\n        auto it = types.lower_bound(itemTypes);\n        if ( it == types.end() || it->first != itemTypes )\n        {\n            it = types.emplace_hint(it, itemTypes, itemTypes);\n        }\n        return &it->second;\n    }\n    \n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/typeutils.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_TYPEUTILS_H_\n#define _NNEF_TYPEUTILS_H_\n\n#include \"typespec.h\"\n#include \"prototype.h\"\n#include \"dictionary.h\"\n#include <cassert>\n\n\nnamespace nnef\n{\n\n    inline bool isCastable( const Type* type1, const Type* type2, bool allowPrimitiveToTensor = true, bool allowArrayToTensor = false )\n    {\n        if ( type1 == type2 )\n        {\n            return true;\n        }\n\n        if ( type1->kind() == type2->kind() )\n        {\n            switch ( type1->kind() )\n            {\n                case Type::Primitive:\n                {\n                    auto primitiveType1 = static_cast<const PrimitiveType*>(type1);\n                    auto primitiveType2 = static_cast<const PrimitiveType*>(type2);\n                    return primitiveType1->name() == primitiveType2->name() || primitiveType2->name() == Typename::Generic;\n                }\n                case Type::Tensor:\n                {\n                    auto tensorType1 = static_cast<const TensorType*>(type1);\n                    auto tensorType2 = static_cast<const TensorType*>(type2);\n                    if ( tensorType1->dataType() && tensorType2->dataType() )\n                    {\n                        return isCastable(tensorType1->dataType(), tensorType2->dataType(), allowPrimitiveToTensor, allowArrayToTensor);\n           
         }\n                    else\n                    {\n                        return !tensorType2->dataType();\n                    }\n                }\n                case Type::Array:\n                {\n                    auto arrayType1 = static_cast<const ArrayType*>(type1);\n                    auto arrayType2 = static_cast<const ArrayType*>(type2);\n                    if ( arrayType1->itemType() && arrayType2->itemType() )\n                    {\n                        return isCastable(arrayType1->itemType(), arrayType2->itemType(), allowPrimitiveToTensor, allowArrayToTensor);\n                    }\n                    else\n                    {\n                        return !arrayType1->itemType();\n                    }\n                }\n                case Type::Tuple:\n                {\n                    auto tupleType1 = static_cast<const TupleType*>(type1);\n                    auto tupleType2 = static_cast<const TupleType*>(type2);\n                    if ( tupleType1->size() != tupleType2->size() )\n                    {\n                        return false;\n                    }\n                    for ( size_t i = 0; i < tupleType1->size(); ++i )\n                    {\n                        if ( !isCastable(tupleType1->itemType(i), tupleType2->itemType(i), allowPrimitiveToTensor, allowArrayToTensor) )\n                        {\n                            return false;\n                        }\n                    }\n                    return true;\n                }\n            }\n        }\n        else if ( type1->kind() == Type::Primitive && type2->kind() == Type::Tensor && allowPrimitiveToTensor )\n        {\n            auto tensorType = static_cast<const TensorType*>(type2);\n            return !tensorType->dataType() || isCastable(type1, tensorType->dataType());\n        }\n        else if ( type1->kind() == Type::Array && type2->kind() == Type::Tensor && allowArrayToTensor )\n        {\n            auto 
arrayType = static_cast<const ArrayType*>(type1);\n            auto itemType = arrayType->itemType();\n            while ( itemType->kind() != Type::Primitive )\n            {\n                if ( itemType->kind() != Type::Array )\n                {\n                    return false;\n                }\n                itemType = static_cast<const ArrayType*>(itemType)->itemType();\n            }\n            auto tensorType = static_cast<const TensorType*>(type2);\n            return !tensorType->dataType() || isCastable(itemType, tensorType->dataType());\n        }\n\n        return false;\n    }\n\n    inline const Type* commonType( const Type* type1, const Type* type2 )\n    {\n        if ( isCastable(type1, type2) )\n        {\n            return type2;\n        }\n        else if ( isCastable(type2, type1) )\n        {\n            return type1;\n        }\n        return nullptr;\n    }\n    \n    inline const Type* bindDataType( const Type* paramType, const PrimitiveType* dataType )\n    {\n        if ( !paramType->isGeneric() || dataType == primitiveType(Typename::Generic) )\n        {\n            return paramType;\n        }\n        \n        switch ( paramType->kind() )\n        {\n            case Type::Primitive:\n            {\n                return paramType == primitiveType(Typename::Generic) ? dataType : paramType;\n            }\n            case Type::Tensor:\n            {\n                auto tensor = static_cast<const TensorType*>(paramType);\n                return tensor->dataType() == primitiveType(Typename::Generic) ? tensorType(dataType->name()) : paramType;\n            }\n            case Type::Array:\n            {\n                auto array = static_cast<const ArrayType*>(paramType);\n                return array->itemType() ? 
arrayType(bindDataType(array->itemType(), dataType)) : paramType;\n            }\n            case Type::Tuple:\n            {\n                auto tuple = static_cast<const TupleType*>(paramType);\n                \n                std::vector<const Type*> itemTypes(tuple->size());\n                for ( size_t i = 0; i < tuple->size(); ++i )\n                {\n                    itemTypes[i] = bindDataType(tuple->itemType(i), dataType);\n                }\n                return tupleType(itemTypes);\n            }\n        }\n        assert(false);\n        return nullptr;\n    }\n\n    inline void deduceDataType( const Type* paramType, const Type* argType, const PrimitiveType*& dataType )\n    {\n        if ( paramType->kind() == argType->kind() )\n        {\n            switch ( paramType->kind() )\n            {\n                case Type::Primitive:\n                {\n                    if ( paramType->isGeneric() )\n                    {\n                        auto primitiveType = static_cast<const PrimitiveType*>(argType);\n                        if ( !dataType )\n                        {\n                            dataType = primitiveType;\n                        }\n                        else if ( dataType != argType )\n                        {\n                            throw std::make_pair(dataType->name(), primitiveType->name());\n                        }\n                    }\n                    break;\n                }\n                case Type::Tensor:\n                {\n                    auto tensorType1 = static_cast<const TensorType*>(paramType);\n                    auto tensorType2 = static_cast<const TensorType*>(argType);\n                    if ( tensorType1->dataType() && tensorType2->dataType() )\n                    {\n                        deduceDataType(tensorType1->dataType(), tensorType2->dataType(), dataType);\n                    }\n                    break;\n                }\n                case 
Type::Array:\n                {\n                    auto arrayType1 = static_cast<const ArrayType*>(paramType);\n                    auto arrayType2 = static_cast<const ArrayType*>(argType);\n                    if ( arrayType1->itemType() && arrayType2->itemType() )\n                    {\n                        deduceDataType(arrayType1->itemType(), arrayType2->itemType(), dataType);\n                    }\n                    break;\n                }\n                case Type::Tuple:\n                {\n                    auto tupleType1 = static_cast<const TupleType*>(paramType);\n                    auto tupleType2 = static_cast<const TupleType*>(argType);\n                    assert(tupleType1->size() == tupleType2->size());\n\n                    for ( size_t i = 0; i < tupleType1->size(); ++i )\n                    {\n                        deduceDataType(tupleType1->itemType(i), tupleType2->itemType(i), dataType);\n                    }\n                    break;\n                }\n            }\n        }\n        else if ( paramType->kind() == Type::Tensor && argType->kind() == Type::Primitive )\n        {\n            auto tensorType = static_cast<const TensorType*>(paramType);\n            deduceDataType(tensorType->dataType(), argType, dataType);\n        }\n    }\n\n    inline bool deduceDataType( const Prototype& proto, const Dictionary<const Type*>& types, const PrimitiveType*& dataType )\n    {\n        for ( size_t i = 0; i < proto.paramCount(); ++i )\n        {\n            auto& param = proto.param(i);\n            if ( param.type()->isGeneric() )\n            {\n                auto argType = types.at(param.name());\n                deduceDataType(param.type(), argType, dataType);\n            }\n        }\n        return dataType != nullptr;\n    }\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/common/value.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_VALUE_H_\n#define _NNEF_VALUE_H_\n\n#define CHECKS_SHOULD_THROW 1\n\n#include <string>\n#include <sstream>\n#include <vector>\n\n\nnamespace nnef\n{\n\n    class Value;\n\n    std::ostream& operator<<( std::ostream& os, const Value& arg );\n\n    \n    class Value\n    {\n    public:\n        \n        typedef int integer_t;\n        typedef float scalar_t;\n        typedef bool logical_t;\n        typedef std::string string_t;\n        typedef std::vector<Value> items_t;\n        \n        struct identifier_t : public std::string\n        {\n            explicit identifier_t( const std::string& s ) : std::string(s) {}\n        };\n        \n        enum Kind { None, Integer, Scalar, Logical, String, Identifier, Array, Tuple };\n        \n    private:\n        \n        Value( const Kind kind, const integer_t& value )\n        : _kind(kind), _integer(value)\n        {\n        }\n\n        Value( const Kind kind, const scalar_t& value )\n        : _kind(kind), _scalar(value)\n        {\n        }\n\n        Value( const Kind kind, const logical_t& value )\n        : _kind(kind), _logical(value)\n        {\n        }\n\n        Value( const Kind kind, const string_t& value )\n        : _kind(kind), _string(value)\n        {\n        }\n\n        Value( const Kind kind, const identifier_t& value )\n        : 
_kind(kind), _identifier(value)\n        {\n        }\n\n        Value( const Kind kind, const items_t& value )\n        : _kind(kind), _items(value)\n        {\n        }\n        \n        Value( const Kind kind, items_t&& items )\n        : _kind(kind), _items(std::forward<items_t>(items))\n        {\n        }\n        \n    public:\n\n        static const Value& none()\n        {\n            static const Value none;\n            return none;\n        }\n        \n        static Value integer( const integer_t& value )\n        {\n            return Value(Integer, value);\n        }\n        \n        static Value scalar( const scalar_t& value )\n        {\n            return Value(Scalar, value);\n        }\n        \n        static Value logical( const logical_t& value )\n        {\n            return Value(Logical, value);\n        }\n        \n        static Value string( const string_t& value )\n        {\n            return Value(String, value);\n        }\n        \n        static Value identifier( const std::string& value )\n        {\n            return Value(Identifier, (identifier_t)value);\n        }\n\n        static Value array( const items_t& value )\n        {\n            return Value(Array, value);\n        }\n\n        static Value tuple( const items_t& value )\n        {\n            return Value(Tuple, value);\n        }\n        \n        static Value array( items_t&& items )\n        {\n            return Value(Array, std::forward<items_t>(items));\n        }\n        \n        static Value tuple( items_t&& items )\n        {\n            return Value(Tuple, std::forward<items_t>(items));\n        }\n\n        static Value make( const integer_t& value )\n        {\n            return Value(Integer, value);\n        }\n\n        static Value make( const scalar_t& value )\n        {\n            return Value(Scalar, value);\n        }\n\n        static Value make( const logical_t& value )\n        {\n            return Value(Logical, 
value);\n        }\n\n        static Value make( const string_t& value )\n        {\n            return Value(String, value);\n        }\n\n        static Value make( const identifier_t& value )\n        {\n            return Value(Identifier, value);\n        }\n        \n    public:\n        \n        Value()\n        : _kind(None)\n        {\n        }\n        \n        Value( const Value& other )\n        {\n            if ( &other != this )\n            {\n                construct(other);\n            }\n        }\n        \n        Value( Value&& other )\n        {\n            if ( &other != this )\n            {\n                move(other);\n            }\n        }\n        \n        ~Value()\n        {\n            destroy();\n        }\n        \n        Value& operator=( const Value& other )\n        {\n            if ( &other != this )\n            {\n                destroy();\n                construct(other);\n            }\n            return *this;\n        }\n        \n        Value& operator=( Value&& other )\n        {\n            if ( &other != this )\n            {\n                destroy();\n                move(other);\n            }\n            return *this;\n        }\n        \n        explicit operator bool() const\n        {\n            return _kind != None;\n        }\n        \n        Kind kind() const\n        {\n            return _kind;\n        }\n        \n        const integer_t& integer() const\n        {\n            checkKind(Integer);\n            return _integer;\n        }\n        \n        const scalar_t& scalar() const\n        {\n            checkKind(Scalar);\n            return _scalar;\n        }\n        \n        const logical_t& logical() const\n        {\n            checkKind(Logical);\n            return _logical;\n        }\n        \n        const string_t& string() const\n        {\n            checkKind(String);\n            return _string;\n        }\n        \n        const identifier_t& 
identifier() const\n        {\n            checkKind(Identifier);\n            return _identifier;\n        }\n\n        const items_t& array() const\n        {\n            checkKind(Array);\n            return _items;\n        }\n\n        const items_t& tuple() const\n        {\n            checkKind(Tuple);\n            return _items;\n        }\n\n        const items_t& items() const\n        {\n            checkItems();\n            return _items;\n        }\n        \n        template<typename T>\n        const T& get() const\n        {\n            return get(T());\n        }\n\n        size_t size() const\n        {\n            checkItems();\n            return _items.size();\n        }\n\n        const Value& operator[]( const size_t i ) const\n        {\n            checkItems();\n            return _items[i];\n        }\n\n        bool operator==( const Value& other ) const\n        {\n            return equals(other);\n        }\n\n        bool operator!=( const Value& other ) const\n        {\n            return !equals(other);\n        }\n\n        std::string toString() const\n        {\n            std::stringstream ss;\n            ss << *this;\n            return ss.str();\n        }\n        \n    private:\n        \n        const scalar_t& get( scalar_t ) const\n        {\n            return scalar();\n        }\n        \n        const integer_t& get( integer_t ) const\n        {\n            return integer();\n        }\n        \n        const logical_t& get( logical_t ) const\n        {\n            return logical();\n        }\n        \n        const string_t& get( string_t ) const\n        {\n            return string();\n        }\n        \n        const identifier_t& get( identifier_t ) const\n        {\n            return identifier();\n        }\n        \n    private:\n        \n        void checkKind( const Kind kind ) const\n        {\n#if CHECKS_SHOULD_THROW\n            if ( _kind != kind )\n            {\n\t\t\t\tthrow 
std::invalid_argument(\"Value: kind mismatch\");\n            }\n#endif\n        }\n        \n        void checkItems() const\n        {\n#if CHECKS_SHOULD_THROW\n            if ( _kind != Array && _kind != Tuple )\n            {\n\t\t\t\tthrow std::invalid_argument(\"Value: expected items\");\n            }\n#endif\n        }\n        \n        void move( Value& other )\n        {\n            _kind = other._kind;\n            switch ( _kind )\n            {\n                case Array:\n                case Tuple:\n                {\n                    new(&_items) items_t(std::move(other._items));\n                    break;\n                }\n                case String:\n                {\n                    new(&_string) string_t(std::move(other._string));\n                    break;\n                }\n                case Identifier:\n                {\n                    new(&_identifier) identifier_t(std::move(other._identifier));\n                    break;\n                }\n                case Integer:\n                {\n                    _integer = other._integer;\n                    break;\n                }\n                case Scalar:\n                {\n                    _scalar = other._scalar;\n                    break;\n                }\n                case Logical:\n                {\n                    _logical = other._logical;\n                    break;\n                }\n                case None:\n                {\n                    break;\n                }\n            }\n        }\n        \n        void construct( const Value& other )\n        {\n            _kind = other._kind;\n            switch ( _kind )\n            {\n                case Array:\n                case Tuple:\n                {\n                    new(&_items) items_t(other._items);\n                    break;\n                }\n                case String:\n                {\n                    new(&_string) string_t(other._string);\n     
               break;\n                }\n                case Identifier:\n                {\n                    new(&_identifier) identifier_t(other._identifier);\n                    break;\n                }\n                case Integer:\n                {\n                    _integer = other._integer;\n                    break;\n                }\n                case Scalar:\n                {\n                    _scalar = other._scalar;\n                    break;\n                }\n                case Logical:\n                {\n                    _logical = other._logical;\n                    break;\n                }\n                case None:\n                {\n                    break;\n                }\n            }\n        }\n        \n        void destroy()\n        {\n            switch ( _kind )\n            {\n                case Array:\n                case Tuple:\n                {\n                    _items.~items_t();\n                    break;\n                }\n                case String:\n                {\n                    _string.~string_t();\n                    break;\n                }\n                case Identifier:\n                {\n                    _identifier.~identifier_t();\n                    break;\n                }\n                default:\n                {\n                    break;\n                }\n            }\n        }\n\n        bool equals( const Value& other ) const\n        {\n            if ( _kind != other._kind )\n            {\n                return false;\n            }\n            switch ( _kind )\n            {\n                case Array:\n                case Tuple:\n                {\n                    return _items == other._items;\n                }\n                case String:\n                {\n                    return _string == other._string;\n                }\n                case Identifier:\n                {\n                    return _identifier == 
other._identifier;\n                }\n                case Integer:\n                {\n                    return _integer == other._integer;\n                }\n                case Scalar:\n                {\n                    return _scalar == other._scalar;\n                }\n                case Logical:\n                {\n                    return _logical == other._logical;\n                }\n                case None:\n                {\n                    return true;\n                }\n            }\n            return false;\n        }\n        \n    private:\n        \n        Kind _kind;\n        union\n        {\n            integer_t _integer;\n            scalar_t _scalar;\n            logical_t _logical;\n            string_t _string;\n            identifier_t _identifier;\n            items_t _items;\n        };\n    };\n    \n\n    inline std::ostream& operator<<( std::ostream& os, const Value& arg )\n    {\n        switch ( arg.kind() )\n        {\n            case Value::None:\n            {\n                os << \"none\";\n                break;\n            }\n            case Value::Integer:\n            {\n                os << arg.integer();\n                break;\n            }\n            case Value::Scalar:\n            {\n                os << arg.scalar();\n                if ( (Value::integer_t)arg.scalar() == arg.scalar() )\n                {\n                    os << \".0\";\n                }\n                break;\n            }\n            case Value::Logical:\n            {\n                os << std::boolalpha << arg.logical();\n                break;\n            }\n            case Value::String:\n            {\n                os << '\\'' << arg.string() << '\\'';\n                break;\n            }\n            case Value::Identifier:\n            {\n                os << arg.identifier();\n                break;\n            }\n            case Value::Array:\n            {\n                os << '[';\n  
              for ( size_t i = 0; i < arg.size(); ++i )\n                {\n                    if ( i )\n                    {\n                        os << ',';\n                    }\n                    os << arg[i];\n                }\n                os << ']';\n                break;\n            }\n            case Value::Tuple:\n            {\n                os << '(';\n                for ( size_t i = 0; i < arg.size(); ++i )\n                {\n                    if ( i )\n                    {\n                        os << ',';\n                    }\n                    os << arg[i];\n                }\n                os << ')';\n                break;\n            }\n        }\n        return os;\n    }\n    \n    inline std::vector<int> nestedArrayShape( const Value& value )\n    {\n        if ( value.kind() != Value::Array )\n        {\n            return {};\n        }\n        \n        size_t rank = 1;\n        for ( const Value* v = &value; v->size() > 0 && v->items().data()->kind() == Value::Array; v = v->items().data() )\n        {\n            rank += 1;\n        }\n        \n        std::vector<int> shape(rank);\n        const Value* v = &value;\n        for ( size_t i = 0; i < rank; ++i, v = v->items().data() )\n        {\n            shape[i] = (int)v->size();\n        }\n        return shape;\n    }\n    \n}   // namespace nnef\n\n\n#undef CHECKS_SHOULD_THROW\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/comp/comp_parser.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_COMP_PARSER_H_\n#define _NNEF_COMP_PARSER_H_\n\n#include \"../common/dictionary.h\"\n#include \"../common/prototype.h\"\n#include \"../common/typeutils.h\"\n#include \"../common/parser.h\"\n#include \"../common/value.h\"\n#include \"../common/lexer.h\"\n#include \"../common/error.h\"\n#include \"stdlib_source.h\"\n#include \"expression.h\"\n#include \"evaluation.h\"\n#include \"fragment.h\"\n#include <exception>\n#include <cassert>\n#include <sstream>\n#include <cctype>\n\n\nnamespace nnef\n{\n    \n    class CompParser : public Parser\n    {\n    public:\n        \n        typedef Error::Position Position;\n\n    private:\n\n        typedef Dictionary<Fragment> Fragments;\n        typedef Dictionary<Prototype> Prototypes;\n        typedef Dictionary<const Type*> Declarations;\n        \n    public:\n\n        CompParser( const std::string& stdlib, const std::set<std::string>& lowered = {} )\n        : _stdlib_source(!stdlib.empty() ? 
stdlib : stdlib_source()), _lowered(lowered), _flags(0)\n        {\n        }\n        \n        virtual void parse( std::istream& is, const char* filename, Callback& callback )\n        {\n            Lexer lexer(is, filename);\n            lexer.next();\n\n            auto version = readVersion(lexer);\n\n            callback.beginDocument(filename, version);\n\n            _flags = 0;\n            auto extensions = readExtensions(lexer, [&]( const std::string& ext )\n            {\n                return callback.handleExtension(ext) || handleExtension(ext);\n            });\n\n            Prototypes prototypes;\n            Fragments fragments;\n\n            parseFragments(_stdlib_source, \"stdlib\", prototypes, fragments);\n\n            if ( _flags & KHR_ENABLE_FRAGMENT_DEFINITIONS )\n            {\n                while ( lexer.token() == Lexer::Fragment )\n                {\n                    auto fragment = parseFragment(lexer, prototypes, (_flags & KHR_ENABLE_OPERATOR_EXPRESSIONS) != 0);\n                    fragments.emplace(fragment.prototype().name(), std::move(fragment));\n                }\n            }\n\n            lexer.readToken(Lexer::Graph);\n\n            auto graph = parsePrototype(lexer, prototypes, false, true);\n            auto assignments = parseAssignments(lexer, graph, prototypes, (_flags & KHR_ENABLE_OPERATOR_EXPRESSIONS) != 0, true);\n            \n            callback.beginGraph(graph, prototypes);\n\n            Dictionary<Value> values;\n            Dictionary<Typename> dtypes;\n\t\t\tstd::set<std::string> vars;\n\n            Evaluation evaluation(assignments, fragments, _lowered);\n            for ( auto& assignment : assignments )\n            {\n\t\t\t\tcheckExternalsAndVariables(assignment.lhs(), assignment.rhs(), graph, vars);\n\t\t\t\t\n                const Value context = evaluation.evaluateLvalue(assignment.lhs(), Dictionary<Value>(), true);\n                evaluation.evaluateAssign(assignment.lhs(), 
assignment.rhs(), values, dtypes, callback, nullptr, context);\n            }\n\n            callback.endGraph(graph, dtypes);\n            callback.endDocument(filename);\n\n            lexer.readToken(Lexer::Eof);\n        }\n\n    private:\n\n        bool handleExtension( const std::string& ext )\n        {\n            if ( ext == \"KHR_enable_fragment_definitions\" )\n            {\n                _flags |= KHR_ENABLE_FRAGMENT_DEFINITIONS;\n                return true;\n            }\n            else if ( ext == \"KHR_enable_operator_expressions\" )\n            {\n                _flags |= KHR_ENABLE_OPERATOR_EXPRESSIONS;\n                return true;\n            }\n            return false;\n        }\n\n        static void parseFragments( const std::string& text, const char* filename, Prototypes& prototypes, Fragments& fragments )\n        {\n            std::stringstream ss(text);\n            Lexer lexer(ss, filename);\n            lexer.next();\n\n            while ( lexer.token() != Lexer::Eof )\n            {\n                auto fragment = parseFragment(lexer, prototypes, true);\n                fragments.emplace(fragment.prototype().name(), std::move(fragment));\n            }\n        }\n\n        static Prototype parsePrototype( Lexer& lexer, const Prototypes& prototypes, bool allowTypespec, bool graph )\n        {\n            auto position = lexer.position();\n\n            const std::string name = lexer.string();\n            lexer.readToken(Lexer::Identifier);\n\n            if ( prototypes.count(name) )\n            {\n                throw Error(position, \"operation '%s' already defined\", name.c_str());\n            }\n\n            bool isGenericDecl = false;\n            const PrimitiveType* genericParamDefault = nullptr;\n            if ( !graph && lexer.readIfToken('<') )\n            {\n                isGenericDecl = true;\n\n                lexer.readToken('?');\n                if ( lexer.readIfToken('=') )\n                {\n  
                  genericParamDefault = primitiveType(getTypename(lexer));\n                    lexer.next();\n                }\n                \n                lexer.readToken('>');\n            }\n\n            std::vector<Param> params = parseParams(lexer, name, allowTypespec, graph);\n\n            lexer.readToken(Lexer::Arrow);\n\n            std::vector<Result> results = parseResults(lexer, name, allowTypespec, !graph);\n\n            for ( auto& result : results )\n            {\n                if ( std::find_if(params.begin(), params.end(), [&]( const Param& param ){ return param.name() == result.name(); }) != params.end() )\n                {\n                    throw Error(position, \"invalid definition of operation '%s'; '%s' is defined both as parameter and as result\",\n                                name.c_str(), result.name().c_str());\n                }\n            }\n\n            bool attribute = results.front().type()->isAttribute();\n            for ( size_t i = 1; i < results.size(); ++i )\n            {\n                if ( results[i].type()->isAttribute() != attribute )\n                {\n                    throw Error(position, \"result types of fragment must be all tensor types or all attribute types\");\n                }\n            }\n\n            auto isGenericTyped = []( const Typed& typed ){ return typed.type()->isGeneric(); };\n            bool hasGenericParams = std::any_of(params.begin(), params.end(), isGenericTyped);\n            bool hasGenericResults = std::any_of(results.begin(), results.end(), isGenericTyped);\n            if ( (hasGenericParams || hasGenericResults) && !isGenericDecl )\n            {\n                throw Error(position, \"fragment with generic parameter or result types must be declared generic using <?>\");\n            }\n            else if ( isGenericDecl && !hasGenericParams && !hasGenericResults )\n            {\n                throw Error(position, \"fragment declared as generic must 
have at least one generic parameter or result type\");\n            }\n\n            return Prototype(name, params, results, genericParamDefault);\n        }\n\n        static std::vector<Param> parseParams( Lexer& lexer, const std::string& op, bool allowTypespec, bool forceDefaults )\n        {\n            std::vector<Param> params;\n\n            lexer.readToken('(');\n\n            bool expectAttribute = false;\n            do\n            {\n                auto position = lexer.position();\n\n                auto name = lexer.string();\n                lexer.readToken(Lexer::Identifier);\n\n                const Type* type = tensorType();\n                if ( allowTypespec )\n                {\n                    lexer.readToken(':');\n                    type = parseTypespec(lexer, true);\n                }\n\n                if ( expectAttribute && !type->isAttribute() )\n                {\n                    throw Error(position, \"expected attribute, found parameter of type '%s'\", type->toString().c_str());\n                }\n\n                expectAttribute |= type->isAttribute();\n\n                auto defaultValue = Value::none();\n                if ( lexer.token() == '=' )\n                {\n                    lexer.next();\n\n                    auto expr = parseExpression(lexer, nullptr, nullptr, true, false, false, false);\n\n                    if ( !isCastable(expr->type(), type) )\n                    {\n                        throw Error(expr->position(), \"default value type '%s' cannot be cast to parameter type '%s'\",\n                                    expr->type()->toString().c_str(), type->toString().c_str());\n                    }\n\n                    defaultValue = Evaluation::evaluateRvalue(*expr);\n                }\n                else if ( forceDefaults && type->isAttribute() )\n                {\n                    throw Error(position, \"expected default value for parameter '%s'\", name.c_str());\n                
}\n\n                if ( std::find_if(params.begin(), params.end(), [&]( const Param& param ){ return param.name() == name; }) != params.end() )\n                {\n                    throw Error(position, \"duplicate parameter definition for fragment '%s'; parameter '%s' is already defined\",\n                                op.c_str(), name.c_str());\n                }\n\n                params.emplace_back(name, type, defaultValue);\n            }\n            while ( lexer.readIfToken(',') );\n\n            lexer.readToken(')');\n\n            return params;\n        }\n\n        static std::vector<Result> parseResults( Lexer& lexer, const std::string& op, bool allowTypespec, bool allowAttribute )\n        {\n            std::vector<Result> results;\n\n            lexer.readToken('(');\n\n            do\n            {\n                auto position = lexer.position();\n\n                auto name = lexer.string();\n                lexer.readToken(Lexer::Identifier);\n\n                const Type* type = tensorType();\n                if ( allowTypespec )\n                {\n                    lexer.readToken(':');\n                    type = parseTypespec(lexer, false);\n\n                    if ( !allowAttribute && type->isAttribute() )\n                    {\n                        throw Error(position, \"non-tensor type not allowed in this context\");\n                    }\n                }\n\n                if ( std::find_if(results.begin(), results.end(), [&]( const Result& result ){ return result.name() == name; }) != results.end() )\n                {\n                    throw Error(position, \"duplicate result definition for operation '%s'; result '%s' is already defined\",\n                                op.c_str(), name.c_str());\n                }\n\n                results.emplace_back(name, type);\n            }\n            while ( lexer.readIfToken(',') );\n\n            lexer.readToken(')');\n\n            return results;\n        }\n\n 
       static Fragment parseFragment( Lexer& lexer, Prototypes& prototypes, bool allowOperator )\n        {\n            lexer.readToken(Lexer::Fragment);\n\n            auto prototype = parsePrototype(lexer, prototypes, true, false);\n            auto& proto = prototypes.emplace(prototype.name(), prototype).first->second;\n\n            std::vector<Assignment> assignments;\n            if ( !lexer.readIfToken(';') )\n            {\n                assignments = parseAssignments(lexer, proto, prototypes, allowOperator, false);\n            }\n\n            return Fragment(proto, assignments);\n        }\n\n        static std::vector<Assignment> parseAssignments( Lexer& lexer, const Prototype& proto, const Prototypes& prototypes, bool allowOperator, bool graph )\n        {\n            Declarations decls;\n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n                if ( !graph || param.type()->isAttribute() )\n                {\n                    decls[param.name()] = param.type();\n                }\n            }\n\n            std::vector<Assignment> assignments;\n\n            lexer.readToken('{');\n\n            do\n            {\n                auto lhs = parseTuple(lexer, nullptr, nullptr, false, true, false);\n\n                lexer.readToken('=');\n\n                auto rhs = allowOperator ? 
parseExpression(lexer, &prototypes, &decls, true, true, true) : parseInvocation(lexer, &prototypes, &decls);\n\n                lexer.readToken(';');\n\n                declare(*lhs, rhs->type(), decls);\n                if ( !graph )\n                {\n                    checkOperationsAllowed(*rhs);\n                }\n\n                assignments.emplace_back(lhs, rhs);\n            }\n            while ( lexer.token() != '}' );\n\n            if ( graph )\n            {\n                for ( size_t i = 0; i < proto.paramCount(); ++i )\n                {\n                    auto& param = proto.param(i);\n                    if ( !decls.count(param.name()) )\n                    {\n                        throw Error(lexer.position(), \"graph parameter '%s' is not assigned\", param.name().c_str());\n                    }\n                }\n            }\n\n            for ( size_t i = 0; i < proto.resultCount(); ++i )\n            {\n                auto& result = proto.result(i);\n                auto decl = decls[result.name()];\n                if ( !decl )\n                {\n                    throw Error(lexer.position(), \"result '%s' of operation '%s' is not assigned\",\n                                result.name().c_str(), proto.name().c_str());\n                }\n                else if ( !isCastable(decl, result.type(), true) )\n                {\n                    throw Error(lexer.position(), \"result '%s' of operation '%s' is declared as '%s' but assignment has incompatible type '%s'\",\n                                result.name().c_str(), proto.name().c_str(), result.type()->toString().c_str(), decl->toString().c_str());\n                }\n            }\n\n            lexer.readToken('}');\n\n            return assignments;\n        }\n\n        // rejects invocations of 'external', 'variable' and 'update' inside fragment bodies\n        static void checkOperationsAllowed( const Expr& rhs )\n        {\n            traverse(rhs, []( const Expr& expr )\n            {\n                if ( expr.kind() == Expr::Invocation )\n                {\n                    auto& invocation = static_cast<const 
InvocationExpr&>(expr);\n\n                    if ( invocation.target() == \"external\" || invocation.target() == \"variable\" || invocation.target() == \"update\" )\n                    {\n                        throw Error(invocation.position(), \"operation '%s' not allowed inside fragments\", invocation.target().c_str());\n                    }\n                }\n            });\n        }\n\n        // checks that 'external' only assigns graph parameters and that graph parameters are only\n        // assigned by 'external'; records declared variables and checks that 'update' targets one\n        void checkExternalsAndVariables( const Expr& lhs, const Expr& rhs, const Prototype& graph, std::set<std::string>& vars )\n        {\n            if ( (lhs.kind() == Expr::Array || lhs.kind() == Expr::Tuple) && rhs.kind() == lhs.kind() )\n            {\n                auto& left = static_cast<const ItemExpr&>(lhs);\n                auto& right = static_cast<const ItemExpr&>(rhs);\n\n                for ( size_t i = 0; i < left.size(); ++i )\n                {\n                    checkExternalsAndVariables(left.item(i), right.item(i), graph, vars);\n                }\n            }\n            else if ( rhs.kind() == Expr::Invocation && lhs.kind() == Expr::Identifier )\n            {\n                auto& identifier = static_cast<const IdentifierExpr&>(lhs);\n                auto& invocation = static_cast<const InvocationExpr&>(rhs);\n\n                if ( invocation.target() == \"external\" )\n                {\n                    if ( !graph.param(identifier.name()) )\n                    {\n                        throw Error(identifier.position(), \"identifiers assigned by operation 'external' must be graph parameters\");\n                    }\n                }\n                else\n                {\n                    if ( graph.param(identifier.name()) )\n                    {\n                        throw Error(identifier.position(), \"graph parameter '%s' can only be assigned by operation 'external'\",\n                                    identifier.name().c_str());\n                    }\n                }\n\n                if ( invocation.target() == \"variable\" )\n                {\n                    vars.insert(identifier.name());\n                }\n\n                if ( invocation.target() == \"update\" )\n                {\n                    auto& arg = *invocation.arg(\"variable\");\n\n                    if ( arg.kind() != Expr::Identifier || 
!vars.count(static_cast<const IdentifierExpr&>(arg).name()) )\n                    {\n                        throw Error(arg.position(), \"first argument to operation 'update' must be a variable\");\n                    }\n                }\n            }\n        }\n\n        // applies func to expr and then recursively to each of its sub-expressions\n        static void traverse( const Expr& expr, std::function<void(const Expr&)> func )\n        {\n            func(expr);\n            switch ( expr.kind() )\n            {\n                case Expr::Literal:\n                case Expr::Identifier:\n                {\n                    break;\n                }\n                case Expr::Builtin:\n                {\n                    auto& builtin = static_cast<const BuiltinExpr&>(expr);\n                    traverse(builtin.arg(), func);\n                    break;\n                }\n                case Expr::Array:\n                case Expr::Tuple:\n                {\n                    auto& items = static_cast<const ItemExpr&>(expr);\n                    for ( size_t i = 0; i < items.size(); ++i )\n                    {\n                        traverse(items.item(i), func);\n                    }\n                    break;\n                }\n                case Expr::Subscript:\n                {\n                    auto& subscript = static_cast<const SubscriptExpr&>(expr);\n                    traverse(subscript.sequence(), func);\n                    if ( subscript.begin() )\n                    {\n                        traverse(*subscript.begin(), func);\n                    }\n                    if ( subscript.end() )\n                    {\n                        traverse(*subscript.end(), func);\n                    }\n                    break;\n                }\n                case Expr::Comprehension:\n                {\n                    auto& comprehension = static_cast<const ComprehensionExpr&>(expr);\n                    for ( size_t i = 0; i < comprehension.iteratorCount(); ++i )\n                    {\n                        traverse(comprehension.iterator(i), func);\n                        traverse(comprehension.iterable(i), func);\n                    }\n                    if ( comprehension.condition() )\n                    {\n                        traverse(*comprehension.condition(), func);\n                    }\n                    traverse(comprehension.item(), func);\n                    break;\n                }\n                case Expr::Unary:\n                {\n                    auto& unary = static_cast<const 
UnaryExpr&>(expr);\n                    traverse(unary.right(), func);\n                    break;\n                }\n                case Expr::Binary:\n                {\n                    auto& binary = static_cast<const BinaryExpr&>(expr);\n                    traverse(binary.left(), func);\n                    traverse(binary.right(), func);\n                    break;\n                }\n                case Expr::Select:\n                {\n                    auto& select = static_cast<const SelectExpr&>(expr);\n                    traverse(select.condition(), func);\n                    traverse(select.trueValue(), func);\n                    traverse(select.falseValue(), func);\n                    break;\n                }\n                case Expr::Invocation:\n                {\n                    auto& invocation = static_cast<const InvocationExpr&>(expr);\n                    for ( auto it = invocation.begin(); it != invocation.end(); ++it )\n                    {\n                        traverse(*it->second, func);\n                    }\n                    break;\n                }\n            }\n        }\n\n    private:\n\n        static const Type* parseArrayTypespec( Lexer& lexer, const Type* type )\n        {\n            while ( lexer.readIfToken('[') )\n            {\n                lexer.readToken(']');\n\n                type = arrayType(type);\n            }\n\n            return type;\n        }\n\n        static const Type* parseTupleTypespec( Lexer& lexer, bool allowUnboundTensor )\n        {\n            auto position = lexer.position();\n\n            lexer.next();\n\n            std::vector<const Type*> items;\n            do\n            {\n                items.push_back(parseTypespec(lexer, allowUnboundTensor));\n            }\n            while ( lexer.readIfToken(',') );\n\n            lexer.readToken(')');\n\n            bool attribute = items.front()->isAttribute();\n            for ( size_t i = 1; i < items.size(); ++i )\n            {\n                if ( items[i]->isAttribute() != attribute )\n                {\n                    throw Error(position, \"item types in tuple type must be all attribute types or all tensor types\");\n                }\n            }\n\n   
         return parseArrayTypespec(lexer, tupleType(items));\n        }\n\n        static const Type* parseTypespec( Lexer& lexer, bool allowUnboundTensor )\n        {\n            if ( lexer.token() == '(' )\n            {\n                return parseTupleTypespec(lexer, allowUnboundTensor);\n            }\n\n            const Type* type = nullptr;\n            if ( lexer.readIfToken(Lexer::Tensor) )\n            {\n                lexer.readToken('<');\n\n                type = tensorType();\n\n                if ( lexer.token() != '>' )\n                {\n                    type = tensorType(getTypename(lexer));\n                    lexer.next();\n                }\n                else if ( !allowUnboundTensor )\n                {\n                    throw Error(lexer.position(), \"unbound tensor not allowed in this context\");\n                }\n\n                lexer.readToken('>');\n            }\n            else\n            {\n                const Typename name = getTypename(lexer);\n                lexer.next();\n\n                type = primitiveType(name);\n            }\n\n            return parseArrayTypespec(lexer, type);\n        }\n        \n    private:\n        \n        static Shared<Expr> parseExpression( Lexer& lexer, const Prototypes* prototypes, Declarations* decls,\n                                            bool allowLiteral, bool allowIdentifier, bool allowOperator,\n                                            bool allowSelect = true )\n        {\n            auto expr = parsePrimary(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n            if ( expr->kind() != Expr::Literal && allowOperator )\n            {\n                expr = parseSubscripts(lexer, prototypes, decls, expr);\n            }\n            if ( allowOperator )\n            {\n                expr = parseBinary(lexer, prototypes, decls, expr);\n                if ( lexer.token() == Lexer::If && allowSelect )\n                {\n        
            expr = parseSelect(lexer, prototypes, decls, expr);\n                }\n            }\n            return expr;\n        }\n        \n        static Shared<Expr> parsePrimary( Lexer& lexer, const Prototypes* prototypes, Declarations* decls,\n                                         bool allowLiteral, bool allowIdentifier, bool allowOperator )\n        {\n            switch ( lexer.token() )\n            {\n                case Lexer::True:\n                case Lexer::False:\n                {\n                    if ( allowLiteral )\n                    {\n                        return parseLogical(lexer);\n                    }\n                    break;\n                }\n                case Lexer::Fractional:\n                {\n                    if ( allowLiteral )\n                    {\n                        return parseScalar(lexer);\n                    }\n                    break;\n                }\n                case Lexer::Decimal:\n                {\n                    if ( allowLiteral )\n                    {\n                        return parseInteger(lexer);\n                    }\n                    break;\n                }\n                case Lexer::Characters:\n                {\n                    if ( allowLiteral )\n                    {\n                        return parseString(lexer);\n                    }\n                    break;\n                }\n                case Lexer::Identifier:\n                {\n                    if ( allowIdentifier )\n                    {\n                        return parseIdentifier(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                    }\n                    break;\n                }\n                case '[':\n                {\n                    return parseArray(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                }\n                case '(':\n                {\n                    
return parseTuple(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                }\n                case '-':\n                {\n                    return parseUnary(lexer, prototypes, decls);\n                }\n                case '!':\n                {\n                    if ( allowOperator )\n                    {\n                        return parseUnary(lexer, prototypes, decls);\n                    }\n                    break;\n                }\n                case Lexer::ShapeOf:\n                {\n                    throw Error(lexer.position(), \"the use of operator 'shape_of' is deprecated and is not supported\");\n                }\n                case Lexer::LengthOf:\n                case Lexer::RangeOf:\n                case Lexer::Integer:\n                case Lexer::Scalar:\n                case Lexer::Logical:\n                case Lexer::String:\n                {\n                    if ( allowOperator )\n                    {\n                        return parseBuiltin(lexer, prototypes, decls);\n                    }\n                    break;\n                }\n                default:\n                {\n                    throw Error(lexer.position(), \"unexpected token '%s'\", Lexer::tokenString(lexer.token()).c_str());\n                }\n            }\n            throw Error(lexer.position(), \"token '%s' not allowed in this context\", Lexer::tokenString(lexer.token()).c_str());\n        }\n        \n        static Shared<Expr> parseInteger( Lexer& lexer )\n        {\n            auto position = lexer.position();\n            \n            auto value = getIntegerValue(lexer);\n            lexer.next();\n            \n            return std::make_shared<IntegerExpr>(position, value, primitiveType(Typename::Integer));\n        }\n        \n        static Shared<Expr> parseScalar( Lexer& lexer )\n        {\n            auto position = lexer.position();\n            \n            auto value = 
getScalarValue(lexer);\n            lexer.next();\n            \n            return std::make_shared<ScalarExpr>(position, value, primitiveType(Typename::Scalar));\n        }\n        \n        static Shared<Expr> parseLogical( Lexer& lexer )\n        {\n            auto position = lexer.position();\n            \n            auto value = lexer.token() == Lexer::True;\n            lexer.next();\n            \n            return std::make_shared<LogicalExpr>(position, value, primitiveType(Typename::Logical));\n        }\n        \n        static Shared<Expr> parseString( Lexer& lexer )\n        {\n            auto position = lexer.position();\n            \n            auto value = lexer.string();\n            lexer.next();\n            \n            return std::make_shared<StringExpr>(position, value, primitiveType(Typename::String));\n        }\n        \n        static Shared<Expr> parseIdentifier( Lexer& lexer, const Prototypes* prototypes, Declarations* decls,\n                                            bool allowLiteral, bool allowIdentifier, bool allowOperator )\n        {\n            auto position = lexer.position();\n            auto string = lexer.string();\n\n            lexer.readToken(Lexer::Identifier);\n            \n            if ( lexer.token() == '(' || (lexer.token() == '<' && prototypes && prototypes->count(string)) )\n            {\n                return parseInvocation(lexer, prototypes, decls, position, string, allowLiteral, allowIdentifier, allowOperator);\n            }\n            else\n            {\n                return makeIdentifier(position, string, decls);\n            }\n        }\n        \n        static Shared<Expr> makeIdentifier( const Position& position, const std::string& name, Declarations* decls )\n        {\n            const Type* type = nullptr;\n            if ( decls )\n            {\n                type = (*decls)[name];\n                if ( !type )\n                {\n                    throw Error(position, 
\"undeclared identifier '%s'\", name.c_str());\n                }\n            }\n            return std::make_shared<IdentifierExpr>(position, name, type);\n        }\n        \n        static Shared<Expr> parseArray( Lexer& lexer, const Prototypes* prototypes, Declarations* decls,\n                                       bool allowLiteral, bool allowIdentifier, bool allowOperator )\n        {\n            auto position = lexer.position();\n            lexer.next();\n            \n            std::vector<Shared<Expr>> items;\n\n            const Type* type = nullptr;\n            \n            if ( lexer.token() != ']' )\n            {\n                if ( lexer.token() == Lexer::For )\n                {\n                    return parseComprehension(lexer, prototypes, decls, position);\n                }\n\n                auto first = parseExpression(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                items = { first };\n                type = first->type();\n\n                while ( lexer.readIfToken(',') )\n                {\n                    auto item = parseExpression(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                    items.push_back(item);\n\n                    if ( decls )\n                    {\n                        type = commonType(type, item->type());\n                        if ( !type )\n                        {\n                            throw Error(position, \"incompatible item types (%s vs %s) in array\",\n                                        first->type()->toString().c_str(), item->type()->toString().c_str());\n                        }\n                    }\n                }\n            }\n            \n            lexer.readToken(']');\n            \n            return std::make_shared<ArrayExpr>(position, items, arrayType(type));\n        }\n        \n        static Shared<Expr> parseTuple( Lexer& lexer, const Prototypes* prototypes, Declarations* 
decls,\n                                       bool allowLiteral, bool allowIdentifier, bool allowOperator )\n        {\n            auto position = lexer.position();\n            \n            bool parenthesized = lexer.token() == '(';\n            if ( parenthesized )\n            {\n                lexer.next();\n            }\n\n            std::vector<Shared<Expr>> items;\n            std::vector<const Type*> types;\n\n            auto first = parseExpression(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n\n            if ( lexer.token() == ',' )\n            {\n                items = { first };\n                types = { first->type() };\n\n                while ( lexer.readIfToken(',') )\n                {\n                    auto item = parseExpression(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                    items.push_back(item);\n                    types.push_back(item->type());\n                }\n            }\n            \n            if ( parenthesized )\n            {\n                lexer.readToken(')');\n            }\n\n            return items.empty() ? 
first : std::make_shared<TupleExpr>(position, items, tupleType(types));\n        }\n\n        static Shared<Expr> parseInvocation( Lexer& lexer, const Prototypes* prototypes, Declarations* decls )\n        {\n            auto position = lexer.position();\n            auto string = lexer.string();\n\n            lexer.readToken(Lexer::Identifier);\n\n            if ( lexer.token() != '(' && lexer.token() != '<' )\n            {\n                throw Error(position, \"expected operation invocation\");\n            }\n\n            return parseInvocation(lexer, prototypes, decls, position, string, true, true, false);\n        }\n        \n        static Shared<Expr> parseInvocation( Lexer& lexer, const Prototypes* prototypes, Declarations* decls,\n                                            const Position& position, const std::string& target,\n                                            bool allowLiteral, bool allowIdentifier, bool allowOperator )\n        {\n            auto it = prototypes->find(target);\n            if ( it == prototypes->end() )\n            {\n                throw Error(position, \"undefined operation '%s'\", target.c_str());\n            }\n\n            const Prototype& proto = it->second;\n            \n            const PrimitiveType* dataType = proto.genericParamDefault();\n            if ( lexer.readIfToken('<') )\n            {\n                dataType = primitiveType(getTypename(lexer));\n                lexer.next();\n                \n                lexer.readToken('>');\n            }\n\n            lexer.readToken('(');\n\n            Dictionary<Shared<Expr>> args;\n            \n            bool expectNamed = false;\n            \n            do\n            {\n                auto position = lexer.position();\n                \n                if ( args.size() >= proto.paramCount() )\n                {\n                    throw Error(position, \"too many positional arguments; definition of '%s' has only %d parameters\",\n       
                         proto.name().c_str(), (int)proto.paramCount());\n                }\n                \n                const Param* param = nullptr;\n                Shared<Expr> arg;\n                \n                bool named = false;\n                if ( lexer.token() == Lexer::Identifier )\n                {\n                    auto string = lexer.string();\n                    lexer.next();\n                    \n                    if ( lexer.readIfToken('=') )\n                    {\n                        param = proto.param(string);\n                        if ( !param )\n                        {\n                            throw Error(position, \"operation '%s' has no parameter called '%s'\",\n                                        proto.name().c_str(), string.c_str());\n                        }\n                        \n                        arg = parseExpression(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                        named = true;\n                    }\n                    else\n                    {\n                        param = &proto.param(args.size());\n                        if ( lexer.token() == '(' )\n                        {\n                            arg = parseInvocation(lexer, prototypes, decls, position, string, allowLiteral, allowIdentifier, allowOperator);\n                        }\n                        else\n                        {\n                            arg = makeIdentifier(position, string, decls);\n                        }\n                        arg = parseSubscripts(lexer, prototypes, decls, arg);\n                        arg = parseBinary(lexer, prototypes, decls, arg);\n                        if ( lexer.token() == Lexer::If )\n                        {\n                            arg = parseSelect(lexer, prototypes, decls, arg);\n                        }\n                    }\n                }\n                else\n                {\n          
          param = &proto.param(args.size());\n                    arg = parseExpression(lexer, prototypes, decls, allowLiteral, allowIdentifier, allowOperator);\n                }\n\n                auto paramType = dataType ? bindDataType(param->type(), dataType) : param->type();\n                if ( !isCastable(arg->type(), paramType) )\n                {\n                    throw Error(position, \"argument of type '%s' cannot be cast to type '%s' for parameter '%s'\",\n                                arg->type()->toString().c_str(), paramType->toString().c_str(), param->name().c_str());\n                }\n                \n                expectNamed |= named || paramType->isAttribute();\n                if ( expectNamed && !named )\n                {\n                    throw Error(position, \"expected named argument\");\n                }\n\n                auto contained = args[param->name()];\n                if ( contained )\n                {\n                    auto& pos = contained->position();\n                    throw Error(position, \"duplicate arguments: parameter '%s' already assigned (%u,%u)\",\n                                param->name().c_str(), pos.line, pos.column);\n                }\n\n                args[param->name()] = arg;\n            }\n            while ( lexer.readIfToken(',') );\n            \n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n\n                if ( !args.count(param.name()) )\n                {\n                    if ( !param.defaultValue() )\n                    {\n                        throw Error(lexer.position(), \"missing argument for fragment '%s'; parameter '%s' not assigned\",\n                                        proto.name().c_str(), param.name().c_str());\n                    }\n                    else if ( param.type()->isGeneric() )\n                    {\n                        auto valueType = 
typeOf(param.defaultValue());\n                        auto paramType = dataType ? bindDataType(param.type(), dataType) : param.type();\n                        if ( !isCastable(valueType, paramType) )\n                        {\n                            throw Error(lexer.position(), \"default value type '%s' cannot be cast to type '%s' for parameter '%s'\",\n                                        valueType->toString().c_str(), paramType->toString().c_str(), param.name().c_str());\n                        }\n                    }\n                }\n            }\n            \n            lexer.readToken(')');\n            \n            if ( proto.isGeneric() && !dataType && !deduceDataType(proto, args, dataType, position) )\n            {\n                throw Error(position, \"could not deduce generic data-type\");\n            }\n            \n            const Type* type = resultType(proto, dataType);\n\n            return std::make_shared<InvocationExpr>(position, target, args, type, dataType);\n        }\n        \n        static Shared<Expr> parseUnary( Lexer& lexer, const Prototypes* prototypes, Declarations* decls )\n        {\n            auto position = lexer.position();\n            int op = lexer.token();\n            lexer.next();\n            \n            auto rhs = parseExpression(lexer, prototypes, decls, true, true, true);\n            \n            auto type = unaryResultType(rhs->type(), op);\n            if ( !type )\n            {\n                throw Error(position, \"invalid operand type '%s' for operation '%s'\",\n                            rhs->type()->toString().c_str(), Lexer::tokenString(op).c_str());\n            }\n            \n            if ( type->kind() == Type::Tensor )\n            {\n                auto target = unaryOpName(op);\n                auto args = makeUnaryOpArgs(rhs);\n                return std::make_shared<InvocationExpr>(position, target, args, type);\n            }\n            else\n            {\n   
             return std::make_shared<UnaryExpr>(position, rhs, op, type);\n            }\n        }\n        \n        static Shared<Expr> parseBinary( Lexer& lexer, const Prototypes* prototypes, Declarations* decls, Shared<Expr> lhs, int exprPrec = 0 )\n        {\n            auto position = lhs->position();\n\n            while (true)\n            {\n                int tokPrec = tokenPrecedence(lexer.token());\n                if ( tokPrec < exprPrec )\n                {\n                    return lhs;\n                }\n                \n                int op = lexer.token();\n                lexer.next();\n                \n                auto rhs = parsePrimary(lexer, prototypes, decls, true, true, true);\n                rhs = parseSubscripts(lexer, prototypes, decls, rhs);\n                \n                int nextPrec = tokenPrecedence(lexer.token());\n                if ( tokPrec < nextPrec )\n                {\n                    rhs = parseBinary(lexer, prototypes, decls, rhs, tokPrec + 1);\n                }\n                \n                auto type = binaryResultType(lhs->type(), rhs->type(), op);\n                if ( !type )\n                {\n                    throw Error(position, \"invalid operand types '%s' and '%s' for operation '%s'\",\n                                lhs->type()->toString().c_str(),  rhs->type()->toString().c_str(),\n                                Lexer::tokenString(op).c_str());\n                }\n                \n                if ( type->kind() == Type::Tensor )\n                {\n                    auto target = binaryOpName(op);\n                    auto args = makeBinaryOpArgs(lhs, rhs);\n                    lhs = std::make_shared<InvocationExpr>(position, target, args, type);\n                }\n                else\n                {\n                    lhs = std::make_shared<BinaryExpr>(position, lhs, rhs, op, type);\n                }\n            }\n        }\n        \n        static 
Shared<Expr> parseBuiltin( Lexer& lexer, const Prototypes* prototypes, Declarations* decls )\n        {\n            auto position = lexer.position();\n            int op = lexer.token();\n            lexer.next();\n            \n            lexer.readToken('(');\n            \n            auto arg = parseExpression(lexer, prototypes, decls, true, true, true);\n            \n            auto type = builtinResultType(op);\n            if ( !type )\n            {\n                throw Error(position, \"invalid operand type '%s' for operation '%s'\",\n                            arg->type()->toString().c_str(), Lexer::tokenString(op).c_str());\n            }\n            \n            lexer.readToken(')');\n\n            if ( op == Lexer::LengthOf )\n            {\n                if ( arg->type()->kind() != Type::Array && arg->type() != primitiveType(Typename::String) )\n                {\n                    throw Error(position, \"argument of length_of() must be an array or string (found %s)\", arg->type()->toString().c_str());\n                }\n            }\n            else if ( op == Lexer::ShapeOf )\n            {\n                if ( arg->type()->kind() != Type::Tensor && arg->type()->kind() != Type::Primitive )\n                {\n                    throw Error(position, \"argument of shape_of() must be of tensor or primitive type (found %s)\",\n                                arg->type()->toString().c_str());\n                }\n            }\n            else if ( op == Lexer::RangeOf && arg->type() != primitiveType(Typename::String) )\n            {\n                if ( arg->type()->kind() != Type::Array )\n                {\n                    throw Error(position, \"argument of range_of() must be an array or string (found %s)\",\n                                arg->type()->toString().c_str());\n                }\n            }\n            else if ( op == Lexer::Integer || op == Lexer::Scalar || op == Lexer::Logical || op == Lexer::String )\n         
   {\n                if ( arg->type()->kind() != Type::Primitive )\n                {\n                    throw Error(position, \"argument of %s() must be of non-tensor primitive type (found %s)\",\n                                Lexer::tokenString(op).c_str(), arg->type()->toString().c_str());\n                }\n            }\n            \n            return std::make_shared<BuiltinExpr>(position, arg, op, type);\n        }\n\n        static Shared<Expr> parseSubscript( Lexer& lexer, const Prototypes* prototypes, Declarations* decls, const Shared<Expr> sequence )\n        {\n            lexer.next();\n\n            Shared<Expr> beg, end;\n            const Type* type = nullptr;\n\n            if ( sequence->type()->kind() == Type::Tuple )\n            {\n                beg = parseExpression(lexer, prototypes, decls, true, true, true);\n                if ( beg->kind() != Expr::Literal || beg->type() != primitiveType(Typename::Integer) )\n                {\n                    throw Error(beg->position(), \"tuple index must be an integer literal\");\n                }\n\n                auto idx = static_cast<const IntegerExpr&>(*beg).value();\n\n                lexer.readToken(']');\n\n                type = static_cast<const TupleType*>(sequence->type())->itemType(idx);\n            }\n            else if ( sequence->type()->kind() == Type::Array || sequence->type() == primitiveType(Typename::String) )\n            {\n                if ( lexer.token() != ':' )\n                {\n                    beg = parseExpression(lexer, prototypes, decls, true, true, true);\n                    if ( beg->type() != primitiveType(Typename::Integer) )\n                    {\n                        throw Error(beg->position(), \"array index must be of type integer, found '%s'\", beg->type()->toString().c_str());\n                    }\n                }\n                bool range = false;\n                if ( lexer.readIfToken(':') )\n                {\n             
       range = true;\n\n                    if ( lexer.token() != ']' )\n                    {\n                        end = parseExpression(lexer, prototypes, decls, true, true, true);\n                        if ( end->type() != primitiveType(Typename::Integer) )\n                        {\n                            throw Error(end->position(), \"array index must be of type integer, found '%s'\", end->type()->toString().c_str());\n                        }\n                    }\n                }\n                else\n                {\n                    end = beg;\n                }\n\n                lexer.readToken(']');\n\n                if ( sequence->type()->kind() == Type::Array )\n                {\n                    auto arrayType = static_cast<const ArrayType*>(sequence->type());\n                    type = range ? arrayType : arrayType->itemType();\n                }\n                else\n                {\n                    type = primitiveType(Typename::String);\n                }\n            }\n            else\n            {\n                throw Error(sequence->position(), \"subscripted expression must be of type array, tuple, or string; found '%s'\",\n                            sequence->type()->toString().c_str());\n            }\n\n            return std::make_shared<SubscriptExpr>(sequence->position(), sequence, beg, end, type);\n        }\n\n        static Shared<Expr> parseSubscripts( Lexer& lexer, const Prototypes* prototypes, Declarations* decls, Shared<Expr> sequence )\n        {\n            while ( lexer.token() == '[' )\n            {\n                sequence = parseSubscript(lexer, prototypes, decls, sequence);\n            }\n            return sequence;\n        }\n\n        static Shared<Expr> parseSelect( Lexer& lexer, const Prototypes* prototypes, Declarations* decls, Shared<Expr> trueValue )\n        {\n            lexer.readToken(Lexer::If);\n\n            auto condition = parseExpression(lexer, prototypes, 
decls, true, true, true);\n            if ( condition->type() != primitiveType(Typename::Logical) )\n            {\n                throw Error(condition->position(), \"condition must be a logical value\");\n            }\n\n            lexer.readToken(Lexer::Else);\n\n            auto falseValue = parseExpression(lexer, prototypes, decls, true, true, true);\n\n            const Type* type = commonType(trueValue->type(), falseValue->type());\n            if ( !type )\n            {\n                throw Error(trueValue->position(), \"incompatible types in if-else expression (%s vs %s)\",\n                            trueValue->type()->toString().c_str(), falseValue->type()->toString().c_str());\n            }\n\n            return std::make_shared<SelectExpr>(trueValue->position(), condition, trueValue, falseValue, type);\n        }\n\n        static Shared<Expr> parseComprehension( Lexer& lexer, const Prototypes* prototypes, Declarations* decls,\n                                               const Position& position )\n        {\n            lexer.readToken(Lexer::For);\n\n            std::vector<Shared<Expr>> iterators, iterables;\n            \n            do\n            {\n                auto iterator = parseIterator(lexer, decls);\n                \n                lexer.readToken(Lexer::In);\n\n                auto iterable = parseExpression(lexer, prototypes, decls, true, true, true, false);\n                if ( iterable->type()->kind() != Type::Array )\n                {\n                    throw Error(iterable->position(), \"expression not iterable\");\n                }\n                \n                iterators.push_back(iterator);\n                iterables.push_back(iterable);\n                \n                auto itemType = static_cast<const ArrayType*>(iterable->type())->itemType();\n                declare(*iterator, itemType, *decls);\n            }\n            while ( lexer.readIfToken(',') );\n\n            Shared<Expr> condition = 
nullptr;\n            if ( lexer.readIfToken(Lexer::If) )\n            {\n                condition = parseExpression(lexer, prototypes, decls, true, true, true);\n                if ( condition->type() != primitiveType(Typename::Logical) )\n                {\n                    throw Error(condition->position(), \"condition in comprehension expression must be a logical expression\");\n                }\n            }\n\n            lexer.readToken(Lexer::Yield);\n\n            auto item = parseExpression(lexer, prototypes, decls, true, true, true);\n            const Type* type = arrayType(item->type());\n\n            for ( size_t i = 0; i < iterators.size(); ++i )\n            {\n                undeclare(*iterators[i], *decls);\n            }\n\n            lexer.readToken(']');\n\n            return std::make_shared<ComprehensionExpr>(position, iterators, iterables, condition, item, type);\n        }\n        \n    private:\n        \n        static Shared<Expr> parseIterator( Lexer& lexer, const Declarations* decls )\n        {\n            if ( lexer.token() == Lexer::Identifier )\n            {\n                auto iterator = std::make_shared<IdentifierExpr>(lexer.position(), lexer.string(), nullptr);\n                lexer.readToken(Lexer::Identifier);\n                \n                return iterator;\n            }\n            \n            if ( lexer.token() != '(' )\n            {\n                throw Error(lexer.position(), \"expected tuple or identifier\");\n            }\n            lexer.next();\n            \n            auto position = lexer.position();\n            \n            std::vector<Shared<Expr>> items;\n            std::vector<const Type*> types;\n            \n            auto first = parseIterator(lexer, decls);\n            \n            if ( lexer.token() == ',' )\n            {\n                items = { first };\n                types = { first->type() };\n                \n                while ( lexer.readIfToken(',') )\n 
               {\n                    auto item = parseIterator(lexer, decls);\n                    items.push_back(item);\n                    types.push_back(item->type());\n                }\n            }\n            \n            lexer.readToken(')');\n            \n            return items.empty() ? first : std::make_shared<TupleExpr>(position, items, tupleType(types));\n        }\n        \n    private:\n\n        static void declare( const Expr& expr, const Type* type, Declarations& declared )\n        {\n            switch ( expr.kind() )\n            {\n                case Expr::Identifier:\n                {\n                    auto& identifier = static_cast<const IdentifierExpr&>(expr);\n                    if ( declared.count(identifier.name()) )\n                    {\n                        throw Error(expr.position(), \"identifier '%s' is already declared\", identifier.name().c_str());\n                    }\n                    declared.emplace(identifier.name(), type);\n                    break;\n                }\n                case Expr::Array:\n                {\n                    if ( type->kind() != Type::Array )\n                    {\n                        throw Error(expr.position(), \"cannot assign result of type '%s' to array\", type->toString().c_str());\n                    }\n                    auto& array = static_cast<const ArrayExpr&>(expr);\n                    auto arrayType = static_cast<const ArrayType*>(type);\n                    for ( size_t i = 0; i < array.size(); ++i )\n                    {\n                        declare(array.item(i), arrayType->itemType(), declared);\n                    }\n                    break;\n                }\n                case Expr::Tuple:\n                {\n                    if ( type->kind() != Type::Tuple )\n                    {\n                        throw Error(expr.position(), \"cannot assign result of type '%s' to tuple\", type->toString().c_str());\n            
        }\n                    auto& tuple = static_cast<const TupleExpr&>(expr);\n                    auto tupleType = static_cast<const TupleType*>(type);\n                    if ( tupleType->size() != tuple.size() )\n                    {\n                        throw Error(expr.position(), \"cannot assign result of type '%s' to a tuple of size %d\",\n                                    type->toString().c_str(), (int)tuple.size());\n                    }\n                    for ( size_t i = 0; i < tuple.size(); ++i )\n                    {\n                        declare(tuple.item(i), tupleType->itemType(i), declared);\n                    }\n                    break;\n                }\n                default:\n                {\n                    throw Error(expr.position(), \"expression not allowed in this context\");\n                }\n            }\n        }\n        \n        static void undeclare( const Expr& expr, Declarations& declared )\n        {\n            switch ( expr.kind() )\n            {\n                case Expr::Identifier:\n                {\n                    auto& identifier = static_cast<const IdentifierExpr&>(expr);\n                    declared.erase(identifier.name());\n                    break;\n                }\n                case Expr::Array:\n                case Expr::Tuple:\n                {\n                    auto& items = static_cast<const ItemExpr&>(expr);\n                    for ( size_t i = 0; i < items.size(); ++i )\n                    {\n                        undeclare(items.item(i), declared);\n                    }\n                    break;\n                }\n                default:\n                {\n                    throw Error(expr.position(), \"expression not allowed in this context\");\n                }\n            }\n        }\n\n    private:\n\n        static bool deduceDataType( const Prototype& proto, const Dictionary<Shared<Expr>>& args, const PrimitiveType*& dataType,\n      
                              const Position& position )\n        {\n            Dictionary<const Type*> types;\n            for ( auto& arg : args )\n            {\n                types[arg.first] = arg.second->type();\n            }\n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n                if ( !types.count(param.name()) )\n                {\n                    assert(param.defaultValue());\n                    types[param.name()] = typeOf(param.defaultValue());\n                }\n            }\n\n            try\n            {\n                return nnef::deduceDataType(proto, types, dataType);\n            }\n            catch ( const std::pair<Typename,Typename>& e )\n            {\n                throw Error(position, \"could not deduce data-type: ambiguous candidates '%s' vs '%s'\", toString(e.first), toString(e.second));\n            }\n        }\n\n        static const Type* resultType( const Prototype& proto, const PrimitiveType* dataType )\n        {\n            if ( proto.resultCount() == 1 )\n            {\n                return dataType ? bindDataType(proto.result(0).type(), dataType) : proto.result(0).type();\n            }\n\n            std::vector<const Type*> types(proto.resultCount());\n            for ( size_t i = 0; i < proto.resultCount(); ++i )\n            {\n                types[i] = dataType ? 
bindDataType(proto.result(i).type(), dataType) : proto.result(i).type();\n            }\n            return tupleType(types);\n        }\n        \n        static const Type* unaryResultType( const Type* argType, int op )\n        {\n            switch ( op )\n            {\n                case '-':\n                case '+':\n                {\n                    if ( argType == primitiveType(Typename::Integer) ||\n                        argType == primitiveType(Typename::Scalar) ||\n                        argType == tensorType(Typename::Scalar) )\n                    {\n                        return argType;\n                    }\n                    break;\n                }\n                case '!':\n                {\n                    if ( argType == primitiveType(Typename::Logical) ||\n                        argType == tensorType(Typename::Scalar) )\n                    {\n                        return argType;\n                    }\n                    break;\n                }\n            }\n            return nullptr;\n        }\n        \n        static const Type* binaryResultType( const Type* lhsType, const Type* rhsType, int op )\n        {\n            if ( op == Lexer::In && rhsType->kind() == Type::Array )\n            {\n                return primitiveType(Typename::Logical);\n            }\n            else if ( op == '+' && lhsType->kind() == Type::Array && rhsType == lhsType )\n            {\n                return lhsType;\n            }\n            else if ( op == '*' )\n            {\n                if ( lhsType->kind() == Type::Array && rhsType == primitiveType(Typename::Integer) )\n                {\n                    return lhsType;\n                }\n                if ( rhsType->kind() == Type::Array && lhsType == primitiveType(Typename::Integer) )\n                {\n                    return rhsType;\n                }\n            }\n            \n            const Type* argType = commonType(lhsType, rhsType);\n   
         \n            switch ( op )\n            {\n                case '<':\n                case '>':\n                case Lexer::Le:\n                case Lexer::Ge:\n                case Lexer::Eq:\n                case Lexer::Ne:\n                {\n                    return argType == tensorType(Typename::Scalar) ? (const Type*)tensorType(Typename::Logical) : (const Type*)primitiveType(Typename::Logical);\n                }\n                case '+':\n                case '*':\n                {\n                    if ( argType == primitiveType(Typename::String) )\n                    {\n                        return argType;\n                    }\n                }   // fall through to the numeric cases below\n                case '-':\n                case '/':\n                case '^':\n                {\n                    if ( argType == primitiveType(Typename::Integer) ||\n                        argType == primitiveType(Typename::Scalar) ||\n                        argType == tensorType(Typename::Scalar) )\n                    {\n                        return argType;\n                    }\n                    break;\n                }\n                case Lexer::And:\n                case Lexer::Or:\n                {\n                    if ( argType == primitiveType(Typename::Logical) ||\n                        argType == tensorType(Typename::Scalar) )\n                    {\n                        return argType;\n                    }\n                    break;\n                }\n            }\n            return nullptr;\n        }\n        \n        static const Type* builtinResultType( int op )\n        {\n            switch ( op )\n            {\n                case Lexer::LengthOf:\n                {\n                    return primitiveType(Typename::Integer);\n                }\n                case Lexer::ShapeOf:\n                {\n                    return arrayType(primitiveType(Typename::Integer));\n                }\n                case Lexer::RangeOf:\n      
          {\n                    return arrayType(primitiveType(Typename::Integer));\n                }\n                case Lexer::Integer:\n                {\n                    return primitiveType(Typename::Integer);\n                }\n                case Lexer::Scalar:\n                {\n                    return primitiveType(Typename::Scalar);\n                }\n                case Lexer::String:\n                {\n                    return primitiveType(Typename::String);\n                }\n                case Lexer::Logical:\n                {\n                    return primitiveType(Typename::Logical);\n                }\n            }\n            \n            return nullptr;\n        }\n\n        static const Type* typeOf( const Value& value )\n        {\n            switch ( value.kind() )\n            {\n                case Value::Integer:\n                {\n                    return primitiveType(Typename::Integer);\n                }\n                case Value::Scalar:\n                {\n                    return primitiveType(Typename::Scalar);\n                }\n                case Value::Logical:\n                {\n                    return primitiveType(Typename::Logical);\n                }\n                case Value::String:\n                {\n                    return primitiveType(Typename::String);\n                }\n                case Value::Array:\n                {\n                    auto itemType = value.size() ? 
typeOf(value[0]) : nullptr;\n                    return arrayType(itemType);\n                }\n                case Value::Tuple:\n                {\n                    std::vector<const Type*> itemTypes(value.size());\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        itemTypes[i] = typeOf(value[i]);\n                    }\n                    return tupleType(itemTypes);\n                }\n                case Value::Identifier:\n                case Value::None:\n                {\n                    return nullptr;\n                }\n            }\n            assert(false);\n            return nullptr;\n        }\n        \n        static int tokenPrecedence( int token )\n        {\n            static const std::map<int,int> precedence =\n            {\n                std::make_pair(Lexer::In, 10),\n                std::make_pair(Lexer::And, 20),\n                std::make_pair(Lexer::Or, 20),\n                std::make_pair(Lexer::Le, 30),\n                std::make_pair(Lexer::Ge, 30),\n                std::make_pair(Lexer::Eq, 30),\n                std::make_pair(Lexer::Ne, 30),\n                std::make_pair('<', 30),\n                std::make_pair('>', 30),\n                std::make_pair('+', 40),\n                std::make_pair('-', 40),\n                std::make_pair('*', 50),\n                std::make_pair('/', 50),\n                std::make_pair('^', 60),\n            };\n            \n            auto it = precedence.find(token);\n            return it != precedence.end() ? 
it->second : -1;\n        }\n        \n    private:\n        \n        static const char* unaryOpName( int op )\n        {\n            switch (op)\n            {\n                case '+':\n                    return \"copy\";\n                case '-':\n                    return \"neg\";\n                case '!':\n                    return \"not\";\n                default:\n                    return nullptr;\n            }\n        }\n        \n        static const char* binaryOpName( int op )\n        {\n            switch (op)\n            {\n                case '+':\n                    return \"add\";\n                case '-':\n                    return \"sub\";\n                case '*':\n                    return \"mul\";\n                case '/':\n                    return \"div\";\n                case '^':\n                    return \"pow\";\n                case '<':\n                    return \"lt\";\n                case '>':\n                    return \"gt\";\n                case Lexer::Le:\n                    return \"le\";\n                case Lexer::Ge:\n                    return \"ge\";\n                case Lexer::Eq:\n                    return \"eq\";\n                case Lexer::Ne:\n                    return \"ne\";\n                case Lexer::And:\n                    return \"and\";\n                case Lexer::Or:\n                    return \"or\";\n                default:\n                    return nullptr;\n            }\n        }\n        \n        static Dictionary<Shared<Expr>> makeUnaryOpArgs( const Shared<Expr>& right )\n        {\n            const Dictionary<Shared<Expr>> args =\n            {\n                { \"x\", right },\n            };\n            return args;\n        }\n        \n        static Dictionary<Shared<Expr>> makeBinaryOpArgs( const Shared<Expr> left, const Shared<Expr> right )\n        {\n            const Dictionary<Shared<Expr>> args =\n            {\n                { \"x\", left 
},\n                { \"y\", right },\n            };\n            return args;\n        }\n\n    private:\n\n        static bool checkGraphParamType( const Value& value, const Type* type )\n        {\n            switch ( value.kind() )\n            {\n                case Value::Integer:\n                {\n                    return type == primitiveType(Typename::Integer);\n                }\n                case Value::Scalar:\n                {\n                    return type == primitiveType(Typename::Scalar);\n                }\n                case Value::Logical:\n                {\n                    return type == primitiveType(Typename::Logical);\n                }\n                case Value::String:\n                {\n                    return type == primitiveType(Typename::String);\n                }\n                case Value::Identifier:\n                {\n                    return type == tensorType();\n                }\n                case Value::Array:\n                {\n                    if ( type->kind() != Type::Array )\n                    {\n                        return false;\n                    }\n                    auto arrayType = static_cast<const ArrayType*>(type);\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        if ( !checkGraphParamType(value[i], arrayType->itemType()) )\n                        {\n                            return false;\n                        }\n                    }\n                    return true;\n                }\n                case Value::Tuple:\n                {\n                    if ( type->kind() != Type::Tuple )\n                    {\n                        return false;\n                    }\n                    auto tupleType = static_cast<const TupleType*>(type);\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        if ( !checkGraphParamType(value[i], 
tupleType->itemType(i)) )\n                        {\n                            return false;\n                        }\n                    }\n                    return true;\n                }\n                case Value::None:\n                {\n                    return false;\n                }\n            }\n            return false;\n        }\n\n    private:\n\n        const std::string _stdlib_source;\n        const std::set<std::string>& _lowered;\n        size_t _flags;\n    };\n    \n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/comp/evaluation.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_EVALUATION_H_\n#define _NNEF_EVALUATION_H_\n\n#include \"../common/dictionary.h\"\n#include \"../common/error.h\"\n#include \"../common/value.h\"\n#include \"../common/parser.h\"\n#include \"expression.h\"\n#include \"fragment.h\"\n#include <cassert>\n#include <cmath>\n#include <set>\n\n\nnamespace nnef\n{\n\n    class Evaluation\n    {\n        typedef Dictionary<Fragment> Fragments;\n        typedef Parser::Callback Callback;\n\n    public:\n\n        Evaluation( const std::vector<Assignment>& assignments, const Fragments& fragments, const std::set<std::string>& lowered )\n        : _fragments(fragments), _lowered(lowered)\n        {\n            for ( auto& assignment : assignments )\n            {\n                addReservedIdentifiers(assignment.lhs());\n            }\n        }\n\n    public:\n        \n        static Value evaluateLvalue( const Expr& expr, const Dictionary<Value>& values, bool fallbackToIds )\n        {\n            switch ( expr.kind() )\n            {\n                case Expr::Identifier:\n                {\n                    auto& identifier = static_cast<const IdentifierExpr&>(expr);\n                    \n                    auto it = values.find(identifier.name());\n                    return it != values.end() ? it->second : (fallbackToIds ? 
Value::identifier(identifier.name()) : Value::identifier(\"\"));\n                }\n                case Expr::Array:\n                {\n                    auto& array = static_cast<const ArrayExpr&>(expr);\n\n                    Value::items_t items(array.size());\n                    for ( size_t i = 0; i < array.size(); ++i )\n                    {\n                        items[i] = evaluateLvalue(array.item(i), values, fallbackToIds);\n                    }\n                    return Value::array(items);\n                }\n                case Expr::Tuple:\n                {\n                    auto& tuple = static_cast<const TupleExpr&>(expr);\n                    \n                    Value::items_t items(tuple.size());\n                    for ( size_t i = 0; i < tuple.size(); ++i )\n                    {\n                        items[i] = evaluateLvalue(tuple.item(i), values, fallbackToIds);\n                    }\n                    return Value::tuple(items);\n                }\n                default:\n                {\n                    assert(false);\n                    return Value::none();\n                }\n            }\n        }\n        \n        static Value evaluateRvalue( const Expr& expr )\n        {\n            switch ( expr.kind() )\n            {\n                case Expr::Literal:\n                {\n                    return evaluateLiteral(expr);\n                }\n                case Expr::Array:\n                case Expr::Tuple:\n                {\n                    auto& sequence = static_cast<const ItemExpr&>(expr);\n                    \n                    Value::items_t items(sequence.size());\n                    for ( size_t i = 0; i < sequence.size(); ++i )\n                    {\n                        items[i] = evaluateRvalue(sequence.item(i));\n                    }\n                    return expr.kind() == Expr::Array ? 
Value::array(items) : Value::tuple(items);\n                }\n                case Expr::Unary:\n                {\n                    auto& unary = static_cast<const UnaryExpr&>(expr);\n                    if ( unary.op() == '-' )\n                    {\n                        auto arg = evaluateRvalue(unary.right());\n                        if ( arg.kind() == Value::Integer )\n                        {\n                            return Value::integer(-arg.integer());\n                        }\n                        else if ( arg.kind() == Value::Scalar )\n                        {\n                            return Value::scalar(-arg.scalar());\n                        }\n                    }\n                }   // fall through: any other unary expression is not a constant rvalue\n                default:\n                {\n                    assert(false);\n                    return Value::none();\n                }\n            }\n        }\n\n        void evaluateAssign( const Expr& lhs, const Expr& rhs, Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                            Callback& callback, const PrimitiveType* dtype, const Value& context )\n        {\n            auto value = evaluate(rhs, values, dtypes, callback, dtype, context);\n            assign(lhs, value, values, dtypes, callback);\n        }\n\n    private:\n\n        Value evaluate( const Expr& expr, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype, const Value& context = Value::none() )\n        {\n            switch ( expr.kind() )\n            {\n                case Expr::Literal:\n                {\n                    return evaluateLiteral(expr);\n                }\n                case Expr::Identifier:\n                {\n                    return evaluate(static_cast<const IdentifierExpr&>(expr), values);\n                }\n                case Expr::Array:\n                {\n                    return evaluate(static_cast<const 
ArrayExpr&>(expr), values, dtypes, callback, dtype, context);\n                }\n                case Expr::Tuple:\n                {\n                    return evaluate(static_cast<const TupleExpr&>(expr), values, dtypes, callback, dtype, context);\n                }\n                case Expr::Subscript:\n                {\n                    return evaluate(static_cast<const SubscriptExpr&>(expr), values, dtypes, callback, dtype);\n                }\n                case Expr::Unary:\n                {\n                    return evaluate(static_cast<const UnaryExpr&>(expr), values, dtypes, callback, dtype);\n                }\n                case Expr::Binary:\n                {\n                    return evaluate(static_cast<const BinaryExpr&>(expr), values, dtypes, callback, dtype);\n                }\n                case Expr::Select:\n                {\n                    return evaluate(static_cast<const SelectExpr&>(expr), values, dtypes, callback, dtype, context);\n                }\n                case Expr::Comprehension:\n                {\n                    return evaluate(static_cast<const ComprehensionExpr&>(expr), values, dtypes, callback, dtype, context);\n                }\n                case Expr::Builtin:\n                {\n                    return evaluate(static_cast<const BuiltinExpr&>(expr), values, dtypes, callback, dtype);\n                }\n                case Expr::Invocation:\n                {\n                    return evaluate(static_cast<const InvocationExpr&>(expr), values, dtypes, callback, dtype, context);\n                }\n                default:\n                {\n                    assert(false);\n                    return Value::none();\n                }\n            }\n        }\n\n        static Value evaluateLiteral( const Expr& expr )\n        {\n            auto type = static_cast<const PrimitiveType&>(*expr.type());\n            switch ( type.name() )\n            {\n                case 
Typename::Integer:\n                {\n                    return evaluate(static_cast<const IntegerExpr&>(expr));\n                }\n                case Typename::Scalar:\n                {\n                    return evaluate(static_cast<const ScalarExpr&>(expr));\n                }\n                case Typename::Logical:\n                {\n                    return evaluate(static_cast<const LogicalExpr&>(expr));\n                }\n                case Typename::String:\n                {\n                    return evaluate(static_cast<const StringExpr&>(expr));\n                }\n                default:\n                {\n                    assert(false);\n                    return Value::none();\n                }\n            }\n        }\n\n        static Value evaluate( const ScalarExpr& scalar )\n        {\n            return Value::scalar(scalar.value());\n        }\n\n        static Value evaluate( const IntegerExpr& integer )\n        {\n            return Value::integer(integer.value());\n        }\n\n        static Value evaluate( const LogicalExpr& logical )\n        {\n            return Value::logical(logical.value());\n        }\n\n        static Value evaluate( const StringExpr& string )\n        {\n            return Value::string(string.value());\n        }\n\n        static Value evaluate( const IdentifierExpr& identifier, const Dictionary<Value>& values )\n        {\n            if ( !values.count(identifier.name()) )\n            {\n                throw Error(identifier.position(), \"undefined identifier '%s'\", identifier.name().c_str());\n            }\n            return values.at(identifier.name());\n        }\n\n        Value evaluate( const ArrayExpr& array, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype, const Value& context )\n        {\n            Value::items_t items(array.size());\n            for ( size_t i = 0; i < array.size(); 
++i )\n            {\n                auto ctx = context.kind() == Value::Array ? context[i] : Value::none();\n                items[i] = evaluate(array.item(i), values, dtypes, callback, dtype, ctx);\n            }\n            return Value::array(items);\n        }\n\n        Value evaluate( const TupleExpr& tuple, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype, const Value& context )\n        {\n            Value::items_t items(tuple.size());\n            for ( size_t i = 0; i < tuple.size(); ++i )\n            {\n                auto ctx = context.kind() == Value::Tuple ? context[i] : Value::none();\n                items[i] = evaluate(tuple.item(i), values, dtypes, callback, dtype, ctx);\n            }\n            return Value::tuple(items);\n        }\n\n        Value evaluate( const SubscriptExpr& subscript, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype )\n        {\n            Value sequence = evaluate(subscript.sequence(), values, dtypes, callback, dtype);\n\n            if ( subscript.isRange() )\n            {\n                Value::integer_t i = subscript.begin() ? evaluate(*subscript.begin(), values, dtypes, callback, dtype).integer() : (Value::integer_t)0;\n                if ( i < 0 )\n                {\n                    i += (Value::integer_t)sequence.size();\n                }\n                if ( i < 0 || i > (Value::integer_t)sequence.size() )\n                {\n                    throw Error(subscript.position(), \"range begin (%d) out of bounds (size = %d)\", (int)i, (int)sequence.size());\n                }\n\n                Value::integer_t j = subscript.end() ? 
evaluate(*subscript.end(), values, dtypes, callback, dtype).integer() : (Value::integer_t)sequence.size();\n                if ( j < 0 )\n                {\n                    j += (Value::integer_t)sequence.size();\n                }\n                if ( j < 0 || j > (Value::integer_t)sequence.size() )\n                {\n                    throw Error(subscript.position(), \"range end (%d) out of bounds (size = %d)\", (int)j, (int)sequence.size());\n                }\n\n                if ( j < i )\n                {\n                    throw Error(subscript.position(), \"invalid range: %d:%d\", (int)i, (int)j);\n                }\n\n                if ( sequence.kind() == Value::String )\n                {\n                    return Value::string(sequence.string().substr(i,j-i));\n                }\n                else\n                {\n                    auto it = sequence.items().begin();\n                    Value::items_t items(it + i, it + j);\n                    return Value::array(items);\n                }\n            }\n            else\n            {\n                Value::integer_t index = evaluate(*subscript.begin(), values, dtypes, callback, dtype).integer();\n                if ( index < 0 )\n                {\n                    index += (Value::integer_t)sequence.size();\n                }\n                if ( index < 0 || index >= (Value::integer_t)sequence.size() )\n                {\n                    throw Error(subscript.position(), \"index (%d) out of bounds (size = %d)\", (int)index, (int)sequence.size());\n                }\n\n                if ( sequence.kind() == Value::String )\n                {\n                    return Value::string(sequence.string().substr(index,1));\n                }\n                else\n                {\n                    return sequence[index];\n                }\n            }\n        }\n\n        Value evaluate( const UnaryExpr& unary, const Dictionary<Value>& values, 
Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype )\n        {\n            Value right = evaluate(unary.right(), values, dtypes, callback, dtype);\n\n            if ( unary.op() == '!' )\n            {\n                if ( right.kind() == Value::Logical )\n                {\n                    return Value::logical(!right.logical());\n                }\n            }\n            else if ( unary.op() == '-' )\n            {\n                if ( right.kind() == Value::Integer )\n                {\n                    return Value::integer(-right.integer());\n                }\n                else if ( right.kind() == Value::Scalar )\n                {\n                    return Value::scalar(-right.scalar());\n                }\n            }\n            else if ( unary.op() == '+' )\n            {\n                return right;\n            }\n\n            assert(false);\n            return Value::none();\n        }\n\n        Value evaluate( const BinaryExpr& binary, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype )\n        {\n            bool lazy = binary.op() == Lexer::And || binary.op() == Lexer::Or;\n\n            Value left = evaluate(binary.left(), values, dtypes, callback, dtype);\n            Value right = lazy ? 
Value::none() : evaluate(binary.right(), values, dtypes, callback, dtype);\n\n            switch ( binary.op() )\n            {\n                case '+':\n                {\n                    if ( left.kind() == Value::String && right.kind() == Value::String )\n                    {\n                        return Value::string(left.string() + right.string());\n                    }\n                    else if ( left.kind() == Value::Array && right.kind() == Value::Array )\n                    {\n                        Value::items_t items = left.array();\n                        items.insert(items.end(), right.array().begin(), right.array().end());\n                        return Value::array(items);\n                    }\n                    else\n                    {\n                        return evaluateBinary<std::plus>(left, right);\n                    }\n                }\n                case '*':\n                {\n                    if ( left.kind() == Value::String && right.kind() == Value::Integer )\n                    {\n                        Value::string_t str;\n                        for ( size_t i = 0; i < (size_t)right.integer(); ++i )\n                        {\n                            str += left.string();\n                        }\n                        return Value::string(str);\n                    }\n                    else if ( left.kind() == Value::Array && right.kind() == Value::Integer )\n                    {\n                        Value::items_t items;\n                        for ( size_t i = 0; i < (size_t)right.integer(); ++i )\n                        {\n                            items.insert(items.end(), left.array().begin(), left.array().end());\n                        }\n                        return Value::array(items);\n                    }\n                    else\n                    {\n                        return evaluateBinary<std::multiplies>(left, right);\n                    }\n        
        }\n                case '-':\n                {\n                    return evaluateBinary<std::minus>(left, right);\n                }\n                case '/':\n                {\n                    return evaluateBinary<std::divides>(left, right);\n                }\n                case '^':\n                {\n                    return evaluateBinary<power>(left, right);\n                }\n                case '<':\n                {\n                    return evaluateBinary<std::less>(left, right);\n                }\n                case '>':\n                {\n                    return evaluateBinary<std::greater>(left, right);\n                }\n                case Lexer::Le:\n                {\n                    return evaluateBinary<std::less_equal>(left, right);\n                }\n                case Lexer::Ge:\n                {\n                    return evaluateBinary<std::greater_equal>(left, right);\n                }\n                case Lexer::Eq:\n                {\n                    return evaluateBinary<std::equal_to>(left, right);\n                }\n                case Lexer::Ne:\n                {\n                    return evaluateBinary<std::not_equal_to>(left, right);\n                }\n                case Lexer::And:\n                {\n                    return !left.logical() ? left : evaluate(binary.right(), values, dtypes, callback, dtype);\n                }\n                case Lexer::Or:\n                {\n                    return left.logical() ? 
left : evaluate(binary.right(), values, dtypes, callback, dtype);\n                }\n                case Lexer::In:\n                {\n                    auto& items = right.array();\n                    bool contains = std::find(items.begin(), items.end(), left) != items.end();\n                    return Value::logical(contains);\n                }\n                default:\n                {\n                    break;\n                }\n            }\n\n            assert(false);\n            return Value::none();\n        }\n\n        Value evaluate( const SelectExpr& select, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype, const Value& context )\n        {\n            Value condition = evaluate(select.condition(), values, dtypes, callback, dtype);\n            return condition.logical() ? evaluate(select.trueValue(), values, dtypes, callback, dtype, context) :\n                                         evaluate(select.falseValue(), values, dtypes, callback, dtype, context);\n        }\n\n        Value evaluate( const ComprehensionExpr& comprehension, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype, const Value& context )\n        {\n            std::vector<Value> iterables;\n            for ( size_t i = 0; i < comprehension.iteratorCount(); ++i )\n            {\n                auto iterable = evaluate(comprehension.iterable(i), values, dtypes, callback, dtype);\n                iterables.push_back(iterable);\n            }\n            \n            const size_t length = iterables.front().size();\n            for ( size_t i = 1; i < iterables.size(); ++i )\n            {\n                if ( iterables[i].size() != length )\n                {\n                    throw Error(comprehension.position(), \"iterables must have the same length in array comprehension\");\n                }\n 
           }\n\n            Value::items_t items;\n\n            Dictionary<Value> ids = values;\n            for ( size_t i = 0; i < length; ++i )\n            {\n                for ( size_t k = 0; k < iterables.size(); ++k )\n                {\n                    assign(comprehension.iterator(k), iterables[k][i], ids, dtypes, callback);\n                }\n\n                bool accept = comprehension.condition() ? evaluate(*comprehension.condition(), ids, dtypes, callback, dtype).logical() : true;\n                if ( accept )\n                {\n                    auto ctx = context.kind() == Value::Array && items.size() < context.size() ? context[items.size()] : Value::none();\n                    auto item = evaluate(comprehension.item(), ids, dtypes, callback, dtype, ctx);\n                    items.push_back(item);\n                }\n\n                for ( size_t k = 0; k < iterables.size(); ++k )\n                {\n                    unassign(comprehension.iterator(k), ids);\n                }\n            }\n            return Value::array(items);\n        }\n\n        Value evaluate( const InvocationExpr& invocation, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype, const Value& context )\n        {\n            auto& fragment = _fragments.at(invocation.target());\n            auto& proto = fragment.prototype();\n\n            Dictionary<Value> ids;\n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n                auto arg = invocation.arg(param.name());\n                ids[param.name()] = arg ? evaluate(*arg, values, dtypes, callback, dtype) : param.defaultValue();\n            }\n\n            const PrimitiveType* dataType = invocation.dataType() == primitiveType(Typename::Generic) ? 
dtype : invocation.dataType();\n            if ( dataType )\n            {\n                ids[\"?\"] = Value::string(dataType->toString());\n            }\n            \n            if ( !invocation.type()->isAttribute() )\n            {\n                if ( context )\n                {\n                    checkStructure(context, invocation.type(), invocation.position());\n                }\n                \n                auto& resultType = static_cast<const TupleType&>(*invocation.type());\n                if ( proto.resultCount() == 1 )\n                {\n                    ids[proto.result(0).name()] = getResultValue(context, resultType, proto.name());\n                }\n                else\n                {\n                    assert(context.kind() == Value::Tuple);\n                    for ( size_t i = 0; i < proto.resultCount(); ++i )\n                    {\n                        ids[proto.result(i).name()] = getResultValue(context[i], *resultType.itemType(i), proto.name());\n                    }\n                }\n            }\n            \n            bool lower = fragment.assignmentCount() && _lowered.count(proto.name());\n            if ( lower )\n            {\n                for ( size_t i = 0; i < fragment.assignmentCount(); ++i )\n                {\n                    auto& assignment = fragment.assignment(i);\n                    \n                    const Value ctx = evaluateLvalue(assignment.lhs(), ids, false);\n                    try\n                    {\n                        evaluateAssign(assignment.lhs(), assignment.rhs(), ids, dtypes, callback, dataType, ctx);\n                    }\n                    catch ( const Error& e )\n                    {\n                        throw Error(chain(e.position(), invocation.position()), e.what());\n                    }\n                }\n            }\n            \n            Value value;\n            if ( proto.resultCount() == 1 )\n            {\n                
value = ids[proto.result(0).name()];\n            }\n            else\n            {\n                Value::items_t items(proto.resultCount());\n                for ( size_t i = 0; i < proto.resultCount(); ++i )\n                {\n                    items[i] = ids[proto.result(i).name()];\n                }\n                value = Value::tuple(items);\n            }\n            \n            if ( hasNone(value) )\n            {\n                throw Error(invocation.position(), \"could not evaluate invocation (possibly unknown result array length)\");\n            }\n            \n            if ( !lower )\n            {\n                declare(value, invocation.type(), dtypes, dtype);\n                callback.operation(proto, ids, dtypes);\n            }\n            \n            return value;\n        }\n\n        Value evaluate( const BuiltinExpr& builtin, const Dictionary<Value>& values, Dictionary<Typename>& dtypes,\n                       Callback& callback, const PrimitiveType* dtype )\n        {\n            Value arg = evaluate(builtin.arg(), values, dtypes, callback, dtype);\n\n            switch ( builtin.op() )\n            {\n                case Lexer::LengthOf:\n                {\n                    auto length = arg.kind() == Value::String ? arg.string().length() : arg.array().size();\n                    return Value::integer((Value::integer_t)length);\n                }\n                case Lexer::RangeOf:\n                {\n                    auto length = arg.kind() == Value::String ? 
arg.string().length() : arg.array().size();\n\n                    Value::items_t items(length);\n                    for ( size_t i = 0; i < length; ++i )\n                    {\n                        items[i] = Value::integer((Value::integer_t)i);\n                    }\n                    return Value::array(items);\n                }\n                case Lexer::ShapeOf:\n                {\n                    throw Error(builtin.position(), \"the use of operator 'shape_of' is deprecated and is not supported\");\n                }\n                case Lexer::Integer:\n                {\n                    if ( arg.kind() == Value::Integer )\n                    {\n                        return arg;\n                    }\n                    else if ( arg.kind() == Value::Scalar )\n                    {\n                        return Value::integer((Value::integer_t)arg.scalar());\n                    }\n                    else if ( arg.kind() == Value::Logical )\n                    {\n                        return Value::integer((Value::integer_t)arg.logical());\n                    }\n                    else if ( arg.kind() == Value::String )\n                    {\n                        char* end;\n                        const char* str = arg.string().c_str();\n\n                        auto value = (Value::integer_t)std::strtol(str, &end, 10);\n                        if ( end == str )\n                        {\n                            throw Error(builtin.position(), \"cannot convert string '%s' to integer\", str);\n                        }\n                        return Value::integer(value);\n                    }\n                    break;\n                }\n                case Lexer::Scalar:\n                {\n                    if ( arg.kind() == Value::Scalar )\n                    {\n                        return arg;\n                    }\n                    else if ( arg.kind() == Value::Integer )\n                    
{\n                        return Value::scalar((Value::scalar_t)arg.integer());\n                    }\n                    else if ( arg.kind() == Value::Logical )\n                    {\n                        return Value::scalar((Value::scalar_t)arg.logical());\n                    }\n                    else if ( arg.kind() == Value::String )\n                    {\n                        char* end;\n                        const char* str = arg.string().c_str();\n\n                        auto value = (Value::scalar_t)std::strtof(str, &end);\n                        if ( end == str )\n                        {\n                            throw Error(builtin.position(), \"cannot convert string '%s' to scalar\", str);\n                        }\n                        return Value::scalar(value);\n                    }\n                    break;\n                }\n                case Lexer::Logical:\n                {\n                    if ( arg.kind() == Value::Logical )\n                    {\n                        return arg;\n                    }\n                    else if ( arg.kind() == Value::Integer )\n                    {\n                        return Value::logical(arg.integer() != 0);\n                    }\n                    else if ( arg.kind() == Value::Scalar )\n                    {\n                        return Value::logical(arg.scalar() != 0);\n                    }\n                    else if ( arg.kind() == Value::String )\n                    {\n                        return Value::logical(!arg.string().empty());\n                    }\n                    break;\n                }\n                case Lexer::String:\n                {\n                    if ( arg.kind() == Value::Logical )\n                    {\n                        return Value::string(std::to_string(arg.logical()));\n                    }\n                    else if ( arg.kind() == Value::Integer )\n                    {\n                  
      return Value::string(std::to_string(arg.integer()));\n                    }\n                    else if ( arg.kind() == Value::Scalar )\n                    {\n                        return Value::string(std::to_string(arg.scalar()));\n                    }\n                    else if ( arg.kind() == Value::String )\n                    {\n                        return arg;\n                    }\n                    break;\n                }\n                default:\n                {\n                    break;\n                }\n            }\n\n            assert(false);\n            return Value::none();\n        }\n\n    private:\n\n        template<template<typename> class Op>\n        static Value evaluateBinary( const Value& left, const Value& right )\n        {\n            if ( left.kind() == Value::Integer && right.kind() == Value::Integer )\n            {\n                Op<Value::integer_t> op;\n                return Value::make(op(left.integer(), right.integer()));\n            }\n            else if ( left.kind() == Value::Scalar && right.kind() == Value::Scalar )\n            {\n                Op<Value::scalar_t> op;\n                return Value::make(op(left.scalar(), right.scalar()));\n            }\n            assert(false);\n            return Value::none();\n        }\n        \n        static Typename dtypeOf( const Value& value, const Dictionary<Typename>& dtypes )\n        {\n            switch ( value.kind() )\n            {\n                case Value::Scalar:\n                    return Typename::Scalar;\n                case Value::Integer:\n                    return Typename::Integer;\n                case Value::Logical:\n                    return Typename::Logical;\n                case Value::String:\n                    return Typename::String;\n                case Value::Identifier:\n                    return dtypes.at(value.identifier());\n                default:\n                    assert(false);\n         
           return Typename::Generic;\n            }\n        }\n        \n        void insertCopy( const Value& lvalue, const Value& rvalue, Dictionary<Typename>& dtypes, Callback& callback )\n        {\n            const Typename dtype = dtypeOf(rvalue, dtypes);\n            const Value dvalue = Value::string(toString(dtype));\n            \n            const Prototype& proto = _fragments.at(\"copy\").prototype();\n            const Dictionary<Value> args =\n            {\n                std::make_pair(\"x\", rvalue),\n                std::make_pair(\"y\", lvalue),\n                std::make_pair(\"?\", dvalue)\n            };\n            \n            dtypes[lvalue.identifier()] = dtype;\n            callback.operation(proto, args, dtypes);\n        }\n\n        void assign( const Expr& lhs, const Value& rvalue, Dictionary<Value>& ids, Dictionary<Typename>& dtypes, Callback& callback )\n        {\n            switch ( lhs.kind() )\n            {\n                case Expr::Array:\n                {\n                    auto& array = static_cast<const ArrayExpr&>(lhs);\n                    if ( array.size() != rvalue.size() )\n                    {\n                        throw Error(lhs.position(), \"cannot assign array of length %d to array of length %d\",\n                                    (int)rvalue.size(), (int)array.size());\n                    }\n                    for ( size_t i = 0; i < array.size(); ++i )\n                    {\n                        assign(array.item(i), rvalue[i], ids, dtypes, callback);\n                    }\n                    break;\n                }\n                case Expr::Tuple:\n                {\n                    auto& tuple = static_cast<const TupleExpr&>(lhs);\n                    assert(tuple.size() == rvalue.size());\n\n                    for ( size_t i = 0; i < tuple.size(); ++i )\n                    {\n                        assign(tuple.item(i), rvalue[i], ids, dtypes, callback);\n                   
 }\n                    break;\n                }\n                case Expr::Identifier:\n                {\n                    auto& identifier = static_cast<const IdentifierExpr&>(lhs);\n                    auto& lvalue = ids[identifier.name()];\n\n                    if ( lvalue )\n                    {\n                        if ( lvalue != rvalue )\n                        {\n                            if ( lvalue.kind() == Value::Array || lvalue.kind() == Value::Tuple )\n                            {\n                                if ( lvalue.kind() == Value::Array && lvalue.size() != rvalue.size() )\n                                {\n                                    throw Error(lhs.position(), \"cannot assign array of length %d to array of length %d\",\n                                                (int)rvalue.size(), (int)lvalue.size());\n                                }\n                                for ( size_t i = 0; i < lvalue.size(); ++i )\n                                {\n                                    insertCopy(lvalue[i], rvalue[i], dtypes, callback);\n                                }\n                            }\n                            else\n                            {\n                                assert(lvalue.kind() == Value::Identifier);\n                                insertCopy(lvalue, rvalue, dtypes, callback);\n                            }\n                        }\n                    }\n                    else\n                    {\n                        lvalue = rvalue;\n                    }\n\n                    break;\n                }\n                default:\n                {\n                    assert(false);\n                    break;\n                }\n            }\n        }\n\n        void unassign( const Expr& lhs, Dictionary<Value>& ids )\n        {\n            switch ( lhs.kind() )\n            {\n                case Expr::Array:\n                case Expr::Tuple:\n        
        {\n                    auto& items = static_cast<const ItemExpr&>(lhs);\n                    for ( size_t i = 0; i < items.size(); ++i )\n                    {\n                        unassign(items.item(i), ids);\n                    }\n                    break;\n                }\n                case Expr::Identifier:\n                {\n                    auto& identifier = static_cast<const IdentifierExpr&>(lhs);\n                    ids.erase(identifier.name());\n                    break;\n                }\n                default:\n                {\n                    assert(false);\n                    break;\n                }\n            }\n        }\n\n        static void declare( const Value& arg, const Type* type, Dictionary<Typename>& dtypes, const PrimitiveType* dtype )\n        {\n            switch ( arg.kind() )\n            {\n                case Value::Identifier:\n                {\n                    assert(type->kind() == Type::Tensor);\n                    const std::string& id = arg.identifier();\n                    auto tensorType = static_cast<const TensorType*>(type);\n                    assert(tensorType->dataType()->kind() == Type::Primitive);\n                    auto dataType = static_cast<const PrimitiveType*>(tensorType->dataType());\n                    auto name = dataType->name() == Typename::Generic ? 
dtype->name() : dataType->name();\n                    assert(!dtypes.count(id) || dtypes.at(id) == name);\n                    dtypes.emplace(id, name);\n                    break;\n                }\n                case Value::Array:\n                {\n                    assert(type->kind() == Type::Array);\n                    auto arrayType = static_cast<const ArrayType*>(type);\n                    for ( size_t i = 0; i < arg.size(); ++i )\n                    {\n                        declare(arg[i], arrayType->itemType(), dtypes, dtype);\n                    }\n                    break;\n                }\n                case Value::Tuple:\n                {\n                    assert(type->kind() == Type::Tuple);\n                    auto tupleType = static_cast<const TupleType*>(type);\n                    for ( size_t i = 0; i < arg.size(); ++i )\n                    {\n                        declare(arg[i], tupleType->itemType(i), dtypes, dtype);\n                    }\n                    break;\n                }\n                default:\n                {\n                    break;\n                }\n            }\n        }\n        \n        static void checkStructure( const Value& value, const Type* type, const Error::Position& position )\n        {\n            switch ( type->kind() )\n            {\n                case Type::Primitive:\n                case Type::Tensor:\n                {\n                    if ( value.kind() != Value::Identifier )\n                    {\n                        throw Error(position, \"invocation context mismatch: expected identifier on left hand side to match type '%s'\",\n                                    type->toString().c_str());\n                    }\n                    break;\n                }\n                case Type::Array:\n                {\n                    if ( value.kind() == Value::Identifier || value.kind() == Value::None )\n                    {\n                        
break;\n                    }\n                    if ( value.kind() != Value::Array )\n                    {\n                        throw Error(position, \"invocation context mismatch: expected array on left hand side to match type '%s'\",\n                                    type->toString().c_str());\n                    }\n                    auto& array = static_cast<const ArrayType&>(*type);\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        checkStructure(value[i], array.itemType(), position);\n                    }\n                    break;\n                }\n                case Type::Tuple:\n                {\n                    if ( value.kind() != Value::Tuple )\n                    {\n                        throw Error(position, \"invocation context mismatch: expected tuple on left hand side to match type '%s'\",\n                                    type->toString().c_str());\n                    }\n                    auto& tuple = static_cast<const TupleType&>(*type);\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        checkStructure(value[i], tuple.itemType(i), position);\n                    }\n                    break;\n                }\n            }\n        }\n\n    private:\n\n        typedef Error::Position Position;\n\n        Position chain( const Position& position, const Position& origin )\n        {\n            const Position chained = { position.line, position.column, position.filename, &origin };\n            return chained;\n        }\n\n    private:\n\n        std::string nextTensorId( const std::string& op )\n        {\n            return op + std::to_string(++_tensorCounts[op]);\n        }\n\n        std::string makeTensorId( const std::string& op )\n        {\n            std::string id;\n            do\n            {\n                id = nextTensorId(op);\n            }\n            while ( 
isReservedId(id) );\n\n            _reservedIds.insert(id);\n            return id;\n        }\n\n        Value makeResultValue( const std::string& op, size_t idx )\n        {\n            auto id = makeTensorId(op);\n            return Value::identifier(idx ? indexedId(id, idx) : id);\n        }\n        \n        Value getResultValue( const Value& context, const Type& type, const std::string op, size_t idx = 0 )\n        {\n            if ( !context )\n            {\n                if ( type.kind() == Type::Array )\n                {\n                    return Value::none();\n                }\n                return makeResultValue(op, idx);\n            }\n            else if ( context.kind() == Value::Identifier )\n            {\n                if ( type.kind() == Type::Array )\n                {\n                    return Value::none();\n                }\n                return context.identifier() != \"\" ? context : makeResultValue(op, idx);\n            }\n            else if ( context.kind() == Value::Array )\n            {\n                std::vector<Value> results(context.size());\n                auto& arrayType = static_cast<const ArrayType&>(type);\n                for ( size_t i = 0; i < context.size(); ++i )\n                {\n                    results[i] = getResultValue(context[i], *arrayType.itemType(), op, i + 1);\n                }\n                return Value::array(results);\n            }\n            else if ( context.kind() == Value::Tuple )\n            {\n                std::vector<Value> results(context.size());\n                auto& tupleType = static_cast<const TupleType&>(type);\n                for ( size_t i = 0; i < context.size(); ++i )\n                {\n                    results[i] = getResultValue(context[i], *tupleType.itemType(i), op);\n                }\n                return Value::array(results);\n            }\n            else\n            {\n                assert(false);\n                return 
Value();\n            }\n        }\n        \n        bool hasNone( const Value& value )\n        {\n            switch ( value.kind() )\n            {\n                case Value::None:\n                {\n                    return true;\n                }\n                case Value::Tuple:\n                case Value::Array:\n                {\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        if ( hasNone(value[i]) )\n                        {\n                            return true;\n                        }\n                    }\n                    return false;\n                }\n                default:\n                {\n                    return false;\n                }\n            }\n        }\n\n        void addReservedIdentifiers( const Expr& expr )\n        {\n            switch ( expr.kind() )\n            {\n                case Expr::Identifier:\n                {\n                    auto& identifier = static_cast<const IdentifierExpr&>(expr);\n                    _reservedIds.insert(identifier.name());\n                    break;\n                }\n                case Expr::Array:\n                case Expr::Tuple:\n                {\n                    auto& items = static_cast<const ItemExpr&>(expr);\n                    for ( size_t i = 0; i < items.size(); ++i )\n                    {\n                        addReservedIdentifiers(items.item(i));\n                    }\n                    break;\n                }\n                default:\n                {\n                    assert(false);\n                    break;\n                }\n            }\n        }\n\n        bool isReservedId( const std::string& id )\n        {\n            return _reservedIds.find(id) != _reservedIds.end();\n        }\n\n        bool isReservedId( const std::string& id, const size_t size )\n        {\n            for ( size_t i = 0; i < size; ++i )\n            {\n                
if ( isReservedId(indexedId(id,i+1)) )\n                {\n                    return true;\n                }\n            }\n            return false;\n        }\n\n        std::string indexedId( const std::string& id, const size_t idx )\n        {\n            return id + \"_\" + std::to_string(idx);\n        }\n\n    private:\n\n        template<typename T>\n        struct power\n        {\n            T operator()( const T& left, const T& right )\n            {\n                return (T)std::pow(left, right);\n            }\n        };\n\n    private:\n\n        const Fragments& _fragments;\n        const std::set<std::string>& _lowered;\n\n        Dictionary<size_t> _tensorCounts;\n        std::set<std::string> _reservedIds;\n    };\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/comp/expression.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_EXPRESSION_H_\n#define _NNEF_EXPRESSION_H_\n\n#include \"../common/dictionary.h\"\n#include \"../common/typespec.h\"\n#include \"../common/lexer.h\"\n#include <functional>\n#include <algorithm>\n#include <iostream>\n#include <memory>\n#include <vector>\n#include <string>\n\n\nnamespace nnef\n{\n\n    template<typename T>\n    using Shared = std::shared_ptr<T>;\n    \n\n\n    class Expr\n    {\n    public:\n\n        typedef Error::Position Position;\n\n        enum Kind { Literal, Identifier, Array, Tuple, Subscript, Comprehension, Unary, Binary, Select, Invocation, Builtin };\n\n    public:\n\n        Expr( const Position& position )\n        : _position(position)\n        {\n        }\n\n        const Position& position() const\n        {\n            return _position;\n        }\n\n        virtual ~Expr() {}\n\n        virtual Kind kind() const = 0;\n        virtual const Type* type() const = 0;\n        virtual void print( std::ostream& os ) const = 0;\n\n    private:\n\n        Position _position;\n    };\n\n\n    inline std::ostream& operator<<( std::ostream& os, const Expr& expr )\n    {\n        expr.print(os);\n        return os;\n    }\n\n\n    template<typename V>\n    class LiteralExpr : public Expr\n    {\n    public:\n\n        typedef V value_type;\n\n    public:\n\n        LiteralExpr( const Position& 
position, const value_type& value, const Type* type )\n        : Expr(position), _value(value), _type(type)\n        {\n        }\n\n        const value_type& value() const\n        {\n            return _value;\n        }\n\n        virtual Kind kind() const\n        {\n            return Literal;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            print(os, _value);\n        }\n\n    private:\n\n        template<typename S>\n        static void print( std::ostream& os, const S& value )\n        {\n            os << std::boolalpha << value;\n        }\n\n        static void print( std::ostream& os, const std::string& value )\n        {\n            os << '\\'' << value << '\\'';\n        }\n\n    protected:\n\n        value_type _value;\n        const Type* _type;\n    };\n\n\n    typedef LiteralExpr<float> ScalarExpr;\n    typedef LiteralExpr<int> IntegerExpr;\n    typedef LiteralExpr<bool> LogicalExpr;\n    typedef LiteralExpr<std::string> StringExpr;\n\n\n    class IdentifierExpr : public Expr\n    {\n    public:\n\n        IdentifierExpr( const Position& position, const std::string& name, const Type* type )\n        : Expr(position), _name(name), _type(type)\n        {\n        }\n\n        const std::string& name() const\n        {\n            return _name;\n        }\n\n        virtual Kind kind() const\n        {\n            return Identifier;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            os << _name;\n        }\n\n    private:\n\n        std::string _name;\n        const Type* _type;\n    };\n\n\n    class SubscriptExpr : public Expr\n    {\n    public:\n\n        SubscriptExpr( const Position& position, const Shared<Expr>& sequence, const Shared<Expr>& begin, const Shared<Expr>& end, 
const Type* type )\n        : Expr(position), _sequence(sequence), _begin(begin), _end(end), _type(type)\n        {\n        }\n\n        virtual bool isRange() const\n        {\n            return _begin != _end || !_begin;\n        }\n\n        const Expr& sequence() const\n        {\n            return *_sequence;\n        }\n\n        const Expr* begin() const\n        {\n            return _begin.get();\n        }\n\n        const Expr* end() const\n        {\n            return _end.get();\n        }\n\n        virtual Kind kind() const\n        {\n            return Subscript;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            _sequence->print(os);\n            os << '[';\n            if ( _begin )\n            {\n                _begin->print(os);\n            }\n            if ( isRange() )\n            {\n                os << ':';\n            }\n            if ( _end )\n            {\n                _end->print(os);\n            }\n            os << ']';\n        }\n\n    private:\n\n        const Shared<Expr> _sequence;\n        const Shared<Expr> _begin;\n        const Shared<Expr> _end;\n        const Type* _type;\n    };\n\n\n    class ItemExpr : public Expr\n    {\n    public:\n\n        ItemExpr( const Position& position, const Type* type )\n        : Expr(position), _type(type)\n        {\n        }\n\n        ItemExpr( const Position& position, std::vector<Shared<Expr>>& items, const Type* type )\n        : Expr(position), _items(std::move(items)), _type(type)\n        {\n        }\n\n        size_t size() const\n        {\n            return _items.size();\n        }\n\n        const Expr& item( const size_t i ) const\n        {\n            return *_items[i];\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n    protected:\n        \n        
std::vector<Shared<Expr>> _items;\n        const Type* _type;\n    };\n\n\n    class ArrayExpr : public ItemExpr\n    {\n    public:\n\n        ArrayExpr( const Position& position, const Type* type )\n        : ItemExpr(position, type)\n        {\n        }\n\n        ArrayExpr( const Position& position, std::vector<Shared<Expr>>& items, const Type* type )\n        : ItemExpr(position, items, type)\n        {\n        }\n\n        virtual Kind kind() const\n        {\n            return Array;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            os << '[';\n            for ( size_t i = 0; i < _items.size(); ++i )\n            {\n                if ( i )\n                {\n                    os << ',';\n                }\n                _items[i]->print(os);\n            }\n            os << ']';\n        }\n    };\n\n\n    class TupleExpr : public ItemExpr\n    {\n    public:\n        \n        TupleExpr( const Position& position, const Type* type )\n        : ItemExpr(position, type)\n        {\n        }\n\n        TupleExpr( const Position& position, std::vector<Shared<Expr>>& items, const Type* type )\n        : ItemExpr(position, items, type)\n        {\n        }\n\n        virtual Kind kind() const\n        {\n            return Tuple;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            os << '(';\n            for ( size_t i = 0; i < _items.size(); ++i )\n            {\n                if ( i )\n                {\n                    os << ',';\n                }\n                _items[i]->print(os);\n            }\n            os << ')';\n        }\n    };\n\n\n    class ComprehensionExpr : public Expr\n    {\n    public:\n\n        ComprehensionExpr( const Position& position, std::vector<Shared<Expr>>& iterators, std::vector<Shared<Expr>>& iterables,\n                          const Shared<Expr>& condition, const Shared<Expr>& item, const Type* type )\n        : 
Expr(position), _iterators(std::move(iterators)), _iterables(std::move(iterables)), _condition(condition), _item(item), _type(type)\n        {\n        }\n        \n        size_t iteratorCount() const\n        {\n            return _iterators.size();\n        }\n\n        const Expr& iterator( const size_t i ) const\n        {\n            return *_iterators[i];\n        }\n\n        const Expr& iterable( const size_t i ) const\n        {\n            return *_iterables[i];\n        }\n\n        const Expr* condition() const\n        {\n            return _condition.get();\n        }\n        \n        const Expr& item() const\n        {\n            return *_item;\n        }\n\n        virtual Kind kind() const\n        {\n            return Comprehension;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            os << '[';\n            os << \"for \";\n            for ( size_t i = 0; i < _iterators.size(); ++i )\n            {\n                if ( i )\n                {\n                    os << \", \";\n                }\n                _iterators[i]->print(os);\n                os << \" in \";\n                _iterables[i]->print(os);\n            }\n            if ( _condition )\n            {\n                os << \" if \";\n                _condition->print(os);\n            }\n            os << \" yield \";\n            _item->print(os);\n            os << ']';\n        }\n\n    private:\n\n        const std::vector<Shared<Expr>> _iterators;\n        const std::vector<Shared<Expr>> _iterables;\n        const Shared<Expr> _condition;\n        const Shared<Expr> _item;\n        const Type* _type;\n    };\n\n\n    class UnaryExpr : public Expr\n    {\n    public:\n\n        UnaryExpr( const Position& position, const Shared<Expr>& right, int op, const Type* type )\n        : Expr(position), _right(right), _type(type), _op(op)\n     
   {\n        }\n\n        const Expr& right() const\n        {\n            return *_right;\n        }\n\n        int op() const\n        {\n            return _op;\n        }\n\n        virtual Kind kind() const\n        {\n            return Unary;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            const std::string str = Lexer::tokenString(_op);\n\n            os << str;\n            if ( str.length() > 1 )\n            {\n                os << '(';\n            }\n            _right->print(os);\n            if ( str.length() > 1 )\n            {\n                os << ')';\n            }\n        }\n\n    private:\n\n        const Shared<Expr> _right;\n        const Type* _type;\n        int _op;\n    };\n\n\n    class BinaryExpr : public Expr\n    {\n    public:\n\n        BinaryExpr( const Position& position, const Shared<Expr>& left, const Shared<Expr>& right, int op, const Type* type )\n        : Expr(position), _left(left), _right(right), _type(type), _op(op)\n        {\n        }\n\n        const Expr& left() const\n        {\n            return *_left;\n        }\n\n        const Expr& right() const\n        {\n            return *_right;\n        }\n\n        int op() const\n        {\n            return _op;\n        }\n\n        virtual Kind kind() const\n        {\n            return Binary;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            if ( _left->kind() == Binary )\n            {\n                os << '(';\n            }\n            _left->print(os);\n            if ( _left->kind() == Binary )\n            {\n                os << ')';\n            }\n            os << ' ' << Lexer::tokenString(_op) << ' ';\n            if ( _right->kind() == Binary )\n            {\n             
   os << '(';\n            }\n            _right->print(os);\n            if ( _right->kind() == Binary )\n            {\n                os << ')';\n            }\n        }\n\n    private:\n\n        const Shared<Expr> _left;\n        const Shared<Expr> _right;\n        const Type* _type;\n        int _op;\n    };\n\n\n    class BuiltinExpr : public Expr\n    {\n    public:\n\n        BuiltinExpr( const Position& position, const Shared<Expr>& arg, int op, const Type* type )\n        : Expr(position), _arg(arg), _type(type), _op(op)\n        {\n        }\n\n        const Expr& arg() const\n        {\n            return *_arg;\n        }\n\n        int op() const\n        {\n            return _op;\n        }\n\n        virtual Kind kind() const\n        {\n            return Builtin;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            os << Lexer::tokenString(_op) << '(';\n            _arg->print(os);\n            os << ')';\n        }\n\n    private:\n\n        const Shared<Expr> _arg;\n        const Type* _type;\n        int _op;\n    };\n\n\n    class SelectExpr : public Expr\n    {\n    public:\n\n        SelectExpr( const Position& position, const Shared<Expr>& condition, const Shared<Expr>& trueValue, const Shared<Expr>& falseValue, const Type* type )\n        : Expr(position), _cond(condition), _true(trueValue), _false(falseValue), _type(type)\n        {\n        }\n\n        const Expr& condition() const\n        {\n            return *_cond;\n        }\n\n        const Expr& trueValue() const\n        {\n            return *_true;\n        }\n\n        const Expr& falseValue() const\n        {\n            return *_false;\n        }\n\n        virtual Kind kind() const\n        {\n            return Select;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        
virtual void print( std::ostream& os ) const\n        {\n            _true->print(os);\n            os << \" if \";\n            _cond->print(os);\n            os << \" else \";\n            _false->print(os);\n        }\n\n    private:\n\n        const Shared<Expr> _cond;\n        const Shared<Expr> _true;\n        const Shared<Expr> _false;\n        const Type* _type;\n    };\n\n\n    class InvocationExpr : public Expr\n    {\n    public:\n\n        typedef Dictionary<Shared<Expr>>::const_iterator const_iterator;\n\n    public:\n\n        InvocationExpr( const Position& position, const std::string& target, Dictionary<Shared<Expr>>& args, const Type* type,\n                       const PrimitiveType* dataType = nullptr )\n        : Expr(position), _target(target), _dataType(dataType), _args(std::move(args)), _type(type)\n        {\n        }\n\n        const std::string& target() const\n        {\n            return _target;\n        }\n        \n        const PrimitiveType* dataType() const\n        {\n            return _dataType;\n        }\n\n        const Expr* arg( const std::string& name ) const\n        {\n            auto it = _args.find(name);\n            return it != _args.end() ? 
it->second.get() : nullptr;\n        }\n\n        const_iterator begin() const\n        {\n            return _args.begin();\n        }\n\n        const_iterator end() const\n        {\n            return _args.end();\n        }\n\n        virtual Kind kind() const\n        {\n            return Kind::Invocation;\n        }\n\n        virtual const Type* type() const\n        {\n            return _type;\n        }\n\n        virtual void print( std::ostream& os ) const\n        {\n            os << _target;\n            if ( _dataType )\n            {\n                os << '<' << _dataType->toString() << '>';\n            }\n            os << '(';\n            for ( auto it = _args.begin(); it != _args.end(); ++it )\n            {\n                if ( it != _args.begin() )\n                {\n                    os << \", \";\n                }\n                os << it->first << \" = \" << *it->second;\n            }\n            os << ')';\n        }\n\n    private:\n\n        std::string _target;\n        const PrimitiveType* _dataType;\n        Dictionary<Shared<Expr>> _args;\n        const Type* _type;\n    };\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/comp/fragment.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_FRAGMENT_H_\n#define _NNEF_FRAGMENT_H_\n\n#include \"../common/prototype.h\"\n#include \"expression.h\"\n#include <iostream>\n\n\nnamespace nnef\n{\n\n    class Assignment\n    {\n    public:\n        \n        Assignment( const Shared<Expr>& lhs, const Shared<Expr>& rhs )\n        : _lhs(lhs), _rhs(rhs)\n        {\n        }\n        \n        const Expr& lhs() const\n        {\n            return *_lhs;\n        }\n        \n        const Expr& rhs() const\n        {\n            return *_rhs;\n        }\n        \n    private:\n        \n        const Shared<Expr> _lhs;\n        const Shared<Expr> _rhs;\n    };\n\n    \n    class Fragment\n    {\n    public:\n\n        Fragment( const Prototype& prototype )\n        : _prototype(prototype)\n        {\n        }\n\n        Fragment( const Prototype& prototype, std::vector<Assignment>& assignments )\n        : _prototype(prototype), _assignments(std::move(assignments))\n        {\n        }\n        \n        const Prototype& prototype() const\n        {\n            return _prototype;\n        }\n        \n        size_t assignmentCount() const\n        {\n            return _assignments.size();\n        }\n        \n        const Assignment& assignment( const size_t i ) const\n        {\n            return _assignments[i];\n        }\n        \n    private:\n      
  \n        const Prototype& _prototype;\n        const std::vector<Assignment> _assignments;\n    };\n\n\n    inline std::ostream& operator<<( std::ostream& os, const Assignment& assignment )\n    {\n        os << assignment.lhs() << \" = \" << assignment.rhs();\n        return os;\n    }\n\n    inline std::ostream& operator<<( std::ostream& os, const Fragment& fragment )\n    {\n        os << fragment.prototype() << std::endl;\n\n        if ( fragment.assignmentCount() )\n        {\n            os << '{' << std::endl;\n            for ( size_t i = 0; i < fragment.assignmentCount(); ++i )\n            {\n                os << '\\t' << fragment.assignment(i) << std::endl;\n            }\n            os << '}' << std::endl;\n        }\n\n        return os;\n    }\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/comp/stdlib_source.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_STDLIB_SOURCE_H_\n#define _NNEF_STDLIB_SOURCE_H_\n\n\nnamespace nnef {\n\n    template<typename T> struct _stdlib_source { static const char* text; };\n    template<typename T> const char* _stdlib_source<T>::text = R\"STDLIB(\n\n\n    # tensor declaration operations\n\n    fragment external<? = scalar>( shape: integer[] ) -> ( output: tensor<?> );\n    fragment variable<? = scalar>( shape: integer[], label: string ) -> ( output: tensor<?> );\n    fragment constant<? 
= scalar>( shape: integer[], value: ?[] ) ->  ( output: tensor<?> );\n\n    fragment update<?>( variable: tensor<?>, value: tensor<?> ) -> ( result: tensor<?> );\n\n\n    # tensor shape operations\n\n    fragment reshape<?>( input: tensor<?>, shape: integer[], axis_start: integer = 0, axis_count: integer = -1 ) -> ( output: tensor<?> );\n    fragment transpose<?>( input: tensor<?>, axes: integer[] ) -> ( output: tensor<?> );\n    fragment concat<?>( values: tensor<?>[], axis: integer ) -> ( value: tensor<?> );\n    fragment split<?>( value: tensor<?>, axis: integer, ratios: integer[] ) -> ( values: tensor<?>[] );\n    fragment slice<?>( input: tensor<?>, axes: integer[], begin: integer[], end: integer[], stride: integer[] = [] ) -> ( output: tensor<?> );\n    fragment squeeze<?>( input: tensor<?>, axes: integer[] ) -> ( output: tensor<?> );\n    fragment unsqueeze<?>( input: tensor<?>, axes: integer[] ) -> ( output: tensor<?> );\n    fragment stack<?>( values: tensor<?>[], axis: integer ) -> ( value: tensor<?> );\n    fragment unstack<?>( value: tensor<?>, axis: integer ) -> ( values: tensor<?>[] );\n    fragment tile<?>( input: tensor<?>, repeats: integer[] ) -> ( output: tensor<?> );\n    fragment pad( input: tensor<scalar>, padding: (integer, integer)[], border: string = 'constant', value: scalar = 0.0 ) -> ( output: tensor<scalar> );\n    fragment gather<?>( input: tensor<?>, indices: tensor<integer>, axis: integer = 0 ) -> ( output: tensor<?> );\n    fragment cast<?>( input: tensor<> ) -> ( output: tensor<?> );\n\n\n    # element-wise arithmetic operations\n\n    fragment add( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\n    fragment sub( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\n    fragment mul( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\n    fragment div( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\n    fragment pow( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: 
tensor<scalar> );\n\n    fragment exp( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment log( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment sin( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment cos( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment tan( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment sinh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment cosh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment tanh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment asin( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment acos( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment atan( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment asinh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment acosh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment atanh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment abs( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment sign( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment rcp( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment neg( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment copy<?>( x: tensor<?> ) -> ( y: tensor<?> );\n\n    # element-wise comparison operations\n\n    fragment lt( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n    fragment gt( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n    fragment le( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n    fragment ge( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n    fragment eq( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n    fragment ne( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n\n    # element-wise logical operations\n\n    fragment and( x: tensor<logical>, y: tensor<logical> ) -> ( z: tensor<logical> );\n    fragment or( x: tensor<logical>, y: tensor<logical> 
) -> ( z: tensor<logical> );\n    fragment not( x: tensor<logical> ) -> ( y: tensor<logical> );\n\n    # element-wise rounding operations\n\n    fragment floor( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment ceil( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n    fragment round( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n\n    # element-wise select operation\n\n    fragment select<?>( condition: tensor<logical>, true_value: tensor<?>, false_value: tensor<?> ) -> ( output: tensor<?> );\n\n    # simplifier operations\n\n    fragment sqr( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = x ^ 2.0;\n    }\n\n    fragment sqrt( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = x ^ 0.5;\n    }\n\n    fragment rsqr( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = x ^ -2.0;\n    }\n\n    fragment rsqrt( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = x ^ -0.5;\n    }\n\n    fragment log2( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = log(x) / log(2.0);\n    }\n\n    fragment min( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> )\n    {\n        z = select(x < y, x, y);\n    }\n\n    fragment max( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> )\n    {\n        z = select(x > y, x, y);\n    }\n\n    fragment clamp( x: tensor<scalar>, a: tensor<scalar>, b: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = max(min(x, b), a);\n    }\n\n\n    # matrix multiplication\n\n    fragment matmul( A: tensor<scalar>, B: tensor<scalar>, transposeA: logical = false, transposeB: logical = false ) -> ( C: tensor<scalar> );\n\n    \n    )STDLIB\" /* break the raw literal because of max length limit */ R\"STDLIB(\n\n    \n    # sliding-window operations\n\n    fragment conv(\n        input: tensor<scalar>,\n        filter: tensor<scalar>,\n        bias: tensor<scalar> = 0.0,\n        border: string = 'constant',\n        padding: (integer,integer)[] = 
[],\n        stride: integer[] = [],\n        dilation: integer[] = [],\n        groups: integer = 1 )\n    -> ( output: tensor<scalar> );\n\n    fragment deconv(\n        input: tensor<scalar>,\n        filter: tensor<scalar>,\n        bias: tensor<scalar> = 0.0,\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [],\n        output_shape: integer[] = [],\n        groups: integer = 1 )\n    -> ( output: tensor<scalar> );\n\n\n    fragment box(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [],\n        normalize: logical = false )\n    -> ( output: tensor<scalar> );\n\n    fragment debox(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [],\n        output_shape: integer[] = [],\n        normalize: logical = false )\n    -> ( output: tensor<scalar> );\n\n\n    fragment argmax_pool(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [] )\n    -> ( index: tensor<integer> );\n\n\n    fragment sample(\n        input: tensor<scalar>,\n        index: tensor<integer>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [] )\n    -> ( output: tensor<scalar> );\n\n    fragment desample(\n        input: tensor<scalar>,\n        index: tensor<integer>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        
dilation: integer[] = [],\n        output_shape: integer[] = [] )\n    -> ( output: tensor<scalar> );\n\n\n    # up/down-sampling operations\n\n    fragment nearest_downsample( input: tensor<scalar>, factor: integer[] ) -> ( output: tensor<scalar> )\n    {\n        dims = 2 + length_of(factor);\n        output = box(input, size = [1] * dims, stride = [1,1] + factor, padding = [(0,0)] * dims);\n    }\n\n    fragment area_downsample( input: tensor<scalar>, factor: integer[] ) -> ( output: tensor<scalar> )\n    {\n        dims = 2 + length_of(factor);\n        output = box(input, size = [1,1] + factor, stride = [1,1] + factor, padding = [(0,0)] * dims, normalize = true);\n    }\n\n    fragment nearest_upsample( input: tensor<scalar>, factor: integer[] ) -> ( output: tensor<scalar> )\n    {\n        dims = 2 + length_of(factor);\n        output = debox(input, size = [1,1] + factor, stride = [1,1] + factor, padding = [(0,0)] * dims);\n    }\n\n    fragment multilinear_upsample( input: tensor<scalar>, factor: integer[], method: string = 'symmetric', border: string = 'replicate' )\n    -> ( output: tensor<scalar> );\n\n\n    # reduce operations\n\n    fragment sum_reduce( input: tensor<scalar>, axes: integer[], normalize: logical = false ) -> ( output: tensor<scalar> );\n    fragment max_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<scalar> );\n    fragment min_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<scalar> );\n    fragment argmax_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<integer> );\n    fragment argmin_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<integer> );\n    fragment any_reduce( input: tensor<logical>, axes: integer[] ) -> ( output: tensor<logical> );\n    fragment all_reduce( input: tensor<logical>, axes: integer[] ) -> ( output: tensor<logical> );\n\n    fragment mean_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<scalar> )\n    {\n        
output = sum_reduce(input, axes = axes, normalize = true);\n    }\n\n    fragment moments( input: tensor<scalar>, axes: integer[] ) -> ( mean: tensor<scalar>, variance: tensor<scalar> )\n    {\n        mean = mean_reduce(input, axes = axes);\n        variance = mean_reduce(sqr(input - mean), axes = axes);\n    }\n\n\n    # activation functions\n\n    fragment relu( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = max(x, 0.0);\n    }\n\n    fragment sigmoid( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = 1.0 / (1.0 + exp(-x));\n    }\n\n    fragment softabs( x: tensor<scalar>, epsilon: scalar ) -> ( y: tensor<scalar> )\n    {\n        y = sqrt(sqr(x) + epsilon);\n    }\n\n    fragment softmax( x: tensor<scalar>, axes: integer[] = [1] ) -> ( y: tensor<scalar> )\n    {\n        m = max_reduce(x, axes = axes);\n        e = exp(x - m);\n        y = e / sum_reduce(e, axes = axes);\n    }\n\n    fragment softplus( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = log(exp(x) + 1.0);\n    }\n\n    fragment elu( x: tensor<scalar>, alpha: scalar = 1.0 ) -> ( y: tensor<scalar> )\n    {\n        y = select(x < 0.0, alpha * (exp(x) - 1.0), x);\n    }\n    \n    fragment selu( x: tensor<scalar>, alpha: scalar = 1.67326319, lambda: scalar = 1.05070102 ) -> ( y: tensor<scalar> )\n    {\n        y = lambda * select(x < 0.0, alpha * (exp(x) - 1.0), x);\n    }\n\n    fragment gelu( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        # the exact definition of gelu is x * Phi(x) where Phi(x) is the\n        # CDF of the standard normal distribution, which can be approximated\n        # for example by sigmoid(1.702 * x)\n\n        y = x * sigmoid(1.702 * x);\n    }\n\n    fragment silu( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = x * sigmoid(x);\n    }\n\n    fragment prelu( x: tensor<scalar>, alpha: tensor<scalar> ) -> ( y: tensor<scalar> )\n    {\n        y = select(x < 0.0, alpha * x, x);\n    }\n\n    
fragment leaky_relu( x: tensor<scalar>, alpha: scalar ) -> ( y: tensor<scalar> )\n    {\n        y = prelu(x, alpha = alpha);\n    }\n    \n    \n    )STDLIB\" /* break the raw literal because of max length limit */ R\"STDLIB(\n\n\n    # pooling operations\n\n    fragment max_pool_with_index(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [] )\n    -> ( output: tensor<scalar>, index: tensor<integer> )\n    {\n        index = argmax_pool(input, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n        output = sample(input, index, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n    }\n\n    fragment max_pool(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [] )\n    -> ( output: tensor<scalar> )\n    {\n        output, index = max_pool_with_index(input, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n    }\n\n    fragment avg_pool(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [] )\n    -> ( output: tensor<scalar> )\n    {\n        output = box(input, size = size, border = border, padding = padding, stride = stride, dilation = dilation, normalize = true);\n    }\n\n    fragment rms_pool(\n        input: tensor<scalar>,\n        size: integer[],\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [] )\n    -> ( output: tensor<scalar> )\n    {\n        output = sqrt(avg_pool(sqr(input), size = 
size, border = border, padding = padding, stride = stride, dilation = dilation));\n    }\n\n\n    # linear operations\n\n    fragment linear(\n        input: tensor<scalar>,\n        filter: tensor<scalar>,\n        bias: tensor<scalar> = 0.0 )\n    -> ( output: tensor<scalar> )\n    {\n        output = matmul(input, filter, transposeB = true) + bias;\n    }\n\n    fragment separable_conv(\n        input: tensor<scalar>,\n        plane_filter: tensor<scalar>,\n        point_filter: tensor<scalar>,\n        bias: tensor<scalar> = 0.0,\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [],\n        groups: integer = 1 )\n    -> ( output: tensor<scalar> )\n    {\n        filtered = conv(input, plane_filter, border = border, padding = padding,\n                        stride = stride, dilation = dilation, groups = 0);\n        output = conv(filtered, point_filter, bias, groups = groups);\n    }\n\n    fragment separable_deconv(\n        input: tensor<scalar>,\n        plane_filter: tensor<scalar>,\n        point_filter: tensor<scalar>,\n        bias: tensor<scalar> = 0.0,\n        border: string = 'constant',\n        padding: (integer,integer)[] = [],\n        stride: integer[] = [],\n        dilation: integer[] = [],\n        output_shape: integer[] = [],\n        groups: integer = 1 )\n    -> ( output: tensor<scalar> )\n    {\n        filtered = deconv(input, point_filter, groups = groups);\n        output = deconv(filtered, plane_filter, bias, border = border, padding = padding,\n                        stride = stride, dilation = dilation, output_shape = output_shape, groups = 0);\n    }\n\n\n    # normalization operations\n\n    fragment local_response_normalization(\n        input: tensor<scalar>,\n        size: integer[],\n        alpha: scalar = 1.0,\n        beta: scalar = 0.5,\n        bias: scalar = 1.0 )\n    -> ( output: tensor<scalar> )\n    {\n        sigma = 
bias + alpha * box(sqr(input), size = size, normalize = true);\n        output = input / (sigma ^ beta);\n    }\n\n    fragment local_mean_normalization( input: tensor<scalar>, size: integer[] ) -> ( output: tensor<scalar> )\n    {\n        mean = box(input, size = size, normalize = true);\n        output = sub(input, mean);\n    }\n\n    fragment local_variance_normalization( input: tensor<scalar>, size: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n    {\n        sigma = sqrt(box(sqr(input), size = size, normalize = true));\n        output = input / max(sigma + bias, epsilon);\n    }\n\n    fragment local_contrast_normalization( input: tensor<scalar>, size: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n    {\n        centered = local_mean_normalization(input, size = size);\n        output = local_variance_normalization(centered, size = size, bias = bias, epsilon = epsilon);\n    }\n\n    fragment l1_normalization( input: tensor<scalar>, axes: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n    {\n        sigma = sum_reduce(abs(input), axes = axes);\n        output = input / max(sigma + bias, epsilon);\n    }\n\n    fragment l2_normalization( input: tensor<scalar>, axes: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n    {\n        sigma = sqrt(sum_reduce(sqr(input), axes = axes));\n        output = input / max(sigma + bias, epsilon);\n    }\n\n    fragment batch_normalization( input: tensor<scalar>, mean: tensor<scalar>, variance: tensor<scalar>, offset: tensor<scalar>, scale: tensor<scalar>, epsilon: scalar )\n    -> ( output: tensor<scalar> )\n    {\n        output = offset + scale * (input - mean) / sqrt(variance + epsilon);\n    }\n    \n    \n    )STDLIB\" /* break the raw literal because of max length limit */ R\"STDLIB(\n\n\n    # roi operations\n\n    fragment avg_roi_pool(\n        input: 
tensor<scalar>,\n        rois: tensor<scalar>,\n        batch_index: tensor<integer>,\n        output_size: integer[] )\n    -> ( output: tensor<scalar> );\n\n    fragment max_roi_pool(\n        input: tensor<scalar>,\n        rois: tensor<scalar>,\n        batch_index: tensor<integer>,\n        output_size: integer[] )\n    -> ( output: tensor<scalar> );\n\n    fragment roi_resample(\n        input: tensor<scalar>,\n        rois: tensor<scalar>,\n        batch_index: tensor<integer>,\n        output_size: integer[],\n        method: string = 'symmetric' )\n    -> ( output: tensor<scalar> );\n\n    fragment avg_roi_align(\n        input: tensor<scalar>,\n        rois: tensor<scalar>,\n        batch_index: tensor<integer>,\n        output_size: integer[],\n        sampling_rate: integer[],\n        resize_method: string = 'symmetric' )\n    -> ( output: tensor<scalar> )\n    {\n        size = [for i in range_of(output_size) yield output_size[i] * sampling_rate[i]];\n        resized = roi_resample(input, rois, batch_index, output_size = size,\n                             method = resize_method);\n        output = avg_pool(resized, size = sampling_rate, stride = sampling_rate);\n    }\n\n    fragment max_roi_align(\n        input: tensor<scalar>,\n        rois: tensor<scalar>,\n        batch_index: tensor<integer>,\n        output_size: integer[],\n        sampling_rate: integer[],\n        resize_method: string = 'symmetric' )\n    -> ( output: tensor<scalar> )\n    {\n        size = [for i in range_of(output_size) yield output_size[i] * sampling_rate[i]];\n        resized = roi_resample(input, rois, batch_index, output_size = size,\n                             method = resize_method);\n        output = max_pool(resized, size = sampling_rate, stride = sampling_rate);\n    }\n\n\n    # quantization operations\n\n    fragment min_max_linear_quantize(\n        x: tensor<scalar>,\n        min: tensor<scalar>,\n        max: tensor<scalar>,\n        bits: integer,\n      
  signed: logical,\n        symmetric: logical )\n    -> ( y: tensor<scalar> )\n    {\n        r = scalar(2 ^ bits - 1 - integer(signed && symmetric));\n        z = clamp(x, min, max);\n        p = scalar(2 ^ (bits - 1) - integer(symmetric) if signed else 0);\n        q = round((z - min) / (max - min) * r) - p;\n        y = (q + p) / r * (max - min) + min;\n    }\n\n    fragment zero_point_linear_quantize(\n        x: tensor<scalar>,\n        zero_point: tensor<integer>,\n        scale: tensor<scalar>,\n        bits: integer,\n        signed: logical,\n        symmetric: logical )\n    -> ( y: tensor<scalar> )\n    {\n        z = cast<scalar>(zero_point);\n        s = round(x / scale) + z;\n        r = scalar(2 ^ (bits - 1) - 1 if signed else 2 ^ bits - 1);\n        q = clamp(s, 0.0 if !signed else -r if symmetric else -r - 1.0, r);\n        y = (q - z) * scale;\n    }\n\n    fragment linear_quantize(\n        x: tensor<scalar>,\n        min: tensor<scalar>,\n        max: tensor<scalar>,\n        bits: integer )\n    -> ( y: tensor<scalar> )\n    {\n        y = min_max_linear_quantize(x, min = min, max = max, bits = bits,\n                                    signed = false, symmetric = false);\n    }\n\n    fragment logarithmic_quantize(\n        x: tensor<scalar>,\n        max: tensor<scalar>,\n        bits: integer )\n    -> ( y: tensor<scalar> )\n    {\n        m = ceil(log2(max));\n        r = scalar(2 ^ bits - 1);\n        q = round(clamp(log2(abs(x)), m - r, m));\n        y = sign(x) * 2.0 ^ q;\n    }\n\n\n    # misc operations\n\n    fragment copy_n<?>( x: tensor<?>, times: integer ) -> ( y: tensor<?>[] )\n    {\n        y = [x] * times;\n    }\n\n    fragment add_n( x: tensor<scalar>[] ) -> ( y: tensor<scalar> )\n    {\n        y = x[0] + add_n(x[1:]) if length_of(x) > 0 else constant(shape = [1], value = [0.0]);\n    }\n\n\n    )STDLIB\";\n\n\n    inline const char* stdlib_source()\n    {\n        return _stdlib_source<void>::text;\n    }\n\n}   // 
namespace nnef\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/flat/flat_parser.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_FLAT_PARSER_H_\n#define _NNEF_FLAT_PARSER_H_\n\n#include \"../common/prototype.h\"\n#include \"../common/dictionary.h\"\n#include \"../common/typeutils.h\"\n#include \"../common/parser.h\"\n#include \"../common/value.h\"\n#include \"../common/lexer.h\"\n#include \"../common/error.h\"\n#include \"stdlib_protos.h\"\n#include <cassert>\n\n\nnamespace nnef\n{\n\n    class FlatParser : public Parser\n    {\n    public:\n\n        typedef Error::Position Position;\n\n    public:\n\n        virtual void parse( std::istream& is, const char* filename, Callback& callback )\n        {\n            Lexer lexer(is, filename);\n            lexer.next();\n\n            auto version = readVersion(lexer);\n\n            callback.beginDocument(filename, version);\n\n            auto extensions = readExtensions(lexer, [&]( const std::string& ext )\n            {\n                return callback.handleExtension(ext);\n            });\n\n            static Dictionary<Prototype> prototypes = buildPrototypes();\n\n            parseGraph(lexer, prototypes, callback);\n\n            callback.endDocument(filename);\n        }\n\n    private:\n        \n        void parseGraph( Lexer& lexer, const Dictionary<Prototype>& prototypes, Callback& callback )\n        {\n            lexer.readToken(Lexer::Graph);\n            \n            const 
std::string name = lexer.string();\n            \n            lexer.readToken(Lexer::Identifier);\n            \n            auto params = parseIdentifiers<Param>(lexer);\n            \n            lexer.readToken(Lexer::Arrow);\n            \n            auto results = parseIdentifiers<Result>(lexer);\n            \n            const Prototype graph(name, params, results);\n\n            callback.beginGraph(graph, prototypes);\n            \n            lexer.readToken('{');\n            \n            Dictionary<Typename> dtypes;\n            \n            while ( lexer.token() != '}' )\n            {\n                parseAssignment(lexer, graph, prototypes, dtypes, callback);\n            }\n            \n            checkGraphParamsAssigned(graph, dtypes, lexer.position());\n            \n            lexer.readToken('}');\n            \n            callback.endGraph(graph, dtypes);\n\n            lexer.readToken(Lexer::Eof);\n        }\n        \n        template<typename T>\n        static std::vector<T> parseIdentifiers( Lexer& lexer )\n        {\n            std::vector<T> identifiers;\n            \n            lexer.readToken('(');\n            \n            do\n            {\n                const std::string id = lexer.string();\n                \n                lexer.readToken(Lexer::Identifier);\n                \n                identifiers.emplace_back(id, tensorType(Typename::Scalar));\n            }\n            while ( lexer.readIfToken(',') );\n            \n            lexer.readToken(')');\n            \n            return identifiers;\n        }\n\n        static void checkGraphParam( const Value& arg, const Prototype& graph, const std::string& target, const Position& position )\n        {\n            switch ( arg.kind() )\n            {\n                case Value::Identifier:\n                {\n                    if ( target == \"external\" )\n                    {\n                        if ( !graph.param(arg.identifier()) )\n          
              {\n                            throw Error(position, \"identifier '%s' assigned by operation 'external' must be a graph parameter\",\n                                             arg.identifier().c_str());\n                        }\n                    }\n                    else\n                    {\n                        if ( graph.param(arg.identifier()) )\n                        {\n                            throw Error(position, \"graph parameter '%s' can only be assigned by operation 'external'\",\n                                             arg.identifier().c_str());\n                        }\n                    }\n                    break;\n                }\n                case Value::Array:\n                case Value::Tuple:\n                {\n                    for ( size_t i = 0; i < arg.size(); ++i )\n                    {\n                        checkGraphParam(arg[i], graph, target, position);\n                    }\n                    break;\n                }\n                default:\n                {\n                    assert(false);\n                }\n            }\n        }\n        \n        static void checkGraphParamsAssigned( const Prototype& graph, const Dictionary<Typename>& declared, const Position& position )\n        {\n            for ( size_t i = 0; i < graph.paramCount(); ++i )\n            {\n                auto& param = graph.param(i);\n                if ( !declared.count(param.name()) )\n                {\n                    throw Error(position, \"graph parameter '%s' not assigned\", param.name().c_str());\n                }\n            }\n            \n            for ( size_t i = 0; i < graph.resultCount(); ++i )\n            {\n                auto& result = graph.result(i);\n                if ( !declared.count(result.name()) )\n                {\n                    throw Error(position, \"graph result '%s' not assigned\", result.name().c_str());\n                }\n            }\n    
    }\n\n    private:\n\n        void parseAssignment( Lexer& lexer, const Prototype& graph, const Dictionary<Prototype>& prototypes,\n                             Dictionary<Typename>& dtypes, Callback& callback )\n        {\n            auto position = lexer.position();\n\n            const Value results = parseTuple(lexer, nullptr, false, true);\n\n            lexer.readToken('=');\n            \n            const std::string target = lexer.string();\n\n            lexer.readToken(Lexer::Identifier);\n\n            auto it = prototypes.find(target);\n            if ( it == prototypes.end() )\n            {\n                throw Error(lexer.position(), \"undefined operation '%s'\", target.c_str());\n            }\n            \n            auto& proto = it->second;\n            \n            checkGraphParam(results, graph, proto.name(), position);\n            \n            const PrimitiveType* dataType = proto.genericParamDefault();\n            if ( lexer.readIfToken('<') )\n            {\n                if ( lexer.token() == '?' 
)\n                {\n                    throw Error(lexer.position(), \"expected type name\");\n                }\n                \n                dataType = primitiveType(getTypename(lexer));\n                lexer.next();\n                \n                lexer.readToken('>');\n            }\n\n            lexer.readToken('(');\n\n            Dictionary<Value> args = parseArguments(proto, lexer, &dtypes, dataType, true, false, false);\n\n            lexer.readToken(')');\n            lexer.readToken(';');\n\n            if ( results.size() != proto.resultCount() )\n            {\n                throw Error(position, \"left-hand-side item count must match result count of operation (%d)\",\n                                 (int)proto.resultCount());\n            }\n\n            if ( proto.isGeneric() && !dataType && !deduceDataType(proto, args, dtypes, dataType, position) )\n            {\n                throw Error(position, \"could not deduce generic data-type\");\n            }\n\n            if ( dataType )\n            {\n                args[\"?\"] = Value::string(dataType->toString());\n            }\n            \n            for ( size_t i = 0; i < proto.resultCount(); ++i )\n            {\n                auto& result = proto.result(i);\n                auto type = dataType ? 
bindDataType(result.type(), dataType) : result.type();\n\n                declare(results[i], type, dtypes, position);\n\n                args.emplace(result.name(), std::move(results[i]));\n            }\n            \n            callback.operation(proto, args, dtypes);\n        }\n\n    protected:\n\n        static Dictionary<Value> parseArguments( const Prototype& proto, Lexer& lexer, const Dictionary<Typename>* decls,\n                                                const PrimitiveType* dataType, const bool allowIdentifier, const bool allowArrayToTensor,\n                                                bool expectNamed, const Param* exclusion = nullptr )\n        {\n            Dictionary<Value> args;\n\n            do\n            {\n                auto position = lexer.position();\n\n                if ( args.size() >= proto.paramCount() )\n                {\n                    throw Error(position, \"too many arguments; definition of '%s' has only %d parameters\",\n                                proto.name().c_str(), (int)proto.paramCount());\n                }\n\n                const Param* param = nullptr;\n                Value arg = Value::none();\n\n                bool named = false;\n                if ( lexer.token() == Lexer::Identifier )\n                {\n                    auto string = lexer.string();\n                    lexer.next();\n\n                    if ( lexer.token() == '=' )\n                    {\n                        lexer.next();\n\n                        param = proto.param(string);\n                        if ( !param )\n                        {\n                            throw Error(position, \"operation '%s' has no parameter called '%s'\",\n                                        proto.name().c_str(), string.c_str());\n                        }\n\n                        arg = parseValue(lexer, decls, true, allowIdentifier);\n                        named = true;\n                    }\n                    else if 
( allowIdentifier )\n                    {\n                        param = &proto.param(args.size());\n                        arg = makeIdentifier(string, position, decls);\n                    }\n                    else\n                    {\n                        throw Error(position, \"token 'identifier' not allowed in this context\");\n                    }\n                }\n                else\n                {\n                    param = &proto.param(args.size());\n                    arg = parseValue(lexer, decls, true, allowIdentifier);\n                }\n\n                auto paramType = dataType ? bindDataType(param->type(), dataType) : param->type();\n                auto argType = typeOf(arg, *decls);\n                if ( !isCastable(argType, paramType, true, allowArrayToTensor) )\n                {\n                    throw Error(position, \"argument of type '%s' cannot be cast to type '%s' for parameter '%s'\",\n                                argType->toString().c_str(), paramType->toString().c_str(), param->name().c_str());\n                }\n\n                expectNamed |= named || paramType->isAttribute();\n                if ( expectNamed && !named )\n                {\n                    throw Error(position, \"expected named argument\");\n                }\n\n                if ( args.count(param->name()) )\n                {\n                    throw Error(position, \"duplicate arguments: parameter '%s' already assigned\",\n                                param->name().c_str());\n                }\n                if ( param == exclusion )\n                {\n                    throw Error(lexer.position(), \"argument '%s' of operation '%s' must not be provided in this context\",\n                                param->name().c_str(), proto.name().c_str());\n                }\n                if ( param->type()->kind() == Type::Tensor && isJaggedArray(arg) )\n                {\n                    throw 
Error(lexer.position(), \"tensor literal argument for argument '%s' must not be jagged nested array\",\n                                param->name().c_str());\n                }\n\n                args.emplace(param->name(), std::move(arg));\n            }\n            while ( lexer.readIfToken(',') );\n\n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n\n                if ( &param != exclusion && !args.count(param.name()) )\n                {\n                    if ( param.defaultValue() )\n                    {\n                        if ( param.type()->isGeneric() )\n                        {\n                            auto valueType = typeOf(param.defaultValue(), *decls);\n                            auto paramType = dataType ? bindDataType(param.type(), dataType) : param.type();\n                            if ( !isCastable(valueType, paramType, true, allowArrayToTensor) )\n                            {\n                                throw Error(lexer.position(), \"default value type '%s' cannot be cast to type '%s' for parameter '%s'\",\n                                            valueType->toString().c_str(), paramType->toString().c_str(), param.name().c_str());\n                            }\n                        }\n                        args[param.name()] = param.defaultValue();\n                    }\n                    else\n                    {\n                        throw Error(lexer.position(), \"missing argument for operation '%s'; parameter '%s' not assigned\",\n                                    proto.name().c_str(), param.name().c_str());\n                    }\n                }\n            }\n\n            return args;\n        }\n        \n    private:\n        \n        static bool checkNestedArrayShape( const Value& value, const int* shape, const size_t rank )\n        {\n            if ( rank == 0 )\n            {\n                return 
value.kind() != Value::Array;\n            }\n            else if ( value.kind() != Value::Array || value.size() != (size_t)*shape )\n            {\n                return false;\n            }\n            for ( size_t i = 0; i < value.size(); ++i )\n            {\n                if ( !checkNestedArrayShape(value[i], shape + 1, rank - 1) )\n                {\n                    return false;\n                }\n            }\n            return true;\n        }\n        \n        static bool isJaggedArray( const Value& value )\n        {\n            auto shape = nestedArrayShape(value);\n            return !checkNestedArrayShape(value, shape.data(), shape.size());\n        }\n\n    private:\n\n        static void declare( const Value& arg, const Type* type, Dictionary<Typename>& dtypes, const Position& position )\n        {\n            switch ( arg.kind() )\n            {\n                case Value::Identifier:\n                {\n                    if ( type->kind() != Type::Tensor )\n                    {\n                        throw Error(position, \"cannot assign result of type '%s' to tensor identifier\", type->toString().c_str());\n                    }\n                    const std::string& id = arg.identifier();\n                    if ( dtypes.count(id) )\n                    {\n                        throw Error(position, \"identifier '%s' already declared\", id.c_str());\n                    }\n                    auto dataType = static_cast<const TensorType*>(type)->dataType();\n                    assert(dataType->kind() == Type::Primitive);\n                    dtypes.emplace(id, static_cast<const PrimitiveType*>(dataType)->name());\n                    break;\n                }\n                case Value::Array:\n                {\n                    if ( type->kind() != Type::Array )\n                    {\n                        throw Error(position, \"cannot assign result of type '%s' to array\", type->toString().c_str());\n          
          }\n                    auto arrayType = static_cast<const ArrayType*>(type);\n                    for ( size_t i = 0; i < arg.size(); ++i )\n                    {\n                        declare(arg[i], arrayType->itemType(), dtypes, position);\n                    }\n                    break;\n                }\n                case Value::Tuple:\n                {\n                    if ( type->kind() != Type::Tuple )\n                    {\n                        throw Error(position, \"cannot assign result of type '%s' to tuple\", type->toString().c_str());\n                    }\n                    auto tupleType = static_cast<const TupleType*>(type);\n                    for ( size_t i = 0; i < arg.size(); ++i )\n                    {\n                        declare(arg[i], tupleType->itemType(i), dtypes, position);\n                    }\n                    break;\n                }\n                default:\n                {\n                    throw Error(position, \"literal expression not allowed in this context\");\n                }\n            }\n        }\n\n    private:\n\n        static Value parseValue( Lexer& lexer, const Dictionary<Typename>* decls, bool allowLiteral, bool allowIdentifier )\n        {\n            switch ( lexer.token() )\n            {\n                case Lexer::True:\n                case Lexer::False:\n                {\n                    if ( allowLiteral )\n                    {\n                        return parseLogical(lexer);\n                    }\n                    break;\n                }\n                case '-':\n                case Lexer::Decimal:\n                case Lexer::Fractional:\n                {\n                    if ( allowLiteral )\n                    {\n                        return parseNumber(lexer);\n                    }\n                    break;\n                }\n                case Lexer::Characters:\n                {\n                    if ( allowLiteral 
)\n                    {\n                        return parseString(lexer);\n                    }\n                    break;\n                }\n                case '[':\n                {\n                    return parseArray(lexer, decls, allowLiteral, allowIdentifier);\n                }\n                case '(':\n                {\n                    return parseTuple(lexer, decls, allowLiteral, allowIdentifier);\n                }\n                case Lexer::Identifier:\n                {\n                    if ( allowIdentifier )\n                    {\n                        return parseIdentifier(lexer, decls);\n                    }\n                    break;\n                }\n                default:\n                {\n                    throw Error(lexer.position(), \"unexpected token '%s'\", Lexer::tokenString(lexer.token()).c_str());\n                }\n            }\n            throw Error(lexer.position(), \"token '%s' not allowed in this context\", Lexer::tokenString(lexer.token()).c_str());\n        }\n        \n        static Value parseNumber( Lexer& lexer )\n        {\n            bool negative = lexer.token() == '-';\n            if ( negative )\n            {\n                lexer.next();\n            }\n            if ( lexer.token() == Lexer::Decimal )\n            {\n                return parseInteger(lexer, negative);\n            }\n            else if ( lexer.token() == Lexer::Fractional )\n            {\n                return parseScalar(lexer, negative);\n            }\n            else\n            {\n                throw Error(lexer.position(), \"expected number\");\n            }\n        }\n\n        static Value parseInteger( Lexer& lexer, bool negative )\n        {\n            auto value = getIntegerValue(lexer);\n            lexer.next();\n            return Value::integer(negative ? 
-value : value);\n        }\n\n        static Value parseScalar( Lexer& lexer, bool negative )\n        {\n            auto value = getScalarValue(lexer);\n            lexer.next();\n            return Value::scalar(negative ? -value : value);\n        }\n\n        static Value parseLogical( Lexer& lexer )\n        {\n            auto value = lexer.token() == Lexer::True;\n            lexer.next();\n            return Value::logical(value);\n        }\n\n        static Value parseString( Lexer& lexer )\n        {\n            auto value = lexer.string();\n            lexer.next();\n            return Value::string(value);\n        }\n\n        static Value parseIdentifier( Lexer& lexer, const Dictionary<Typename>* decls )\n        {\n            auto value = makeIdentifier(lexer.string(), lexer.position(), decls);\n            lexer.next();\n            return value;\n        }\n\n        static Value makeIdentifier( const std::string& name, const Position& position, const Dictionary<Typename>* decls )\n        {\n            if ( decls && !decls->count(name) )\n            {\n                throw Error(position, \"undeclared identifier '%s'\", name.c_str());\n            }\n            return Value::identifier(name);\n        }\n\n        static Value parseArray( Lexer& lexer, const Dictionary<Typename>* decls, bool allowLiteral, bool allowIdentifier )\n        {\n            lexer.readToken('[');\n\n            std::vector<Value> items;\n\n            if ( lexer.token() != ']' )\n            {\n                do\n                {\n                    auto item = parseValue(lexer, decls, allowLiteral, allowIdentifier);\n                    items.push_back(std::move(item));\n                }\n                while ( lexer.readIfToken(',') );\n            }\n\n            lexer.readToken(']');\n\n            return Value::array(std::move(items));\n        }\n\n        static Value parseTuple( Lexer& lexer, const Dictionary<Typename>* decls, bool allowLiteral, 
bool allowIdentifier )\n        {\n            std::vector<Value> items;\n\n            bool parenthesized = lexer.token() == '(';\n            if ( parenthesized )\n            {\n                lexer.next();\n\n                auto first = parseValue(lexer, decls, allowLiteral, allowIdentifier);\n                lexer.readToken(',');\n\n                items.push_back(first);\n            }\n\n            do\n            {\n                auto item = parseValue(lexer, decls, allowLiteral, allowIdentifier);\n                items.push_back(std::move(item));\n            }\n            while ( lexer.readIfToken(',') );\n\n            if ( parenthesized )\n            {\n                lexer.readToken(')');\n            }\n\n            return Value::tuple(std::move(items));\n        }\n        \n    private:\n\n        static const Type* typeOf( const Value& value, const Dictionary<Typename>& declared )\n        {\n            switch ( value.kind() )\n            {\n                case Value::Integer:\n                {\n                    return primitiveType(Typename::Integer);\n                }\n                case Value::Scalar:\n                {\n                    return primitiveType(Typename::Scalar);\n                }\n                case Value::Logical:\n                {\n                    return primitiveType(Typename::Logical);\n                }\n                case Value::String:\n                {\n                    return primitiveType(Typename::String);\n                }\n                case Value::Identifier:\n                {\n                    return tensorType(declared.at(value.identifier()));\n                }\n                case Value::Array:\n                {\n                    auto itemType = value.size() ? 
typeOf(value[0], declared) : nullptr;\n                    return arrayType(itemType);\n                }\n                case Value::Tuple:\n                {\n                    std::vector<const Type*> itemTypes(value.size());\n                    for ( size_t i = 0; i < value.size(); ++i )\n                    {\n                        itemTypes[i] = typeOf(value[i], declared);\n                    }\n                    return tupleType(itemTypes);\n                }\n                case Value::None:\n                {\n                    return nullptr;\n                }\n            }\n            assert(false);\n            return nullptr;\n        }\n\n        static bool deduceDataType( const Prototype& proto, const Dictionary<Value>& args, const Dictionary<Typename>& declared,\n                                   const PrimitiveType*& dataType, const Position& position )\n        {\n            Dictionary<const Type*> types;\n            for ( auto& arg : args )\n            {\n                types[arg.first] = typeOf(arg.second, declared);\n            }\n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n                if ( !types.count(param.name()) )\n                {\n                    assert(param.defaultValue());\n                    types[param.name()] = typeOf(param.defaultValue(), declared);\n                }\n            }\n\n            try\n            {\n                return nnef::deduceDataType(proto, types, dataType);\n            }\n            catch ( std::pair<Typename,Typename> e )\n            {\n                throw Error(position, \"could not deduce data-type: ambiguous candidates '%s' vs '%s'\", toString(e.first), toString(e.second));\n            }\n        }\n        \n        static Dictionary<Prototype> buildPrototypes()\n        {\n            static auto stdlibPrototypes = nnef::stdlibPrototypes();\n            \n            
Dictionary<Prototype> prototypes;\n            for ( auto& proto : stdlibPrototypes )\n            {\n                prototypes.emplace(proto.name(), std::move(proto));\n            }\n            return prototypes;\n        }\n    };\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/flat/quant_parser.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_QUANTIZATION_H_\n#define _NNEF_QUANTIZATION_H_\n\n#include \"../common/lexer.h\"\n#include \"../common/error.h\"\n#include \"../common/prototype.h\"\n#include \"../common/dictionary.h\"\n#include \"flat_parser.h\"\n#include <iostream>\n#include <sstream>\n\n\nnamespace nnef\n{\n\n    class QuantParser : public FlatParser\n    {\n    public:\n\n        static Dictionary<Dictionary<Value>> parse( std::istream& is, const char* filename, const Dictionary<Prototype>& prototypes )\n        {\n            Lexer lexer(is, filename);\n            lexer.next();\n\n            Dictionary<Dictionary<Value>> quantization;\n\n            for ( unsigned line = 0; lexer.token() != Lexer::Eof; ++line )\n            {\n                const std::string tensor = lexer.string();\n                if ( quantization.count(tensor) )\n                {\n                    throw Error(lexer.position(), \"duplicate quantization entries for tensor '%s'\", tensor.c_str());\n                }\n\n                lexer.readToken(Lexer::Characters);\n                lexer.readToken(':');\n\n                auto args = parseInvocation(lexer, prototypes);\n\n                quantization.emplace(tensor, std::move(args));\n            }\n\n            return quantization;\n        }\n\n    private:\n\n        static Dictionary<Value> parseInvocation( 
Lexer& lexer, const Dictionary<Prototype>& prototypes )\n        {\n            Position position = lexer.position();\n\n            const std::string op = lexer.string();\n            lexer.readToken(Lexer::Identifier);\n\n            auto it = prototypes.find(op);\n            if ( it == prototypes.end() )\n            {\n                throw Error(position, \"undefined quantization operation '%s'\", op.c_str());\n            }\n\n            auto& proto = it->second;\n            if ( !proto.paramCount() )\n            {\n                throw Error(position, \"quantization operation must have at least one parameter\");\n            }\n            if ( proto.param(0).type()->kind() != Type::Tensor )\n            {\n                throw Error(position, \"first parameter of quantization operation must be of type tensor\");\n            }\n\n            lexer.readToken('(');\n\n            Dictionary<Value> args = parseArguments(proto, lexer, nullptr, nullptr, false, true, true, &proto.param(0));\n\n            lexer.readToken(')');\n            lexer.readToken(';');\n\n            args[\"op-name\"] = Value::string(op);\n\n            return args;\n        }\n    };\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/flat/stdlib_protos.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_STDLIB_PROTOS_H_\n#define _NNEF_STDLIB_PROTOS_H_\n\n#include \"../common/value.h\"\n#include \"../common/typespec.h\"\n#include \"../common/prototype.h\"\n#include \"../common/dictionary.h\"\n\n\nnamespace nnef\n{\n\n    static std::vector<Prototype> stdlibPrototypes()\n    {\n        static const PrimitiveType* Scalar = primitiveType(Typename::Scalar);\n        static const PrimitiveType* Integer = primitiveType(Typename::Integer);\n        static const PrimitiveType* Logical = primitiveType(Typename::Logical);\n        static const PrimitiveType* String = primitiveType(Typename::String);\n        static const PrimitiveType* Generic = primitiveType(Typename::Generic);\n        \n        static const Type* ScalarTensor = tensorType(Typename::Scalar);\n        static const Type* IntegerTensor = tensorType(Typename::Integer);\n        static const Type* LogicalTensor = tensorType(Typename::Logical);\n        static const Type* GenericTensor = tensorType(Typename::Generic);\n        static const Type* TypelessTensor = tensorType();\n\n        static const Type* Integers = arrayType(Integer);\n        static const Type* Generics = arrayType(Generic);\n        static const Type* Tensors = arrayType(ScalarTensor);\n        static const Type* GenericTensors = arrayType(GenericTensor);\n        static const Type* IntegerPair 
= tupleType({ Integer, Integer });\n        static const Type* IntegerPairs = arrayType(IntegerPair);\n\n        static const Value ScalarZero = Value::scalar(0.0);\n        static const Value ScalarOne = Value::scalar(1.0);\n        static const Value ScalarHalf = Value::scalar(0.5);\n\n        static const Value IntegerMinusOne = Value::integer(-1);\n        static const Value IntegerZero = Value::integer(0);\n        static const Value IntegerOne = Value::integer(1);\n\n        static const Value LogicalFalse = Value::logical(false);\n        static const Value LogicalTrue = Value::logical(true);\n\n        static const Value StringConstant = Value::string(\"constant\");\n        static const Value StringSymmetric = Value::string(\"symmetric\");\n        static const Value StringReplicate = Value::string(\"replicate\");\n\n        static const Value EmptyArray = Value::array({});\n        static const Value IntegersOne = Value::array({ IntegerOne });\n\n        \n        static const std::vector<Prototype> prototypes =\n        {\n            Prototype(\"external\", {\n                Param(\"shape\", Integers),\n            }, { Result(\"output\", GenericTensor) }, Scalar),\n            \n            Prototype(\"constant\", {\n                Param(\"shape\", Integers),\n                Param(\"value\", Generics),\n            }, { Result(\"output\", GenericTensor) }, Scalar),\n\n            Prototype(\"variable\", {\n                Param(\"shape\", Integers),\n                Param(\"label\", String),\n            }, { Result(\"output\", GenericTensor) }, Scalar),\n\n            Prototype(\"update\", {\n                Param(\"variable\", GenericTensor),\n                Param(\"value\", GenericTensor),\n            }, { Result(\"result\", GenericTensor) }),\n\n\n            Prototype(\"reshape\", {\n                Param(\"input\", GenericTensor),\n                Param(\"shape\", Integers),\n                Param(\"axis_start\", Integer, IntegerZero),\n     
           Param(\"axis_count\", Integer, IntegerMinusOne),\n            }, { Result(\"output\", GenericTensor) }),\n\n            Prototype(\"transpose\", {\n                Param(\"input\", GenericTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", GenericTensor) }),\n\n            Prototype(\"concat\", {\n                Param(\"values\", GenericTensors),\n                Param(\"axis\", Integer),\n            }, { Result(\"value\", GenericTensor) }),\n\n            Prototype(\"split\", {\n                Param(\"value\", GenericTensor),\n                Param(\"axis\", Integer),\n                Param(\"ratios\", Integers),\n            }, { Result(\"values\", GenericTensors) }),\n\n            Prototype(\"slice\", {\n                Param(\"input\", GenericTensor),\n                Param(\"axes\", Integers),\n                Param(\"begin\", Integers),\n                Param(\"end\", Integers),\n                Param(\"stride\", Integers, EmptyArray),\n            }, { Result(\"output\", GenericTensor) }),\n\n            Prototype(\"stack\", {\n                Param(\"values\", GenericTensors),\n                Param(\"axis\", Integer),\n            }, { Result(\"value\", GenericTensor) }),\n\n            Prototype(\"unstack\", {\n                Param(\"value\", GenericTensor),\n                Param(\"axis\", Integer),\n            }, { Result(\"values\", GenericTensors) }),\n\n            Prototype(\"squeeze\", {\n                Param(\"input\", GenericTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", GenericTensor) }),\n\n            Prototype(\"unsqueeze\", {\n                Param(\"input\", GenericTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", GenericTensor) }),\n\n            Prototype(\"pad\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"padding\", IntegerPairs),\n                Param(\"border\", 
String, StringConstant),\n                Param(\"value\", Scalar, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"tile\", {\n                Param(\"input\", GenericTensor),\n                Param(\"repeats\", Integers),\n            }, { Result(\"output\", GenericTensor) }),\n            \n            Prototype(\"gather\", {\n                Param(\"input\", GenericTensor),\n                Param(\"indices\", IntegerTensor),\n                Param(\"axis\", Integer, IntegerZero),\n            }, { Result(\"output\", GenericTensor) }),\n            \n            Prototype(\"cast\", {\n                Param(\"input\", TypelessTensor),\n            }, { Result(\"output\", GenericTensor) }),\n\n            Prototype(\"add\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", ScalarTensor) }),\n\n            Prototype(\"sub\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", ScalarTensor) }),\n\n            Prototype(\"mul\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", ScalarTensor) }),\n\n            Prototype(\"div\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", ScalarTensor) }),\n\n            Prototype(\"pow\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", ScalarTensor) }),\n            \n            Prototype(\"min\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", ScalarTensor) }),\n            \n            Prototype(\"max\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", 
ScalarTensor) }),\n            \n            Prototype(\"lt\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"le\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"gt\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"ge\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"eq\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"ne\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"y\", ScalarTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"and\", {\n                Param(\"x\", LogicalTensor),\n                Param(\"y\", LogicalTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            Prototype(\"or\", {\n                Param(\"x\", LogicalTensor),\n                Param(\"y\", LogicalTensor)\n            }, { Result(\"z\", LogicalTensor) }),\n            \n            \n            Prototype(\"select\", {\n                Param(\"condition\", LogicalTensor),\n                Param(\"true_value\", GenericTensor),\n                Param(\"false_value\", GenericTensor),\n            }, { Result(\"output\", GenericTensor) }),\n            \n            \n            Prototype(\"clamp\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"a\", ScalarTensor),\n                Param(\"b\", 
ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            \n            Prototype(\"copy\", {\n                Param(\"x\", GenericTensor),\n            }, { Result(\"y\", GenericTensor) }),\n            \n            Prototype(\"neg\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"rcp\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"exp\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"log\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"sin\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"cos\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"tan\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"asin\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"acos\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"atan\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"sinh\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"cosh\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"tanh\", {\n    
            Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"asinh\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"acosh\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"atanh\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"abs\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"sign\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"floor\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"ceil\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"round\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"sqr\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"sqrt\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"rsqr\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"rsqrt\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"log2\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            
Prototype(\"not\", {\n                Param(\"x\", LogicalTensor),\n            }, { Result(\"y\", LogicalTensor) }),\n            \n            \n            Prototype(\"relu\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"sigmoid\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"elu\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"alpha\", ScalarTensor, ScalarOne),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"selu\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"alpha\", ScalarTensor, Value::scalar(1.67326319)),\n                Param(\"lambda\", ScalarTensor, Value::scalar(1.05070102)),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"gelu\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"silu\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"prelu\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"alpha\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n\n            Prototype(\"leaky_relu\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"alpha\", Scalar),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"softabs\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"epsilon\", Scalar),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"softplus\", {\n                Param(\"x\", ScalarTensor),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"softmax\", {\n                
Param(\"x\", ScalarTensor),\n                Param(\"axes\", Integers, IntegersOne),\n            }, { Result(\"y\", ScalarTensor) }),\n\n\n            Prototype(\"conv\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"filter\", ScalarTensor),\n                Param(\"bias\", ScalarTensor, ScalarZero),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"groups\", Integer, IntegerOne),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"deconv\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"filter\", ScalarTensor),\n                Param(\"bias\", ScalarTensor, ScalarZero),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"output_shape\", Integers, EmptyArray),\n                Param(\"groups\", Integer, IntegerOne),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"box\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"normalize\", Logical, LogicalFalse),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"debox\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                
Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"output_shape\", Integers, EmptyArray),\n                Param(\"normalize\", Logical, LogicalFalse),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"sample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"index\", IntegerTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"desample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"index\", IntegerTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"output_shape\", Integers, EmptyArray),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"max_pool\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"argmax_pool\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                
Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n            }, { Result(\"index\", IntegerTensor) }),\n            \n            Prototype(\"max_pool_with_index\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n            }, { Result(\"output\", ScalarTensor), Result(\"index\", IntegerTensor) }),\n            \n            Prototype(\"avg_pool\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"rms_pool\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            \n            Prototype(\"separable_conv\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"plane_filter\", ScalarTensor),\n                Param(\"point_filter\", ScalarTensor),\n                Param(\"bias\", ScalarTensor, ScalarZero),\n                Param(\"border\", String, StringConstant),\n       
         Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"groups\", Integer, IntegerOne),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"separable_deconv\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"plane_filter\", ScalarTensor),\n                Param(\"point_filter\", ScalarTensor),\n                Param(\"bias\", ScalarTensor, ScalarZero),\n                Param(\"border\", String, StringConstant),\n                Param(\"padding\", IntegerPairs, EmptyArray),\n                Param(\"stride\", Integers, EmptyArray),\n                Param(\"dilation\", Integers, EmptyArray),\n                Param(\"output_shape\", Integers, EmptyArray),\n                Param(\"groups\", Integer, IntegerOne),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            \n            Prototype(\"nearest_downsample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"factor\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"nearest_upsample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"factor\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"area_downsample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"factor\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"multilinear_upsample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"factor\", Integers),\n                Param(\"method\", String, StringSymmetric),\n                Param(\"border\", String, StringReplicate),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            \n  
          Prototype(\"local_response_normalization\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"alpha\", Scalar, ScalarOne),\n                Param(\"beta\", Scalar, ScalarHalf),\n                Param(\"bias\", Scalar, ScalarOne),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"local_mean_normalization\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"local_variance_normalization\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"bias\", Scalar, ScalarZero),\n                Param(\"epsilon\", Scalar, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"local_contrast_normalization\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"size\", Integers),\n                Param(\"bias\", Scalar, ScalarZero),\n                Param(\"epsilon\", Scalar, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"l1_normalization\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n                Param(\"bias\", Scalar, ScalarZero),\n                Param(\"epsilon\", Scalar, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"l2_normalization\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n                Param(\"bias\", Scalar, ScalarZero),\n                Param(\"epsilon\", Scalar, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"batch_normalization\", {\n                Param(\"input\", 
ScalarTensor),\n                Param(\"mean\", ScalarTensor),\n                Param(\"variance\", ScalarTensor),\n                Param(\"offset\", ScalarTensor, ScalarZero),\n                Param(\"scale\", ScalarTensor, ScalarOne),\n                Param(\"epsilon\", Scalar, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            \n            Prototype(\"sum_reduce\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n                Param(\"normalize\", Logical, LogicalFalse),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"min_reduce\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"max_reduce\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            Prototype(\"mean_reduce\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"argmax_reduce\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", IntegerTensor) }),\n\n            Prototype(\"argmin_reduce\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", IntegerTensor) }),\n            \n            Prototype(\"any_reduce\", {\n                Param(\"input\", LogicalTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"output\", LogicalTensor) }),\n            \n            Prototype(\"all_reduce\", {\n                Param(\"input\", LogicalTensor),\n                Param(\"axes\", Integers),\n            }, { 
Result(\"output\", LogicalTensor) }),\n            \n            Prototype(\"moments\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"axes\", Integers),\n            }, { Result(\"mean\", ScalarTensor), Result(\"variance\", ScalarTensor) }),\n\n\n            Prototype(\"max_roi_pool\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"rois\", ScalarTensor),\n                Param(\"batch_index\", IntegerTensor),\n                Param(\"output_size\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"avg_roi_pool\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"rois\", ScalarTensor),\n                Param(\"batch_index\", IntegerTensor),\n                Param(\"output_size\", Integers),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"roi_resample\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"rois\", ScalarTensor),\n                Param(\"batch_index\", IntegerTensor),\n                Param(\"output_size\", Integers),\n                Param(\"method\", String, StringSymmetric),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"max_roi_align\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"rois\", ScalarTensor),\n                Param(\"batch_index\", IntegerTensor),\n                Param(\"output_size\", Integers),\n                Param(\"sampling_rate\", Integers),\n                Param(\"resize_method\", String, StringSymmetric),\n            }, { Result(\"output\", ScalarTensor) }),\n\n            Prototype(\"avg_roi_align\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"rois\", ScalarTensor),\n                Param(\"batch_index\", IntegerTensor),\n                Param(\"output_size\", Integers),\n                Param(\"sampling_rate\", Integers),\n                
Param(\"resize_method\", String, StringSymmetric),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            \n            Prototype(\"matmul\", {\n                Param(\"A\", ScalarTensor),\n                Param(\"B\", ScalarTensor),\n                Param(\"transposeA\", Logical, LogicalFalse),\n                Param(\"transposeB\", Logical, LogicalFalse),\n            }, { Result(\"C\", ScalarTensor) }),\n            \n            Prototype(\"linear\", {\n                Param(\"input\", ScalarTensor),\n                Param(\"filter\", ScalarTensor),\n                Param(\"bias\", ScalarTensor, ScalarZero),\n            }, { Result(\"output\", ScalarTensor) }),\n            \n            \n            Prototype(\"add_n\", {\n                Param(\"x\", Tensors),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"copy_n\", {\n                Param(\"x\", GenericTensor),\n                Param(\"times\", Integer),\n            }, { Result(\"y\", GenericTensors) }),\n            \n            \n            Prototype(\"min_max_linear_quantize\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"min\", ScalarTensor),\n                Param(\"max\", ScalarTensor),\n                Param(\"bits\", Integer),\n                Param(\"signed\", Logical, LogicalTrue),\n                Param(\"symmetric\", Logical, LogicalFalse),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"zero_point_linear_quantize\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"zero_point\", IntegerTensor),\n                Param(\"scale\", ScalarTensor),\n                Param(\"bits\", Integer),\n                Param(\"signed\", Logical),\n                Param(\"symmetric\", Logical),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"linear_quantize\", {\n                Param(\"x\", ScalarTensor),\n     
           Param(\"min\", ScalarTensor),\n                Param(\"max\", ScalarTensor),\n                Param(\"bits\", Integer),\n            }, { Result(\"y\", ScalarTensor) }),\n            \n            Prototype(\"logarithmic_quantize\", {\n                Param(\"x\", ScalarTensor),\n                Param(\"max\", ScalarTensor),\n                Param(\"bits\", Integer),\n            }, { Result(\"y\", ScalarTensor) }),\n        };\n\n        return prototypes;\n    }\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/runtime/execution.h",
    "content": "/*\n* Copyright (c) 2017 The Khronos Group Inc.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\n#ifndef _NNEF_RUNTIME_EXECUTION_H_\n#define _NNEF_RUNTIME_EXECUTION_H_\n\n#include \"nnef.h\"\n#include \"operations.h\"\n#include <cassert>\n\n\n#define DISPATCH_BY_DTYPE(name) \\\n    inline void execute_##name( const Operation& op, TensorDict& tensors ) \\\n    { \\\n        if ( op.dtype == \"scalar\" ) _execute_##name<float>(op, tensors); \\\n        else if ( op.dtype == \"integer\" ) _execute_##name<int>(op, tensors); \\\n        else if ( op.dtype == \"logical\" ) _execute_##name<bool>(op, tensors); \\\n        else throw std::runtime_error(\"operation not implemented: \" + std::string(#name) + \"<string>\"); \\\n    } \\\n\n\nnamespace nnef { namespace rt\n{\n\n    inline Tensor _make_tensor( const size_t rank, const int shape[], const size_t item_bytes )\n    {\n        Tensor tensor;\n        tensor.shape.assign(shape, shape + rank);\n        tensor.data.resize(volume_of(tensor.shape) * item_bytes);\n        return tensor;\n    }\n\n    typedef std::map<std::string,Tensor> TensorDict;\n    typedef std::function<void( const Operation& op, TensorDict& tensors )> Executor;\n\n\n    template<typename T>\n    tensor_view<T> _tensor_view( const Tensor& tensor )\n    {\n        return tensor_view<T>{ tensor.shape.size(), volume_of(tensor.shape), tensor.shape.data(), (T*)tensor.data.data() };\n    }\n\n    template<typename T>\n    
tensor_view<T> _tensor_view( const T& value )\n    {\n        return tensor_view<T>{ 0, 1, nullptr, (T*)&value };\n    }\n\n    template<typename T>\n    tensor_view<T> _tensor_view( const Value& value, const TensorDict& tensors )\n    {\n        return value.kind() == Value::Identifier ? _tensor_view<T>(tensors.at(value.identifier())) : _tensor_view<T>(value.get<T>());\n    }\n\n    const std::string& _literal_dtype( const Value& value )\n    {\n        static const std::string dtypes[] = { \"\", \"integer\", \"scalar\", \"logical\", \"string\" };\n        return dtypes[(size_t)value.kind()];\n    }\n\n\n    inline void check_supported_rank( const std::string& op, const size_t rank, const size_t max )\n    {\n        if ( rank > max )\n        {\n            throw std::runtime_error(\"operation not implemented: \" + op + \" with rank = \" + std::to_string(rank));\n        }\n    }\n\n\n    inline void execute_external( const Operation& op, TensorDict& tensors )\n    {\n    }\n\n    inline void execute_variable( const Operation& op, TensorDict& tensors )\n    {\n    }\n\n    template<typename T>\n    inline void _execute_constant( const Operation& op, TensorDict& tensors )\n    {\n        auto& output = op.outputs.get(\"output\");\n        auto& value = op.attribs.get(\"value\");\n        \n        auto& tensor = tensors.at(output.identifier());\n        const size_t n = volume_of(tensor.shape);\n        auto data = (T*)tensor.data.data();\n        \n        if ( value.kind() == Value::Array )\n        {\n            if ( value.size() == n )\n            {\n                for ( size_t i = 0; i < n; ++i )\n                {\n                    data[i] = value[i].get<T>();\n                }\n            }\n            else\n            {\n                std::fill_n(data, n, value[0].scalar());\n            }\n        }\n        else\n        {\n            std::fill_n(data, n, value.scalar());\n        }\n    }\n\n    DISPATCH_BY_DTYPE(constant)\n    \n\n    
template<typename T, typename F>\n    Executor make_unary_executor( const F func )\n    {\n        return [=]( const Operation& op, TensorDict& tensors )\n        {\n            auto& x = op.inputs.get(\"x\");\n            auto& y = op.outputs.get(\"y\");\n            \n            unary(_tensor_view<const T>(x, tensors), _tensor_view<T>(y, tensors), func);\n        };\n    }\n    \n    template<typename T, typename F, typename... S>\n    Executor make_unary_executor_ext( const F func, const S ...attrib )\n    {\n        return [=]( const Operation& op, TensorDict& tensors )\n        {\n            auto& x = op.inputs.get(\"x\");\n            auto& y = op.outputs.get(\"y\");\n            \n            unary(_tensor_view<const T>(x, tensors), _tensor_view<T>(y, tensors), [&]( const T x )\n            {\n                return func(x, op.attribs.get(attrib).scalar()...);\n            });\n        };\n    }\n    \n    template<typename T, typename R = T, typename F>\n    Executor make_binary_executor( const F func )\n    {\n        return [=]( const Operation& op, TensorDict& tensors )\n        {\n            auto& x = op.inputs.get(\"x\");\n            auto& y = op.inputs.get(\"y\");\n            auto& z = op.outputs.get(\"z\");\n            \n            binary(_tensor_view<const T>(x, tensors), _tensor_view<const T>(y, tensors), _tensor_view<R>(z, tensors), func);\n        };\n    }\n\n    template<typename T, typename F>\n    Executor make_reduce_executor( const F func, const T init )\n    {\n        return [=]( const Operation& op, TensorDict& tensors )\n        {\n            auto& input = op.inputs.get(\"input\");\n            auto& output = op.outputs.get(\"output\");\n            \n            auto input_view = _tensor_view<const T>(input, tensors);\n            auto output_view = _tensor_view<T>(output, tensors);\n            \n            reduce(input_view, output_view, func, init);\n            \n            if ( op.name == \"mean_reduce\" || (op.name == 
\"sum_reduce\" && op.attribs.get(\"normalize\").logical()) )\n            {\n                const T volume = (T)(input_view.volume / output_view.volume);\n                binary((tensor_view<const T>)output_view, _tensor_view<const T>(volume), output_view, std::divides<T>());\n            }\n        };\n    }\n    \n    template<typename T>\n    void _execute_select( const Operation& op, TensorDict& tensors )\n    {\n        auto& c = op.inputs.get(\"condition\");\n        auto& x = op.inputs.get(\"true_value\");\n        auto& y = op.inputs.get(\"false_value\");\n        auto& z = op.outputs.get(\"output\");\n        \n        select(_tensor_view<const bool>(c, tensors),\n               _tensor_view<const T>(x, tensors),\n               _tensor_view<const T>(y, tensors),\n               _tensor_view<T>(z, tensors));\n    }\n\n    DISPATCH_BY_DTYPE(select)\n\n    inline Shape _extract_items( const Value& value )\n    {\n        Shape items(value.size());\n        for ( size_t i = 0; i < value.size(); ++i )\n        {\n            auto& v = value[i];\n            items[i] = v.kind() == Value::Tuple ? 
v[0].integer() : v.integer();\n        }\n        return items;\n    }\n\n    inline Shape _make_padding( const size_t rank, const int input[], const int output[], const int filter[],\n                              const int stride[], const int dilation[] )\n    {\n        Shape padding(rank);\n        for ( size_t i = 0; i < rank; ++i )\n        {\n            padding[i] = std::max((output[i] - 1) * stride[i] + (filter[i] - 1) * dilation[i] + 1 - input[i], 0) / 2;\n        }\n        return padding;\n    }\n\n    template<bool Transposed, typename T>\n    void execute_conv( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& filter = op.inputs.get(\"filter\");\n        auto& bias = op.inputs.get(\"bias\");\n        auto& output = op.outputs.get(\"output\");\n        \n        auto& padding = op.attribs.get(\"padding\");\n        auto& stride = op.attribs.get(\"stride\");\n        auto& dilation = op.attribs.get(\"dilation\");\n        auto& groups = op.attribs.get(\"groups\").integer();\n        auto& border = op.attribs.get(\"border\").string();\n        \n        if ( border != \"constant\" )\n        {\n            throw std::runtime_error(\"operation not implemented: \" + op.name + \" with border = '\" + border + \"'\");\n        }\n        \n        auto input_view = _tensor_view<T>(Transposed ? output : input, tensors);\n        auto output_view = _tensor_view<T>(Transposed ? input : output, tensors);\n        auto filter_view = _tensor_view<const T>(filter, tensors);\n        auto bias_view = _tensor_view<const T>(bias, tensors);\n        \n        const size_t d = input_view.rank - 2;\n        check_supported_rank(op.name, d, 3);\n        \n        const Shape strideShape = stride.size() ? _extract_items(stride) : Shape(d, 1);\n        const Shape dilationShape = dilation.size() ? _extract_items(dilation) : Shape(d, 1);\n        const Shape paddingShape = padding.size() ? 
_extract_items(padding) : _make_padding(d, input_view.shape + 2,\n                                                                                            output_view.shape + 2, filter_view.shape + 2,\n                                                                                            strideShape.data(), dilationShape.data());\n        if ( groups == 1 )\n        {\n            conv<Transposed>(filter_view, bias_view, input_view, output_view,\n                             paddingShape.data(), strideShape.data(), dilationShape.data());\n        }\n        else if ( groups == 0 || groups == input_view.shape[1] )\n        {\n            depthwise_conv<Transposed>(filter_view, bias_view, input_view, output_view,\n                                       paddingShape.data(), strideShape.data(), dilationShape.data());\n        }\n        else\n        {\n            grouped_conv<Transposed>(filter_view, bias_view, input_view, output_view,\n                                     paddingShape.data(), strideShape.data(), dilationShape.data(), groups);\n        }\n    }\n\n    template<bool Transposed, typename T, typename F>\n    void _execute_pool( const Operation& op, TensorDict& tensors, const F func, const T init )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        \n        auto& size = op.attribs.get(\"size\");\n        auto& padding = op.attribs.get(\"padding\");\n        auto& stride = op.attribs.get(\"stride\");\n        auto& dilation = op.attribs.get(\"dilation\");\n        auto& border = op.attribs.get(\"border\").string();\n        \n        if ( border != \"constant\" && border != \"ignore\" )\n        {\n            throw std::runtime_error(\"operation not implemented: \" + op.name + \" with border = '\" + border + \"'\");\n        }\n        \n        auto input_view = _tensor_view<T>(Transposed ? output : input, tensors);\n        auto output_view = _tensor_view<T>(Transposed ? 
input : output, tensors);\n        \n        const size_t d = input_view.rank;\n        check_supported_rank(op.name, d, 5);\n        \n        const Shape sizeShape = _extract_items(size);\n        const Shape strideShape = stride.size() ? _extract_items(stride) : Shape(d, 1);\n        const Shape dilationShape = dilation.size() ? _extract_items(dilation) : Shape(d, 1);\n        const Shape paddingShape = padding.size() ? _extract_items(padding) : _make_padding(d, input_view.shape, output_view.shape,\n                                                                                            sizeShape.data(), strideShape.data(),\n                                                                                            dilationShape.data());\n        \n        pool<Transposed>(input_view, output_view, sizeShape.data(), paddingShape.data(), strideShape.data(), dilationShape.data(),\n                         func, init, border != \"ignore\");\n        \n        if ( op.name == \"avg_pool\" || op.name == \"avg_unpool\" || ((op.name == \"box\" || op.name == \"debox\") &&\n                                                                  op.attribs.get(\"normalize\").logical()) )\n        {\n            if ( border == \"constant\" )\n            {\n                const T volume = (T)volume_of(sizeShape);\n                binary((tensor_view<const T>)output_view, _tensor_view<const T>(volume), output_view, std::divides<T>());\n            }\n            else if ( border == \"ignore\" )\n            {\n                Tensor tensor = _make_tensor(d, output_view.shape, sizeof(T));\n                \n                pool_area<Transposed>(_tensor_view<T>(tensor), input_view.shape, output_view.shape, sizeShape.data(),\n                                      paddingShape.data(), strideShape.data(), dilationShape.data());\n                \n                binary((tensor_view<const T>)output_view, _tensor_view<const T>(tensor), output_view, std::divides<T>());\n            
}\n        }\n    }\n\n    template<bool Transposed, typename T, typename F>\n    Executor make_pool_executor( const F func, const T init )\n    {\n        return [=]( const Operation& op, TensorDict& tensors )\n        {\n            _execute_pool<Transposed>(op, tensors, func, init);\n        };\n    }\n    \n    template<typename T>\n    void _execute_reshape( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        \n        auto input_view = _tensor_view<const T>(input, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        std::copy_n(input_view.data, input_view.volume, output_view.data);\n    }\n\n    DISPATCH_BY_DTYPE(reshape)\n    \n    template<typename T>\n    void _execute_transpose( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        auto& axes = op.attribs.get(\"axes\");\n        \n        const size_t rank = tensors.at(input.identifier()).shape.size();\n        check_supported_rank(op.name, rank, 5);\n        \n        std::vector<size_t> perm(rank);\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            perm[i] = axes[i].integer();\n        }\n        std::iota(perm.begin() + axes.size(), perm.end(), axes.size());\n        \n        transpose(_tensor_view<const T>(input, tensors), _tensor_view<T>(output, tensors), perm.data());\n    }\n\n    DISPATCH_BY_DTYPE(transpose)\n\n    template<typename T>\n    void _execute_concat( const Operation& op, TensorDict& tensors )\n    {\n        auto& values = op.inputs.get(\"values\");\n        auto& value = op.outputs.get(\"value\");\n        auto& axis = op.attribs.get(\"axis\").integer();\n        \n        std::vector<tensor_view<const T>> v;\n        for ( size_t i = 0; i < values.size(); ++i )\n        {\n            
v.emplace_back(_tensor_view<const T>(values[i], tensors));\n        }\n        \n        if ( op.name == \"stack\" )\n        {\n            concat<true>(v.size(), v.data(), _tensor_view<T>(value, tensors), axis);\n        }\n        else\n        {\n            concat<false>(v.size(), v.data(), _tensor_view<T>(value, tensors), axis);\n        }\n    }\n\n    DISPATCH_BY_DTYPE(concat)\n\n    template<typename T>\n    void _execute_split( const Operation& op, TensorDict& tensors )\n    {\n        auto& value = op.inputs.get(\"value\");\n        auto& values = op.outputs.get(\"values\");\n        auto& axis = op.attribs.get(\"axis\").integer();\n        \n        std::vector<tensor_view<T>> v;\n        for ( size_t i = 0; i < values.size(); ++i )\n        {\n            v.emplace_back(_tensor_view<T>(values[i], tensors));\n        }\n        \n        if ( op.name == \"unstack\" )\n        {\n            split<true>(v.size(), _tensor_view<const T>(value, tensors), v.data(), axis);\n        }\n        else\n        {\n            split<false>(v.size(), _tensor_view<const T>(value, tensors), v.data(), axis);\n        }\n    }\n\n    DISPATCH_BY_DTYPE(split)\n\n    template<typename T>\n    void execute_pad( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        auto& padding = op.attribs.get(\"padding\");\n        auto& border = op.attribs.get(\"border\").string();\n        auto& value = op.attribs.get(\"value\");\n        \n        auto input_view = _tensor_view<T>(input, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        auto paddingShape = _extract_items(padding);\n        \n        const size_t d = input_view.rank;\n        check_supported_rank(op.name, d, 5);\n        \n        if ( border == \"constant\" )\n        {\n            pad_constant<T>(input_view, output_view, paddingShape.data(), value.get<T>());\n        }\n  
      else if ( border == \"replicate\" )\n        {\n            pad_replicate<T>(input_view, output_view, paddingShape.data());\n        }\n        else if ( border == \"reflect\" )\n        {\n            pad_reflect<T>(input_view, output_view, paddingShape.data());\n        }\n        else if ( border == \"reflect-even\" )\n        {\n            pad_reflect_even<T>(input_view, output_view, paddingShape.data());\n        }\n        else\n        {\n            throw std::runtime_error(\"operation not implemented: pad with border == '\" + border + \"'\");\n        }\n    }\n\n    template<typename T>\n    void _execute_tile( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        \n        auto input_view = _tensor_view<T>(input, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        const size_t d = input_view.rank;\n        check_supported_rank(op.name, d, 5);\n        \n        tile<T>(input_view, output_view);\n    }\n\n    DISPATCH_BY_DTYPE(tile)\n\n    template<typename T>\n    void _execute_slice( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        auto& axes = op.attribs.get(\"axes\");\n        auto& begin = op.attribs.get(\"begin\");\n        auto& stride = op.attribs.get(\"stride\");\n        \n        auto input_view = _tensor_view<T>(input, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        const size_t d = input_view.rank;\n        check_supported_rank(op.name, d, 5);\n        \n        std::vector<int> offset(d, 0);\n        std::vector<int> step(d, 1);\n        for ( size_t i = 0; i < axes.size(); ++i )\n        {\n            auto axis = axes[i].integer();\n            auto offs = begin[i].integer();\n            if ( offs < 0 )\n            {\n                
offs += input_view.shape[axis];\n            }\n            if ( offs < 0 )\n            {\n                offs = -1;\n            }\n            if ( offs > input_view.shape[axis] )\n            {\n                offs = input_view.shape[axis];\n            }\n            \n            offset[axis] = offs;\n            step[axis] = stride.size() ? stride[i].integer() : 1;\n        }\n        \n        slice<T>(input_view, output_view, offset.data(), step.data());\n    }\n\n    DISPATCH_BY_DTYPE(slice)\n    \n    template<typename T>\n    void _execute_gather( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& indices = op.inputs.get(\"indices\");\n        auto& output = op.outputs.get(\"output\");\n        auto& axis = op.attribs.get(\"axis\").integer();\n        \n        auto input_view = _tensor_view<T>(input, tensors);\n        auto indices_view = _tensor_view<const int>(indices, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        gather<T>(input_view, indices_view, output_view, axis);\n    }\n    \n    DISPATCH_BY_DTYPE(gather)\n\n    template<typename T>\n    void _execute_cast( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        \n        auto& input_dtype = input.kind() == Value::Identifier ? 
tensors.at(input.identifier()).dtype : _literal_dtype(input);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        if ( input_dtype == \"scalar\" )\n        {\n            auto input_view = _tensor_view<const Value::scalar_t>(input, tensors);\n            std::copy_n(input_view.data, input_view.volume, output_view.data);\n        }\n        else if ( input_dtype == \"integer\" )\n        {\n            auto input_view = _tensor_view<const Value::integer_t>(input, tensors);\n            std::copy_n(input_view.data, input_view.volume, output_view.data);\n        }\n        else if ( input_dtype == \"logical\" )\n        {\n            auto input_view = _tensor_view<const Value::logical_t>(input, tensors);\n            std::copy_n(input_view.data, input_view.volume, output_view.data);\n        }\n        else\n        {\n            throw std::runtime_error(\"operation 'cast' from dtype 'string' is not implemented\");\n        }\n    }\n\n    DISPATCH_BY_DTYPE(cast)\n\n    template<typename T>\n    void execute_matmul( const Operation& op, TensorDict& tensors )\n    {\n        auto& A = op.inputs.get(\"A\");\n        auto& B = op.inputs.get(\"B\");\n        auto& C = op.outputs.get(\"C\");\n        \n        bool trA = op.attribs.get(\"transposeA\").logical();\n        bool trB = op.attribs.get(\"transposeB\").logical();\n        \n        matmul(trA, trB, _tensor_view<const T>(A, tensors), _tensor_view<const T>(B, tensors), _tensor_view<T>(C, tensors));\n    }\n\n    template<typename T>\n    void execute_linear( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& filter = op.inputs.get(\"filter\");\n        auto& bias = op.inputs.get(\"bias\");\n        auto& output = op.outputs.get(\"output\");\n        \n        linear(_tensor_view<const T>(filter, tensors), _tensor_view<const T>(bias, tensors),\n               _tensor_view<const T>(input, tensors), _tensor_view<T>(output, 
tensors));\n    }\n    \n    template<typename T>\n    void execute_softmax( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"x\");\n        auto& output = op.outputs.get(\"y\");\n        auto& axes = op.attribs.get(\"axes\");\n        \n        auto input_view = _tensor_view<const T>(input, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        if ( axes.size() != 1 )\n        {\n            throw std::runtime_error(\"operation not implemented: softmax with multiple axes\");\n        }\n        \n        softmax(input_view, output_view, axes[0].integer());\n    }\n\n    template<typename T, typename I, typename F>\n    Executor make_arg_reduce_executor( const F func )\n    {\n        return [=]( const Operation& op, TensorDict& tensors )\n        {\n            auto& input = op.inputs.get(\"input\");\n            auto& output = op.outputs.get(\"output\");\n            auto& axes = op.attribs.get(\"axes\");\n            \n            auto input_view = _tensor_view<const T>(input, tensors);\n            auto output_view = _tensor_view<I>(output, tensors);\n            \n            if ( axes.size() != 1 )\n            {\n                throw std::runtime_error(\"operation not implemented: argmax_reduce with multiple axes\");\n            }\n            \n            arg_reduce(input_view, output_view, axes[0].integer(), func);\n        };\n    }\n\n    template<typename T>\n    void execute_multilinear_upsample( const Operation& op, TensorDict& tensors )\n    {\n        auto& input = op.inputs.get(\"input\");\n        auto& output = op.outputs.get(\"output\");\n        auto& factor = op.attribs.get(\"factor\");\n        auto& border = op.attribs.get(\"border\").string();\n        auto& method = op.attribs.get(\"method\").string();\n        \n        auto input_view = _tensor_view<T>(input, tensors);\n        auto output_view = _tensor_view<T>(output, tensors);\n        \n        const 
size_t d = input_view.rank - 2;\n        check_supported_rank(op.name, d, 2);\n        \n        for ( size_t i = 0; i < factor.size(); ++i )\n        {\n            if ( factor[i].integer() != 2 )\n            {\n                throw std::runtime_error(\"operation not implemented: multilinear_upsample with factor != 2\");\n            }\n        }\n        \n        if ( method == \"aligned\" )\n        {\n            throw std::runtime_error(\"operation not implemented: multilinear_upsample with method == 'aligned'\");\n        }\n        \n        if ( border == \"constant\" )\n        {\n            if ( method == \"symmetric\" )\n            {\n                multilinear_upsample2x_symmetric(input_view, output_view);\n            }\n            else if ( method == \"asymmetric\" )\n            {\n                multilinear_upsample2x_asymmetric(input_view, output_view);\n            }\n        }\n        else if ( border == \"replicate\" )\n        {\n            Shape input_padding(input_view.rank, 0);\n            for ( size_t i = 2; i < input_view.rank; ++i )\n            {\n                input_padding[i] = 1;\n            }\n            \n            Shape output_padding(output_view.rank, 0);\n            for ( size_t i = 2; i < output_view.rank; ++i )\n            {\n                output_padding[i] = factor[i-2].integer();\n            }\n            \n            Shape padded_input_shape(input_view.shape, input_view.shape + input_view.rank);\n            for ( size_t i = 2; i < padded_input_shape.size(); ++i )\n            {\n                padded_input_shape[i] += 1 + 1;\n            }\n            \n            Shape padded_output_shape(output_view.shape, output_view.shape + output_view.rank);\n            for ( size_t i = 2; i < padded_output_shape.size(); ++i )\n            {\n                padded_output_shape[i] += 2 * factor[i-2].integer();\n            }\n            \n            Tensor padded_input = 
_make_tensor(padded_input_shape.size(), padded_input_shape.data(), sizeof(T));\n            Tensor padded_output = _make_tensor(padded_output_shape.size(), padded_output_shape.data(), sizeof(T));\n            \n            pad_replicate((tensor_view<const T>)input_view, _tensor_view<T>(padded_input), input_padding.data());\n            \n            if ( method == \"symmetric\" )\n            {\n                multilinear_upsample2x_symmetric(_tensor_view<T>(padded_input), _tensor_view<T>(padded_output));\n            }\n            else if ( method == \"asymmetric\" )\n            {\n                multilinear_upsample2x_asymmetric(_tensor_view<T>(padded_input), _tensor_view<T>(padded_output));\n            }\n            \n            const Shape stride(input_view.rank, 1);\n            \n            slice(_tensor_view<const T>(padded_output), output_view, output_padding.data(), stride.data());\n        }\n        else\n        {\n            throw std::runtime_error(\"operation not implemented: multilinear_upsample with border == '\" + border + \"'\");\n        }\n    }\n\n    template<typename T>\n    void _execute_update( const Operation& op, TensorDict& tensors )\n    {\n        auto& value = op.inputs.get(\"value\");\n        auto& result = op.outputs.get(\"result\");\n        \n        auto input_view = _tensor_view<const T>(value, tensors);\n        auto output_view = _tensor_view<T>(result, tensors);\n        \n        std::copy_n(input_view.data, input_view.volume, output_view.data);\n    }\n\n    DISPATCH_BY_DTYPE(update)\n\n    \n    static const std::map<std::string,Executor> Executors =\n    {\n        { \"external\", execute_external },\n        { \"constant\", execute_constant },\n        { \"variable\", execute_variable },\n        \n        { \"neg\", make_unary_executor<float>(std::negate<float>()) },\n        { \"not\", make_unary_executor<bool>(std::logical_not<bool>()) },\n        { \"abs\", make_unary_executor<float>([]( float x ){ return 
std::abs(x); }) },\n        { \"sign\", make_unary_executor<float>([]( float x ){ return x > 0.f ? 1.f : x < 0.f ? -1.f : 0.f; }) },\n        { \"exp\", make_unary_executor<float>([]( float x ){ return std::exp(x); }) },\n        { \"log\", make_unary_executor<float>([]( float x ){ return std::log(x); }) },\n        { \"log2\", make_unary_executor<float>([]( float x ){ return std::log(x) / std::log(2.f); }) },\n        { \"sin\", make_unary_executor<float>([]( float x ){ return std::sin(x); }) },\n        { \"cos\", make_unary_executor<float>([]( float x ){ return std::cos(x); }) },\n        { \"tan\", make_unary_executor<float>([]( float x ){ return std::tan(x); }) },\n        { \"asin\", make_unary_executor<float>([]( float x ){ return std::asin(x); }) },\n        { \"acos\", make_unary_executor<float>([]( float x ){ return std::acos(x); }) },\n        { \"atan\", make_unary_executor<float>([]( float x ){ return std::atan(x); }) },\n        { \"sinh\", make_unary_executor<float>([]( float x ){ return std::sinh(x); }) },\n        { \"cosh\", make_unary_executor<float>([]( float x ){ return std::cosh(x); }) },\n        { \"tanh\", make_unary_executor<float>([]( float x ){ return std::tanh(x); }) },\n        { \"asinh\", make_unary_executor<float>([]( float x ){ return std::asinh(x); }) },\n        { \"acosh\", make_unary_executor<float>([]( float x ){ return std::acosh(x); }) },\n        { \"atanh\", make_unary_executor<float>([]( float x ){ return std::atanh(x); }) },\n        { \"round\", make_unary_executor<float>([]( float x ){ return std::round(x); }) },\n        { \"floor\", make_unary_executor<float>([]( float x ){ return std::floor(x); }) },\n        { \"ceil\", make_unary_executor<float>([]( float x ){ return std::ceil(x); }) },\n        { \"sqrt\", make_unary_executor<float>([]( float x ){ return std::sqrt(x); }) },\n        { \"sqr\", make_unary_executor<float>([]( float x ){ return x * x; }) },\n        { \"rsqrt\", make_unary_executor<float>([]( float 
x ){ return 1.f / std::sqrt(x); }) },\n        { \"rsqr\", make_unary_executor<float>([]( float x ){ return 1.f / (x * x); }) },\n        { \"rcp\", make_unary_executor<float>([]( float x ){ return 1.f / x; }) },\n        { \"copy\", make_unary_executor<float>([]( float x ){ return x; }) },\n        \n        { \"sigmoid\", make_unary_executor<float>([]( float x ){ return 1.f / (1.f + std::exp(-x)); }) },\n        { \"relu\", make_unary_executor<float>([]( float x ){ return std::max(x, 0.f); }) },\n        { \"leaky_relu\", make_unary_executor_ext<float>([]( float x, float alpha )\n            { return x < 0.f ? alpha * x : x; }, \"alpha\") },\n        { \"elu\", make_unary_executor_ext<float>([]( float x, float alpha )\n            { return x < 0.f ? alpha * (std::exp(x) - 1.f) : x; }, \"alpha\") },\n        { \"selu\", make_unary_executor_ext<float>([]( float x, float alpha, float lambda )\n            { return lambda * (x < 0.f ? 
alpha * (std::exp(x) - 1.f) : x); }, \"alpha\", \"lambda\") },\n        { \"gelu\", make_unary_executor<float>([]( float x ){ return x / (1.f + std::exp(-1.702f * x)); }) },\n        { \"silu\", make_unary_executor<float>([]( float x ){ return x / (1.f + std::exp(-x)); }) },\n        { \"softplus\", make_unary_executor<float>([]( float x ){ return std::log(std::exp(x) + 1.f); }) },\n        \n        { \"add\", make_binary_executor<float>(std::plus<float>()) },\n        { \"sub\", make_binary_executor<float>(std::minus<float>()) },\n        { \"mul\", make_binary_executor<float>(std::multiplies<float>()) },\n        { \"div\", make_binary_executor<float>(std::divides<float>()) },\n        { \"pow\", make_binary_executor<float>([]( float x, float y ){ return std::pow(x,y); }) },\n        { \"min\", make_binary_executor<float>([]( float x, float y ){ return std::min(x,y); }) },\n        { \"max\", make_binary_executor<float>([]( float x, float y ){ return std::max(x,y); }) },\n        { \"and\", make_binary_executor<bool>(std::logical_and<bool>()) },\n        { \"or\", make_binary_executor<bool>(std::logical_or<bool>()) },\n        { \"lt\", make_binary_executor<float,bool>(std::less<float>()) },\n        { \"gt\", make_binary_executor<float,bool>(std::greater<float>()) },\n        { \"le\", make_binary_executor<float,bool>(std::less_equal<float>()) },\n        { \"ge\", make_binary_executor<float,bool>(std::greater_equal<float>()) },\n        { \"eq\", make_binary_executor<float,bool>(std::equal_to<float>()) },\n        { \"ne\", make_binary_executor<float,bool>(std::not_equal_to<float>()) },\n        \n        { \"select\", execute_select },\n        \n        { \"sum_reduce\", make_reduce_executor(std::plus<float>(), 0.f) },\n        { \"mean_reduce\", make_reduce_executor(std::plus<float>(), 0.f) },\n        { \"min_reduce\", make_reduce_executor([]( float x, float y ){ return std::min(x,y); }, std::numeric_limits<float>::infinity()) },\n        { \"max_reduce\", 
make_reduce_executor([]( float x, float y ){ return std::max(x,y); }, -std::numeric_limits<float>::infinity()) },\n        { \"any_reduce\", make_reduce_executor(std::logical_or<bool>(), false) },\n        { \"all_reduce\", make_reduce_executor(std::logical_and<bool>(), true) },\n        \n        { \"conv\", execute_conv<false,float> },\n        { \"deconv\", execute_conv<true,float> },\n        \n        { \"box\", make_pool_executor<false>(std::plus<float>(), 0.f) },\n        { \"debox\", make_pool_executor<true>(std::plus<float>(), 0.f) },\n        { \"sum_pool\", make_pool_executor<false>(std::plus<float>(), 0.f) },\n        { \"sum_unpool\", make_pool_executor<true>(std::plus<float>(), 0.f) },\n        { \"avg_pool\", make_pool_executor<false>(std::plus<float>(), 0.f) },\n        { \"avg_unpool\", make_pool_executor<true>(std::plus<float>(), 0.f) },\n        { \"min_pool\", make_pool_executor<false>([]( float x, float y ){ return std::min(x,y); }, std::numeric_limits<float>::infinity()) },\n        { \"max_pool\", make_pool_executor<false>([]( float x, float y ){ return std::max(x,y); }, -std::numeric_limits<float>::infinity()) },\n        \n        { \"reshape\", execute_reshape },\n        { \"squeeze\", execute_reshape },\n        { \"unsqueeze\", execute_reshape },\n        { \"transpose\", execute_transpose },\n        \n        { \"concat\", execute_concat },\n        { \"split\", execute_split },\n        { \"stack\", execute_concat },\n        { \"unstack\", execute_split },\n        { \"pad\", execute_pad<float> },\n        { \"tile\", execute_tile },\n        { \"slice\", execute_slice },\n        { \"gather\", execute_gather },\n        { \"cast\", execute_cast },\n        \n        { \"matmul\", execute_matmul<float> },\n        { \"linear\", execute_linear<float> },\n        \n        { \"softmax\", execute_softmax<float> },\n        { \"argmin_reduce\", make_arg_reduce_executor<float,int>(std::less<float>()) },\n        { \"argmax_reduce\", 
make_arg_reduce_executor<float,int>(std::greater<float>()) },\n        \n        { \"multilinear_upsample\", execute_multilinear_upsample<float> },\n        \n        { \"update\", execute_update },\n    };\n\n}}   // namespace nnef::rt\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/runtime/ndrange.h",
    "content": "/*\n* Copyright (c) 2017 The Khronos Group Inc.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\n#ifndef _NNEF_RUNTIME_NDRANGE_H_\n#define _NNEF_RUNTIME_NDRANGE_H_\n\n\nnamespace nnef { namespace rt\n{\n\n    template<size_t N, size_t K, typename I, typename S, typename Op>\n    struct _nd_loop\n    {\n        static inline void call( const S shape[], I index[], const Op& op )\n        {\n            for ( index[N-K] = 0; index[N-K] < shape[N-K]; ++index[N-K] )\n            {\n                _nd_loop<N,K-1,I,S,Op>::call(shape, index, op);\n            }\n        }\n    };\n\n    template<size_t N, typename I, typename S, typename Op>\n    struct _nd_loop<N,1,I,S,Op>\n    {\n        static inline void call( const S shape[], I index[], const Op& op )\n        {\n            for ( index[N-1] = 0; index[N-1] < shape[N-1]; ++index[N-1] )\n            {\n                op(index);\n            }\n        }\n    };\n    \n    template<typename I, typename S, typename Op>\n    struct _nd_loop<0,0,I,S,Op>\n    {\n        static inline void call( const S shape[], I index[], const Op& op )\n        {\n            op(index);\n        }\n    };\n\n    template<size_t N, typename I, typename S, typename Op>\n    inline void nd_loop( const S shape[], const Op& op )\n    {\n        I index[N];\n        _nd_loop<N,N,S,I,Op>::call(shape, index, op);\n    };\n\n\n    template<size_t N, typename I, typename S>\n    struct _nd_offset\n    {\n        static 
inline size_t call( const S shape[], const I index[] )\n        {\n            return _nd_offset<N-1,I,S>::call(shape, index) * shape[N-1] + index[N-1];\n        }\n    };\n\n    template<typename I, typename S>\n    struct _nd_offset<1,I,S>\n    {\n        static inline size_t call( const S shape[], const I index[] )\n        {\n            return index[0];\n        }\n    };\n\n    template<typename I, typename S>\n    struct _nd_offset<0,I,S>\n    {\n        static inline size_t call( const S shape[], const I index[] )\n        {\n            return 0;\n        }\n    };\n\n    template<size_t N, typename I, typename S>\n    inline size_t nd_offset( const S shape[], const I index[] )\n    {\n        return _nd_offset<N,I,S>::call(shape, index);\n    }\n\n    \n    template<size_t N, typename S>\n    struct _nd_volume\n    {\n        static inline size_t call( const S shape[] )\n        {\n            return _nd_volume<N-1,S>::call(shape) * shape[N-1];\n        }\n    };\n\n    template<typename S>\n    struct _nd_volume<1,S>\n    {\n        static inline size_t call( const S shape[] )\n        {\n            return shape[0];\n        }\n    };\n\n    template<typename S>\n    struct _nd_volume<0,S>\n    {\n        static inline size_t call( const S shape[] )\n        {\n            return 1;\n        }\n    };\n\n    template<size_t N, typename S>\n    inline size_t nd_volume( const S shape[] )\n    {\n        return _nd_volume<N,S>::call(shape);\n    }\n\n    template<typename S>\n    inline size_t nd_volume( const size_t rank, const S shape[] )\n    {\n        return std::accumulate(shape, shape + rank, (S)1, std::multiplies<S>());\n    }\n\n    \n    template<size_t N, typename Op>\n    struct _for_n\n    {\n        static inline void call( const Op& op )\n        {\n            _for_n<N-1,Op>::call(op);\n            op(N-1);\n        };\n    };\n\n    template<typename Op>\n    struct _for_n<1,Op>\n    {\n        static inline void call( const Op& op )\n     
   {\n            op(0);\n        };\n    };\n\n    template<typename Op>\n    struct _for_n<0,Op>\n    {\n        static inline void call( const Op& op )\n        {\n        };\n    };\n\n    template<size_t N, typename Op>\n    inline void for_n( const Op& op )\n    {\n        _for_n<N,Op>::call(op);\n    }\n\n\n    template<size_t N, typename Op>\n    struct _all_n\n    {\n        static inline bool call( const Op& op )\n        {\n            return _all_n<N-1,Op>::call(op) && op(N-1);\n        };\n    };\n\n    template<typename Op>\n    struct _all_n<1,Op>\n    {\n        static inline bool call( const Op& op )\n        {\n            return op(0);\n        };\n    };\n\n    template<typename Op>\n    struct _all_n<0,Op>\n    {\n        static inline bool call( const Op& op )\n        {\n            return true;\n        };\n    };\n\n    template<size_t N, typename Op>\n    inline bool all_n( const Op& op )\n    {\n        return _all_n<N,Op>::call(op);\n    };\n\n\n    template<typename T>\n    struct tensor_view\n    {\n        const size_t rank;\n        const size_t volume;\n        const int* shape;\n        T* data;\n        \n        tensor_view operator[]( const size_t idx ) const\n        {\n            const size_t size = volume / *shape;\n            return tensor_view{ rank - 1, size, shape + 1, data + size * idx };\n        }\n        \n        operator tensor_view<const T>() const\n        {\n            return tensor_view<const T>{ rank, volume, shape, data };\n        }\n    };\n\n    \n    template<size_t D, typename T>\n    T& at( tensor_view<T>& view, const int idx[] )\n    {\n        return view.data[nd_offset<D>(view.shape, idx)];\n    }\n\n}}  // namespace nnef::rt\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef/runtime/operations.h",
    "content": "/*\n* Copyright (c) 2017 The Khronos Group Inc.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\n#ifndef _NNEF_RUNTIME_OPERATIONS_H_\n#define _NNEF_RUNTIME_OPERATIONS_H_\n\n#include <cmath>\n#include <algorithm>\n#include \"ndrange.h\"\n\n\nnamespace nnef { namespace rt\n{\n    \n    template<typename T, typename Op>\n    void _unary( const size_t n, const T* x, const size_t dx, T* y, const size_t dy, const Op& op )\n    {\n        for ( size_t i = 0; i < n; ++ i, x += dx, y += dy )\n        {\n            *y = op(*x);\n        }\n    }\n\n    template<typename T, typename Op>\n    void unary( tensor_view<const T> x, tensor_view<T> y, const Op& op )\n    {\n        _unary(y.volume, x.data, 1, y.data, 1, op);\n    }\n\n\n    template<typename T, typename R, typename Op>\n    inline void _binary( const size_t n, const T* x, const size_t dx, const T* y, const size_t dy, R* z, const size_t dz, const Op& op )\n    {\n        for ( size_t i = 0; i < n; ++i, x += dx, y += dy, z += dz )\n        {\n            *z = op(*x, *y);\n        }\n    }\n\n    template<typename T, typename R, typename Op>\n    inline void binary( tensor_view<const T> x, tensor_view<const T> y, tensor_view<R> z, const Op& op )\n    {\n        if ( (x.volume == z.volume || x.volume == 1) && (y.volume == z.volume || y.volume == 1) )\n        {\n            _binary(z.volume, x.data, x.volume == z.volume, y.data, y.volume == z.volume, z.data, 1, op);\n        }\n        else\n  
      {\n            const size_t dx = *x.shape != 1;\n            const size_t dy = *y.shape != 1;\n            \n            for ( size_t xi = 0, yi = 0, zi = 0; zi < *z.shape; ++zi, xi += dx, yi += dy )\n            {\n                binary(x[xi], y[yi], z[zi], op);\n            }\n        }\n    }\n\n    \n    template<typename T>\n    void _select( const size_t n, const bool* c, const size_t dc, const T* x, const size_t dx, const T* y, const size_t dy, T* z, const size_t dz )\n    {\n        for ( size_t i = 0; i < n; ++i, c += dc, x += dx, y += dy, z += dz )\n        {\n            *z = *c ? *x : *y;\n        }\n    }\n    \n    template<typename T>\n    void select( tensor_view<const bool> c, tensor_view<const T> x, tensor_view<const T> y, tensor_view<T> z )\n    {\n        if ( (c.volume == z.volume || c.volume == 1) && (x.volume == z.volume || x.volume == 1) && (y.volume == z.volume || y.volume == 1) )\n        {\n            _select(z.volume, c.data, c.volume == z.volume, x.data, x.volume == z.volume, y.data, y.volume == z.volume, z.data, 1);\n        }\n        else\n        {\n            const size_t dc = *c.shape != 1;\n            const size_t dx = *x.shape != 1;\n            const size_t dy = *y.shape != 1;\n            \n            for ( size_t ci = 0, xi = 0, yi = 0, zi = 0; zi < *z.shape; ++zi, ci += dc, xi += dx, yi += dy )\n            {\n                select(c[ci], x[xi], y[yi], z[zi]);\n            }\n        }\n    }\n    \n\n    template<typename T, typename Op>\n    void _reduce( const size_t n, const T* x, const size_t dx, T* y, const size_t dy, const Op& op )\n    {\n        for ( size_t i = 0; i < n; ++i, x += dx, y += dy )\n        {\n            *y = op(*x, *y);\n        }\n    }\n\n    template<typename T, typename Op>\n    void _reduce( tensor_view<const T> x, tensor_view<T> y, const Op& op )\n    {\n        if ( y.volume == x.volume || y.volume == 1 )\n        {\n            _reduce(x.volume, x.data, 1, y.data, y.volume == 
x.volume, op);\n        }\n        else\n        {\n            const size_t dy = *y.shape != 1;\n            \n            for ( size_t xi = 0, yi = 0; xi < *x.shape; ++xi, yi += dy )\n            {\n                _reduce(x[xi], y[yi], op);\n            }\n        }\n    }\n\n    template<typename T, typename Op>\n    void reduce( tensor_view<const T> x, tensor_view<T> y, const Op& op, const T init )\n    {\n        std::fill_n(y.data, y.volume, init);\n        _reduce(x, y, op);\n    }\n\n\n    template<typename T>\n    void _bias( tensor_view<const T> bias, tensor_view<T> tensor )\n    {\n        if ( bias.volume == 1 )\n        {\n            std::fill_n(tensor.data, tensor.volume, *bias.data);\n        }\n        else\n        {\n            T* data = tensor.data;\n            const size_t size = nd_volume(tensor.rank - 2, tensor.shape + 2);\n            for ( size_t b = 0; b < tensor.shape[0]; ++b )\n            {\n                for ( size_t c = 0; c < tensor.shape[1]; ++c, data += size )\n                {\n                    std::fill_n(data, size, bias.data[c]);\n                }\n            }\n        }\n    }\n\n    template<bool Transposed, size_t D, typename T>\n    static void _conv_core( tensor_view<const T> filter, tensor_view<T> input, tensor_view<T> output,\n                           const int padding[], const int stride[], const int dilation[] )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            nd_loop<D,int>(filter.shape, [&]( const int filter_index[] )\n            {\n                for_n<D>([&]( const size_t k )\n                {\n                    input_index[k] = output_index[k] * stride[k] + filter_index[k] * dilation[k] - padding[k];\n                });\n                \n                if ( all_n<D>([&]( const size_t k ){ return input_index[k] >= 0 && input_index[k] < input.shape[k]; }) )\n                {\n                    if ( Transposed 
)\n                    {\n                        at<D>(input, input_index) += at<D>(output, output_index) * at<D>(filter, filter_index);\n                    }\n                    else\n                    {\n                        at<D>(output, output_index) += at<D>(input, input_index) * at<D>(filter, filter_index);\n                    }\n                }\n            });\n        });\n    }\n\n    template<bool Transposed, size_t D, typename T>\n    void _conv( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<T> input, tensor_view<T> output,\n               const int padding[], const int stride[], const int dilation[] )\n    {\n        _bias(bias, Transposed ? input : output);\n        \n        for ( size_t b = 0; b < output.shape[0]; ++b )\n        {\n            for ( size_t z = 0; z < output.shape[1]; ++z )\n            {\n                for ( size_t c = 0; c < input.shape[1]; ++c )\n                {\n                    _conv_core<Transposed,D>(filter[z][c], input[b][c], output[b][z], padding, stride, dilation);\n                }\n            }\n        }\n    }\n\n    template<bool Transposed, typename T>\n    void conv( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<T> input, tensor_view<T> output,\n               const int padding[], const int stride[], const int dilation[] )\n    {\n        static decltype(&_conv<Transposed,1,T>) funcs[] =\n        {\n            _conv<Transposed,1,T>,\n            _conv<Transposed,2,T>,\n            _conv<Transposed,3,T>,\n        };\n        funcs[input.rank - 3](filter, bias, input, output, padding, stride, dilation);\n    }\n\n    template<bool Transposed, size_t D, typename T>\n    void _depthwise_conv( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<T> input, tensor_view<T> output,\n                         const int padding[], const int stride[], const int dilation[] )\n    {\n        const size_t multiplier = output.shape[1] / input.shape[1];\n  
      const bool broadcast = filter.shape[0] == 1;\n        \n        _bias(bias, Transposed ? input : output);\n        \n        for ( size_t b = 0; b < input.shape[0]; ++b )\n        {\n            for ( size_t c = 0; c < input.shape[1]; ++c )\n            {\n                for ( size_t m = 0; m < multiplier; ++m )\n                {\n                    const size_t z = multiplier * c + m;\n                    _conv_core<Transposed,D>(filter[broadcast ? 0 : z][0], input[b][c], output[b][z], padding, stride, dilation);\n                }\n            }\n        }\n    }\n    \n    template<bool Transposed, typename T>\n    void depthwise_conv( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<T> input, tensor_view<T> output,\n                        const int padding[], const int stride[], const int dilation[] )\n    {\n        static decltype(&_depthwise_conv<Transposed,1,T>) funcs[] =\n        {\n            _depthwise_conv<Transposed,1,T>,\n            _depthwise_conv<Transposed,2,T>,\n            _depthwise_conv<Transposed,3,T>,\n        };\n        funcs[input.rank - 3](filter, bias, input, output, padding, stride, dilation);\n    }\n\n    template<bool Transposed, size_t D, typename T>\n    void _grouped_conv( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<T> input, tensor_view<T> output,\n                       const int padding[], const int stride[], const int dilation[], const size_t groups )\n    {\n        _bias(bias, Transposed ? 
input : output);\n        \n        const size_t input_block = input.shape[1] / groups;\n        const size_t output_block = output.shape[1] / groups;\n        \n        for ( size_t b = 0; b < input.shape[0]; ++b )\n        {\n            for ( size_t g = 0; g < groups; ++g )\n            {\n                for ( size_t z = 0; z < output_block; ++z )\n                {\n                    for ( size_t c = 0; c < input_block; ++c )\n                    {\n                        _conv_core<Transposed,D>(filter[g * output_block + z][c],\n                                                 input[b][g * input_block + c],\n                                                 output[b][g * output_block + z],\n                                                 padding, stride, dilation);\n                    }\n                }\n            }\n        }\n    }\n\n    template<bool Transposed, typename T>\n    void grouped_conv( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<T> input, tensor_view<T> output,\n                      const int padding[], const int stride[], const int dilation[], const size_t groups )\n    {\n        static decltype(&_grouped_conv<Transposed,1,T>) funcs[] =\n        {\n            _grouped_conv<Transposed,1,T>,\n            _grouped_conv<Transposed,2,T>,\n            _grouped_conv<Transposed,3,T>,\n        };\n        funcs[input.rank - 3](filter, bias, input, output, padding, stride, dilation, groups);\n    }\n\n\n    template<bool Transposed, size_t D, typename T, typename Op>\n    static void _pool_core( tensor_view<T> input, tensor_view<T> output, const int size[], const int padding[],\n                           const int stride[], const int dilation[], const Op& op, const bool include_border )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            nd_loop<D,int>(size, [&]( const int kernel_index[] )\n            {\n                for_n<D>([&]( const 
size_t k )\n                {\n                    input_index[k] = output_index[k] * stride[k] + kernel_index[k] * dilation[k] - padding[k];\n                });\n                \n                const bool valid = all_n<D>([&]( const size_t k ){ return input_index[k] >= 0 && input_index[k] < input.shape[k]; });\n                \n                T& value = Transposed ? at<D>(input, input_index) : at<D>(output, output_index);\n                \n                if ( valid )\n                {\n                    value = op(value, Transposed ? at<D>(output, output_index) : at<D>(input, input_index));\n                }\n                else if ( include_border && !Transposed )\n                {\n                    value = op(value, (T)0);\n                }\n            });\n        });\n    }\n\n    template<bool Transposed, size_t D, typename T, typename Op>\n    void _pool( tensor_view<T> input, tensor_view<T> output, const int size[], const int padding[],\n               const int stride[], const int dilation[], const Op& op, const T init, const bool include_border )\n    {\n        std::fill_n(Transposed ? input.data : output.data, Transposed ? 
input.volume : output.volume, init);\n        \n        _pool_core<Transposed,D>(input, output, size, padding, stride, dilation, op, include_border);\n    }\n\n    template<bool Transposed, typename T, typename Op>\n    void pool( tensor_view<T> input, tensor_view<T> output, const int size[], const int padding[],\n              const int stride[], const int dilation[], const Op& op, const T init, const bool include_border )\n    {\n        static decltype(&_pool<Transposed,1,T,Op>) funcs[] =\n        {\n            _pool<Transposed,1,T,Op>,\n            _pool<Transposed,2,T,Op>,\n            _pool<Transposed,3,T,Op>,\n            _pool<Transposed,4,T,Op>,\n            _pool<Transposed,5,T,Op>,\n        };\n        funcs[input.rank - 1](input, output, size, padding, stride, dilation, op, init, include_border);\n    }\n\n    template<bool Transposed, size_t D, typename T>\n    static void _pool_area( tensor_view<T> tensor, const int input_shape[], const int output_shape[],\n                           const int size[], const int padding[], const int stride[], const int dilation[] )\n    {\n        std::fill_n(tensor.data, nd_volume<D>(tensor.shape), (T)0);\n        \n        int input_index[D];\n        nd_loop<D,int>(output_shape, [&]( const int output_index[] )\n        {\n            nd_loop<D,int>(size, [&]( const int kernel_index[] )\n            {\n                for_n<D>([&]( const size_t k )\n                {\n                    input_index[k] = output_index[k] * stride[k] + kernel_index[k] * dilation[k] - padding[k];\n                });\n                \n                if ( all_n<D>([&]( const size_t k ){ return input_index[k] >= 0 && input_index[k] < input_shape[k]; }) )\n                {\n                    ++at<D>(tensor, Transposed ? 
input_index : output_index);\n                }\n            });\n        });\n    }\n\n    template<bool Transposed, typename T>\n    static void pool_area( tensor_view<T> tensor, const int input_shape[], const int output_shape[],\n                          const int size[], const int padding[], const int stride[], const int dilation[] )\n    {\n        static decltype(&_pool_area<Transposed,1,T>) funcs[] =\n        {\n            _pool_area<Transposed,1,T>,\n            _pool_area<Transposed,2,T>,\n            _pool_area<Transposed,3,T>,\n            _pool_area<Transposed,4,T>,\n            _pool_area<Transposed,5,T>,\n        };\n        funcs[tensor.rank - 1](tensor, input_shape, output_shape, size, padding, stride, dilation);\n    }\n\n\n    template<bool trA, bool trB, typename T>\n    void _matmul( const size_t m, const size_t n, const size_t k, const T* A, const T* B, T* C )\n    {\n        for ( size_t i = 0; i < m; ++i )\n        {\n            for ( size_t j = 0; j < n; ++j, ++C )\n            {\n                for ( size_t l = 0; l < k; ++l )\n                {\n                    *C += A[trA ? l * m + i : i * k + l] * B[trB ? j * k + l : l * n + j];\n                }\n            }\n        }\n    }\n\n    template<typename T>\n    void matmul( const bool trA, const bool trB, tensor_view<const T> A, tensor_view<const T> B, tensor_view<T> C )\n    {\n        std::fill_n(C.data, nd_volume(C.rank, C.shape), 0.f);\n        \n        const size_t offset = C.rank - 2;\n        const size_t dA = nd_volume<2>(A.shape + offset);\n        const size_t dB = nd_volume<2>(B.shape + offset);\n        const size_t dC = nd_volume<2>(C.shape + offset);\n        const size_t m = C.shape[offset];\n        const size_t n = C.shape[offset + 1];\n        const size_t k = trA ? 
A.shape[offset] : A.shape[offset + 1];\n        \n        const size_t b = nd_volume(offset, C.shape);\n        for ( size_t i = 0; i < b; ++i, A.data += dA, B.data += dB, C.data += dC )\n        {\n            if ( trA && trB )\n            {\n                _matmul<true,true>(m, n, k, A.data, B.data, C.data);\n            }\n            else if ( trA )\n            {\n                _matmul<true,false>(m, n, k, A.data, B.data, C.data);\n            }\n            else if ( trB )\n            {\n                _matmul<false,true>(m, n, k, A.data, B.data, C.data);\n            }\n            else\n            {\n                _matmul<false,false>(m, n, k, A.data, B.data, C.data);\n            }\n        }\n    }\n\n    template<typename T>\n    void linear( tensor_view<const T> filter, tensor_view<const T> bias, tensor_view<const T> input, tensor_view<T> output )\n    {\n        const size_t m = output.shape[0];\n        const size_t n = output.shape[1];\n        const size_t k = input.shape[1];\n        \n        if ( bias.volume == 1 )\n        {\n            std::fill_n(output.data, m * n, *bias.data);\n        }\n        else\n        {\n            T* data = output.data;\n            for ( size_t i = 0; i < m; ++i, data += n )\n            {\n                std::copy_n(bias.data, n, data);\n            }\n        }\n        \n        _matmul<false,true>(m, n, k, input.data, filter.data, output.data);\n    }\n    \n\n    template<typename T>\n    void _linear_upsample2x_symmetric( tensor_view<T> input, tensor_view<T> output )\n    {\n        const T zero = 0;\n        const T weights[] =\n        {\n            0.25, 0.75, 0.75, 0.25\n        };\n        const int shape[] = { 1, 1, 4 };\n        tensor_view<const T> filter = { 3, 4, shape, weights };\n        tensor_view<const T> bias = { 0, 1, nullptr, &zero };\n        const int padding[] = { 1 };\n        const int stride[] = { 2 };\n        const int dilation[] = { 1 };\n        return 
_depthwise_conv<true,1>(filter, bias, output, input, padding, stride, dilation);\n    }\n\n    template<typename T>\n    void _linear_upsample2x_asymmetric( tensor_view<T> input, tensor_view<T> output )\n    {\n        const T zero = 0;\n        const T weights[] =\n        {\n            0.5, 1.0, 0.5,\n        };\n        const int shape[] = { 1, 1, 3 };\n        tensor_view<const T> filter = { 3, 3, shape, weights };\n        tensor_view<const T> bias = { 0, 1, nullptr, &zero };\n        const int padding[] = { 1 };\n        const int stride[] = { 2 };\n        const int dilation[] = { 1 };\n        return _depthwise_conv<true,1>(filter, bias, output, input, padding, stride, dilation);\n    }\n\n    template<typename T>\n    void _bilinear_upsample2x_symmetric( tensor_view<T> input, tensor_view<T> output )\n    {\n        const T zero = 0;\n        const T weights[] =\n        {\n            0.0625, 0.1875, 0.1875, 0.0625,\n            0.1875, 0.5625, 0.5625, 0.1875,\n            0.1875, 0.5625, 0.5625, 0.1875,\n            0.0625, 0.1875, 0.1875, 0.0625,\n        };\n        const int shape[] = { 1, 1, 4, 4 };\n        tensor_view<const T> filter = { 4, 16, shape, weights };\n        tensor_view<const T> bias = { 0, 1, nullptr, &zero };\n        const int padding[] = { 1, 1 };\n        const int stride[] = { 2, 2 };\n        const int dilation[] = { 1, 1 };\n        return _depthwise_conv<true,2>(filter, bias, output, input, padding, stride, dilation);\n    }\n\n    template<typename T>\n    void _bilinear_upsample2x_asymmetric( tensor_view<T> input, tensor_view<T> output )\n    {\n        const T zero = 0;\n        const T weights[] =\n        {\n            0.25, 0.5, 0.25,\n            0.50, 1.0, 0.50,\n            0.25, 0.5, 0.25,\n        };\n        const int shape[] = { 1, 1, 3, 3 };\n        tensor_view<const T> filter = { 4, 9, shape, weights };\n        tensor_view<const T> bias = { 0, 1, nullptr, &zero };\n        const int padding[] = { 1, 1 };\n    
    const int stride[] = { 2, 2 };\n        const int dilation[] = { 1, 1 };\n        return _depthwise_conv<true,2>(filter, bias, output, input, padding, stride, dilation);\n    }\n\n    template<typename T>\n    void multilinear_upsample2x_symmetric( tensor_view<T> input, tensor_view<T> output )\n    {\n        static decltype(&_linear_upsample2x_symmetric<T>) funcs[] =\n        {\n            _linear_upsample2x_symmetric<T>,\n            _bilinear_upsample2x_symmetric<T>,\n        };\n        return funcs[input.rank - 3](input, output);\n    }\n\n    template<typename T>\n    void multilinear_upsample2x_asymmetric( tensor_view<T> input, tensor_view<T> output )\n    {\n        static decltype(&_linear_upsample2x_asymmetric<T>) funcs[] =\n        {\n            _linear_upsample2x_asymmetric<T>,\n            _bilinear_upsample2x_asymmetric<T>,\n        };\n        return funcs[input.rank - 3](input, output);\n    }\n\n\n    template<typename T, typename Op>\n    T _reduce( const size_t n, const T* x, const size_t dx, const Op& op )\n    {\n        T r = *x;\n        x += dx;\n        for ( size_t i = 1; i < n; ++i, x += dx )\n        {\n            r = op(r, *x);\n        }\n        return r;\n    }\n\n    template<typename T>\n    void _softmax( const size_t n, const size_t m, const T* x, T* y )\n    {\n        const T max = _reduce(n, x, m, []( const T x, const T y ){ return std::max(x,y); });\n        _unary(n, x, m, y, m, [&]( const T x ){ return std::exp(x - max); });\n        const T sum = _reduce(n, y, m, std::plus<T>());\n        _binary(n, y, m, &sum, 0, y, m, std::divides<T>());\n    }\n\n    template<typename T>\n    void softmax( tensor_view<const T> input, tensor_view<T> output, const size_t axis )\n    {\n        const size_t batch = nd_volume(axis, input.shape);\n        const size_t channels = input.shape[axis];\n        const size_t size = nd_volume(input.rank - axis - 1, input.shape + axis + 1);\n        const size_t volume = channels * size;\n    
    \n        for ( size_t i = 0; i < batch; ++i, input.data += volume, output.data += volume )\n        {\n            for ( size_t j = 0; j < size; ++j )\n            {\n                _softmax(channels, size, input.data + j, output.data + j);\n            }\n        }\n    }\n\n    template<typename I, typename T, typename Op>\n    I _arg_reduce( const size_t n, const T* x, const size_t dx, const Op& op )\n    {\n        I idx = 0;\n        T val = *x;\n        x += dx;\n        for ( size_t i = 1; i < n; ++i, x += dx )\n        {\n            if ( op(*x, val) )\n            {\n                val = *x;\n                idx = (I)i;\n            }\n        }\n        return idx;\n    }\n\n    template<typename T, typename I, typename Op>\n    void arg_reduce( tensor_view<const T> input, tensor_view<I> output, const size_t axis, const Op& op )\n    {\n        const size_t batch = nd_volume(axis, input.shape);\n        const size_t channels = input.shape[axis];\n        const size_t size = nd_volume(input.rank - axis - 1, input.shape + axis + 1);\n        const size_t volume = channels * size;\n        \n        for ( size_t i = 0; i < batch; ++i, input.data += volume, output.data += size )\n        {\n            for ( size_t j = 0; j < size; ++j )\n            {\n                output.data[j] = _arg_reduce<I>(channels, input.data + j, size, op);\n            }\n        }\n    }\n\n\n    template<size_t D, typename T>\n    void _transpose( tensor_view<const T> x, tensor_view<T> y, const size_t perm[] )\n    {\n        int yi[D];\n        nd_loop<D,int>(x.shape, [&]( const int xi[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                yi[k] = xi[perm[k]];\n            });\n            at<D>(y,yi) = at<D>(x,xi);\n        });\n    }\n\n    template<typename T>\n    void transpose( tensor_view<const T> x, tensor_view<T> y, const size_t perm[] )\n    {\n        static decltype(&_transpose<1,T>) funcs[] =\n        {\n            
_transpose<1,T>,\n            _transpose<2,T>,\n            _transpose<3,T>,\n            _transpose<4,T>,\n            _transpose<5,T>,\n        };\n        funcs[x.rank - 1](x, y, perm);\n    }\n    \n    template<bool Singular, typename T>\n    void concat( const size_t n, tensor_view<const T> x[], tensor_view<T> y, const size_t axis )\n    {\n        const size_t b = nd_volume(axis, y.shape);\n        const size_t m = nd_volume(y.rank - axis - 1, y.shape + axis + 1);\n        \n        for ( size_t i = 0; i < b; ++i )\n        {\n            for ( size_t j = 0; j < n; ++j )\n            {\n                const size_t size = Singular ? m : x[j].shape[axis] * m;\n                std::copy_n(x[j].data, size, y.data);\n                x[j].data += size;\n                y.data += size;\n            }\n        }\n    }\n\n    template<bool Singular, typename T>\n    void split( const size_t n, tensor_view<const T> x, tensor_view<T> y[], const size_t axis )\n    {\n        const size_t b = nd_volume(axis, x.shape);\n        const size_t m = nd_volume(x.rank - axis - 1, x.shape + axis + 1);\n        \n        for ( size_t i = 0; i < b; ++i )\n        {\n            for ( size_t j = 0; j < n; ++j )\n            {\n                const size_t size = Singular ? 
m : y[j].shape[axis] * m;\n                std::copy_n(x.data, size, y[j].data);\n                x.data += size;\n                y[j].data += size;\n            }\n        }\n    }\n\n    template<size_t D, typename T>\n    void _tile( tensor_view<const T> input, tensor_view<T> output )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                input_index[k] = output_index[k] % input.shape[k];\n            });\n            at<D>(output, output_index) = at<D>(input, input_index);\n        });\n    }\n\n    template<typename T>\n    void tile( tensor_view<const T> input, tensor_view<T> output )\n    {\n        static decltype(&_tile<1,T>) funcs[] =\n        {\n            _tile<1,T>,\n            _tile<2,T>,\n            _tile<3,T>,\n            _tile<4,T>,\n            _tile<5,T>,\n        };\n        return funcs[input.rank - 1](input, output);\n    }\n\n    template<size_t D, typename T>\n    void _pad_constant( tensor_view<const T> input, tensor_view<T> output, const int padding[], const T value )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                input_index[k] = output_index[k] - padding[k];\n            });\n            \n            const bool valid = all_n<D>([&]( const size_t k ){ return input_index[k] >= 0 && input_index[k] < input.shape[k]; });\n            at<D>(output, output_index) = valid ? 
at<D>(input, input_index) : (T)value;\n        });\n    }\n\n    template<size_t D, typename T>\n    void _pad_replicate( tensor_view<const T> input, tensor_view<T> output, const int padding[] )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                input_index[k] = std::min(std::max(output_index[k] - padding[k], 0), input.shape[k] - 1);\n            });\n            \n            at<D>(output, output_index) = at<D>(input, input_index);\n        });\n    }\n\n    template<size_t D, typename T>\n    void _pad_reflect( tensor_view<const T> input, tensor_view<T> output, const int padding[] )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                auto index = output_index[k] - padding[k];\n                if ( index < 0 )\n                {\n                    input_index[k] = -index;\n                }\n                else if ( index >= input.shape[k] )\n                {\n                    input_index[k] = 2 * (input.shape[k] - 1) - index;\n                }\n                else\n                {\n                    input_index[k] = index;\n                }\n            });\n            \n            at<D>(output, output_index) = at<D>(input, input_index);\n        });\n    }\n\n    template<size_t D, typename T>\n    void _pad_reflect_even( tensor_view<const T> input, tensor_view<T> output, const int padding[] )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                auto index = output_index[k] - padding[k];\n                if ( index < 0 )\n                {\n                    input_index[k] = -index - 1;\n                }\n                else if 
( index >= input.shape[k] )\n                {\n                    input_index[k] = 2 * (input.shape[k] - 1) - index + 1;\n                }\n                else\n                {\n                    input_index[k] = index;\n                }\n            });\n            \n            at<D>(output, output_index) = at<D>(input, input_index);\n        });\n    }\n    \n    template<typename T>\n    void pad_constant( tensor_view<const T> input, tensor_view<T> output, const int padding[], const T value )\n    {\n        static decltype(&_pad_constant<1,T>) funcs[] =\n        {\n            _pad_constant<1,T>,\n            _pad_constant<2,T>,\n            _pad_constant<3,T>,\n            _pad_constant<4,T>,\n            _pad_constant<5,T>,\n        };\n        return funcs[input.rank - 1](input, output, padding, value);\n    }\n\n    template<typename T>\n    void pad_replicate( tensor_view<const T> input, tensor_view<T> output, const int padding[] )\n    {\n        static decltype(&_pad_replicate<1,T>) funcs[] =\n        {\n            _pad_replicate<1,T>,\n            _pad_replicate<2,T>,\n            _pad_replicate<3,T>,\n            _pad_replicate<4,T>,\n            _pad_replicate<5,T>,\n        };\n        return funcs[input.rank - 1](input, output, padding);\n    }\n\n    template<typename T>\n    void pad_reflect( tensor_view<const T> input, tensor_view<T> output, const int padding[] )\n    {\n        static decltype(&_pad_reflect<1,T>) funcs[] =\n        {\n            _pad_reflect<1,T>,\n            _pad_reflect<2,T>,\n            _pad_reflect<3,T>,\n            _pad_reflect<4,T>,\n            _pad_reflect<5,T>,\n        };\n        return funcs[input.rank - 1](input, output, padding);\n    }\n\n    template<typename T>\n    void pad_reflect_even( tensor_view<const T> input, tensor_view<T> output, const int padding[] )\n    {\n        static decltype(&_pad_reflect_even<1,T>) funcs[] =\n        {\n            _pad_reflect_even<1,T>,\n            
_pad_reflect_even<2,T>,\n            _pad_reflect_even<3,T>,\n            _pad_reflect_even<4,T>,\n            _pad_reflect_even<5,T>,\n        };\n        return funcs[input.rank - 1](input, output, padding);\n    }\n\n    template<size_t D, typename T>\n    void _slice( tensor_view<const T> input, tensor_view<T> output, const int offset[], const int stride[] )\n    {\n        int input_index[D];\n        nd_loop<D,int>(output.shape, [&]( const int output_index[] )\n        {\n            for_n<D>([&]( const size_t k )\n            {\n                input_index[k] = offset[k] + stride[k] * output_index[k];\n            });\n            \n            at<D>(output, output_index) = at<D>(input, input_index);\n        });\n    }\n\n    template<typename T>\n    void slice( tensor_view<const T> input, tensor_view<T> output, const int offset[], const int stride[] )\n    {\n        static decltype(&_slice<1,T>) funcs[] =\n        {\n            _slice<1,T>,\n            _slice<2,T>,\n            _slice<3,T>,\n            _slice<4,T>,\n            _slice<5,T>,\n        };\n        return funcs[input.rank - 1](input, output, offset, stride);\n    }\n    \n    template<typename T, typename I>\n    void _gather( const T* input, const I* indices, T* output, const size_t b, const size_t d, const size_t n, const size_t m )\n    {\n        for ( size_t k = 0; k < b; ++k, input += d * m )\n        {\n            for ( size_t i = 0; i < n; ++i, output += m )\n            {\n                std::copy_n(input + indices[i] * m, m, output);\n            }\n        }\n    }\n    \n    template<typename T, typename I>\n    void gather( tensor_view<const T> input, tensor_view<const I> indices, tensor_view<T> output, const size_t axis )\n    {\n        const size_t b = nd_volume(axis, input.shape);\n        const size_t d = input.shape[axis];\n        const size_t n = nd_volume(indices.rank, indices.shape);\n        const size_t m = nd_volume(input.rank - axis - 1, input.shape + axis + 
1);\n        \n        _gather(input.data, indices.data, output.data, b, d, n, m);\n    }\n\n}}   // namespace nnef::rt\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/include/nnef.h",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef _NNEF_H_\n#define _NNEF_H_\n\n#include <map>\n#include <set>\n#include <string>\n#include <iostream>\n#include <functional>\n#include <algorithm>\n#include \"nnef/common/value.h\"\n\n\nnamespace nnef\n{\n    \n    /*\n     * Ordered key-value pairs of arbitrary typed parameter values used for operation attributes\n     */\n    struct ValueDict : public std::vector<std::pair<std::string,Value>>\n    {\n        typedef std::pair<std::string,Value> item_type;\n        \n        bool contains( const std::string& key ) const\n        {\n            return std::find_if(this->begin(), this->end(), [&]( const item_type& item ){ return item.first == key; }) != this->end();\n        }\n        \n        const Value& get( const std::string& key, const Value& defult = Value::none() ) const\n        {\n            auto it = std::find_if(this->begin(), this->end(), [&]( const item_type& item ){ return item.first == key; });\n            return it != this->end() ? 
it->second : defult;\n        }\n        \n        const Value& get( const size_t idx ) const\n        {\n            return std::vector<item_type>::at(idx).second;\n        }\n    };\n    \n\n    /*\n     * Tensor data-structure used both for activation and variable tensors\n     */\n    struct Tensor\n    {\n        std::string name;           // name of the tensor in the graph\n        std::string dtype;          // data-type of the tensor (such as \"scalar\", \"integer\", \"logical\")\n        std::vector<int> shape;     // shape of the tensor, filled if shape propagation is in effect\n        std::vector<char> data;     // byte array of the data of the tensor, filled in if tensor is a variable\n        ValueDict quantization;     // quantization algorithm info for both activation and variable tensors\n                                    // used keys: \"op-name\" (string), attribute names depending on op-name\n    };\n\n    \n    /*\n     * Operation data-structure to represent a single operation in the graph\n     */\n    struct Operation\n    {\n        std::string name;           // name (kind) of the operation\n        std::string dtype;          // data-type in case the operation is generic (such as \"scalar\", \"integer\", \"logical\")\n        ValueDict attribs;          // ordered dictionary of non-tensor attributes of the operation (declaration order)\n        ValueDict inputs;           // ordered dictionary of tensor inputs of the operation (may also contain constants)\n        ValueDict outputs;          // ordered dictionary of tensor outputs of the operation\n    };\n\n    \n    /*\n     * Graph data-structure, list of tensors and operations\n     */\n    struct Graph\n    {\n        std::string name;                           // name of the graph\n        std::map<std::string,Tensor> tensors;       // list of tensors in the graph\n        std::vector<Operation> operations;          // list of operations in the graph, in topological order\n        
std::vector<std::string> inputs;            // list of input tensor ids\n        std::vector<std::string> outputs;           // list of output tensor ids\n    };\n\n\n    /*\n     * Parse the NNEF graph from file\n     *\n     * @param graph_fn: name of the graph file\n     * @param quant_fn: name of the quantization file\n     * @param graph: the graph data structure to fill in\n     * @param error: the string to store the error message if any\n     * @param stdlib: the implementation of standard operations to use\n     * @param lowered: a list of operations to be lowered\n     *\n     * @return true if there were no parsing errors, false otherwise\n     */\n    bool parse_file( const std::string& graph_fn, const std::string& quant_fn, Graph& graph, std::string& error,\n                    const std::string& stdlib = \"\", const std::set<std::string>& lowered = {} ) noexcept;\n    \n    /*\n     * Parse the NNEF graph from string\n     *\n     * @param graph_str: the graph string\n     * @param quant_str: the quantization string\n     * @param graph: the graph data structure to fill in\n     * @param error: the string to store the error message if any\n     * @param stdlib: the implementation of standard operations to use\n     * @param lowered: a list of operations to be lowered\n     *\n     * @return true if there were no parsing errors, false otherwise\n     */\n    bool parse_string( const std::string& graph_str, const std::string& quant_str, Graph& graph, std::string& error,\n                      const std::string& stdlib = \"\", const std::set<std::string>& lowered = {} ) noexcept;\n    \n    /*\n     * Read/write a single tensor from/to binary stream\n     *\n     * @param is/os: the stream to read from/write to\n     * @param tensor: the tensor object to fill into/from\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    bool read_tensor( std::istream& is, 
Tensor& tensor, std::string& error ) noexcept;\n    bool write_tensor( std::ostream& os, const Tensor& tensor, std::string& error ) noexcept;\n\n    /*\n     * Read/write a single tensor from/to a binary file\n     *\n     * @param filename: the name of the file to read from/write to\n     * @param tensor: the tensor object to fill into/from\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    bool read_tensor( const std::string& filename, Tensor& tensor, std::string& error ) noexcept;\n    bool write_tensor( const std::string& filename, const Tensor& tensor, std::string& error ) noexcept;\n    \n    /*\n     * Load variables/whole model from set of files in a folder\n     *\n     * @param path: the path to the top level NNEF model folder\n     * @param graph: the graph object to load tensors into\n     * @param error: the string to store the error message if any\n     * @param stdlib: the implementation of standard operations to use\n     * @param lowered: a list of operations to be lowered\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    bool load_variables( const std::string& path, Graph& graph, std::string& error ) noexcept;\n    bool load_graph( const std::string& path, Graph& graph, std::string& error,\n                    const std::string& stdlib = \"\", const std::set<std::string>& lowered = {} ) noexcept;\n    \n    \n    /*\n     * Shape propagation function type\n     */\n    typedef std::function<void( const Operation& op, Graph& graph )> ShapeFunc;\n    \n    /*\n     * Perform shape inference on the graph\n     *\n     * @param graph: the graph object\n     * @param error: the string to store the error message if any\n     * @param custom_shapes: shape inference functions for custom operations\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    bool infer_shapes( Graph& graph, 
std::string& error, const std::map<std::string,std::vector<int>>& input_shapes = {},\n                      const std::map<std::string,ShapeFunc>& custom_shapes = {} ) noexcept;\n\n    /*\n     * Allocate tensor buffers in the graph\n     *\n     * @param graph: the graph object\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    bool allocate_buffers( Graph& graph, std::string& error ) noexcept;\n\n    /*\n     * Execute a graph\n     *\n     * @param graph: the graph object\n     * @param error: the string to store the error message if any\n     *\n     * @return true if there were no errors, false otherwise\n     */\n    bool execute( Graph& graph, std::string& error ) noexcept;\n\n}   // namespace nnef\n\n\n#endif\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/infer.cpp",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"nnef.h\"\n\n#include <string>\n#include <iostream>\n#include <fstream>\n#include <numeric>\n#include <chrono>\n#include <cmath>\n#ifdef _WIN32\n#include <io.h>\n#else\n#include <unistd.h>\n#endif\n\n\nconst std::set<std::string> lowered =\n{\n    \"separable_conv\",\n    \"separable_deconv\",\n    \"rms_pool\",\n    \"local_response_normalization\",\n    \"local_mean_normalization\",\n    \"local_variance_normalization\",\n    \"local_contrast_normalization\",\n    \"l1_normalization\",\n    \"l2_normalization\",\n    \"batch_normalization\",\n    \"area_downsample\",\n    \"nearest_downsample\",\n    \"nearest_upsample\",\n    \"linear_quantize\",\n    \"logarithmic_quantize\",\n    \"leaky_relu\",\n    \"prelu\",\n    \"clamp\",\n};\n\nstd::string read_file( const char* fn )\n{\n    std::ifstream is(fn);\n    if ( !is )\n    {\n        throw std::runtime_error(\"file not found: \" + std::string(fn));\n    }\n    \n    return std::string((std::istreambuf_iterator<char>(is)), std::istreambuf_iterator<char>());\n}\n\nbool read_inputs_from_cin( nnef::Graph& graph, std::string& error )\n{\n    for ( auto& input : graph.inputs )\n    {\n        auto& tensor = graph.tensors.at(input);\n        if ( !nnef::read_tensor(std::cin, tensor, error) )\n        {\n            return false;\n        }\n    }\n    return 
true;\n}\n\nbool read_inputs_from_file( nnef::Graph& graph, const std::vector<std::string>& inputs, std::string& error )\n{\n    if ( inputs.size() != graph.inputs.size() )\n    {\n        error = \"number of input files (\" + std::to_string(inputs.size()) + \") does not match number of graph inputs (\" + std::to_string(graph.inputs.size()) + \")\";\n        return false;\n    }\n    size_t idx = 0;\n    for ( auto& input : graph.inputs )\n    {\n        auto& tensor = graph.tensors.at(input);\n        if ( !nnef::read_tensor(inputs[idx++], tensor, error) )\n        {\n            return false;\n        }\n    }\n    return true;\n}\n\nbool write_output_to_cout( const nnef::Graph& graph, std::string& error )\n{\n    for ( auto& output : graph.outputs )\n    {\n        auto& tensor = graph.tensors.at(output);\n        if ( !nnef::write_tensor(std::cout, tensor, error) )\n        {\n            return false;\n        }\n    }\n    return true;\n}\n\nbool write_output_to_file( const nnef::Graph& graph, const std::vector<std::string>& outputs, std::string& error )\n{\n    if ( outputs.size() != graph.outputs.size() )\n    {\n        error = \"number of output files (\" + std::to_string(outputs.size()) + \") does not match number of graph outputs (\" + std::to_string(graph.outputs.size()) + \")\";\n        return false;\n    }\n    size_t idx = 0;\n    for ( auto& output : graph.outputs )\n    {\n        auto& tensor = graph.tensors.at(output);\n        if ( !nnef::write_tensor(outputs[idx++], tensor, error) )\n        {\n            return false;\n        }\n    }\n    return true;\n}\n\ntemplate<typename T>\nT sqr( const T x )\n{\n    return x * x;\n}\n\ntemplate<typename T>\nT relative_difference( const size_t n, const T* ref, const T* dat )\n{\n    T diff = 0;\n    T range = 0;\n    for ( size_t i = 0; i < n; ++i )\n    {\n        diff += sqr(ref[i] - dat[i]);\n        range += sqr(ref[i]);\n    }\n    // avoid NaN when the reference tensor is all zeros\n    return range > 0 ? std::sqrt(diff / range) : std::sqrt(diff);\n}\n\nstd::ostream& operator<<( std::ostream& os, const std::vector<int>& v )\n{\n    os << '[';\n    for ( size_t i = 0; i < v.size(); ++i )\n    {\n        if ( i )\n        {\n            os << ',';\n        }\n        os << v[i];\n    }\n    os << ']';\n    return os;\n}\n\nint volume( const std::vector<int>& v )\n{\n    return std::accumulate(v.begin(), v.end(), 1, std::multiplies<int>());\n}\n\n\nint main( int argc, const char * argv[] )\n{\n    if ( argc < 2 )\n    {\n        std::cerr << \"Input file name must be provided\" << std::endl;\n  
      return -1;\n    }\n    \n    const std::string path = argv[1];\n    std::string stdlib;\n    std::vector<std::string> inputs;\n    std::vector<std::string> outputs;\n    bool compare = false;\n    \n    for ( int i = 2; i < argc; ++i )\n    {\n        const std::string arg = argv[i];\n        if ( arg == \"--stdlib\" )\n        {\n            if ( ++i == argc )\n            {\n                std::cerr << \"Stdlib file name must be provided after --stdlib; ignoring option\" << std::endl;\n                continue;\n            }\n            try\n            {\n                stdlib = read_file(argv[i]);\n            }\n            catch ( const std::runtime_error& e )\n            {\n                std::cerr << e.what() << std::endl;\n            }\n        }\n        else if ( arg == \"--input\" )\n        {\n            if ( i + 1 == argc )\n            {\n                std::cerr << \"Input file name(s) must be provided after --input; ignoring option\" << std::endl;\n            }\n            while ( i + 1 < argc && *argv[i+1] != '-' )\n            {\n                inputs.push_back(argv[++i]);\n            }\n        }\n        else if ( arg == \"--output\" )\n        {\n            if ( i + 1 == argc )\n            {\n                std::cerr << \"Output file name(s) must be provided after --output; ignoring option\" << std::endl;\n            }\n            while ( i + 1 < argc && *argv[i+1] != '-' )\n            {\n                outputs.push_back(argv[++i]);\n            }\n        }\n        else if ( arg == \"--compare\" )\n        {\n            compare = true;\n        }\n        else\n        {\n            std::cerr << \"Unrecognized option: \" << argv[i] << \"; ignoring\" << std::endl;\n        }\n    }\n    \n    nnef::Graph graph;\n    std::string error;\n    \n    if ( !nnef::load_graph(path, graph, error, stdlib, lowered) )\n    {\n        std::cerr << error << std::endl;\n        return -1;\n    }\n    \n    std::map<std::string,std::vector<int>> input_shapes;\n 
   if ( !inputs.empty() || !isatty(STDIN_FILENO) )\n    {\n        bool read = !inputs.empty() ? read_inputs_from_file(graph, inputs, error) : read_inputs_from_cin(graph, error);\n        if ( !read )\n        {\n            std::cerr << error << std::endl;\n            return -1;\n        }\n        for ( auto& input : graph.inputs )\n        {\n            input_shapes.emplace(input, graph.tensors.at(input).shape);\n        }\n    }\n    \n    if ( !nnef::infer_shapes(graph, error, input_shapes) )\n    {\n        std::cerr << error << std::endl;\n        return -1;\n    }\n    \n    if ( !nnef::allocate_buffers(graph, error) )\n    {\n        std::cerr << error << std::endl;\n        return -1;\n    }\n    \n    std::cerr << \"Executing model: \" << path << std::endl;\n    \n    if ( !nnef::execute(graph, error) )\n    {\n        std::cerr << error << std::endl;\n        return -1;\n    }\n    \n    if ( compare && !outputs.empty() )\n    {\n        for ( size_t i = 0; i < graph.outputs.size() && i < outputs.size(); ++i )\n        {\n            const nnef::Tensor& output = graph.tensors.at(graph.outputs[i]);\n            \n            nnef::Tensor tensor;\n            if ( !nnef::read_tensor(outputs[i], tensor, error) )\n            {\n                std::cerr << error << std::endl;\n                return -1;\n            }\n            \n            if ( output.dtype != tensor.dtype )\n            {\n                std::cout << \"data-type \" << output.dtype << \" of '\" << graph.outputs[i] << \"' does not match reference data-type \" << tensor.dtype << std::endl;\n            }\n            else if ( output.shape != tensor.shape )\n            {\n                std::cout << \"shape \" << output.shape << \" of '\" << graph.outputs[i] << \"' does not match reference shape \" << tensor.shape << std::endl;\n            }\n            else\n            {\n                if ( tensor.dtype == \"scalar\" )\n                {\n                    auto diff = 
relative_difference(volume(tensor.shape), (const float*)tensor.data.data(),\n                                                    (const float*)output.data.data());\n                    std::cout << \"'\" << graph.outputs[i] << \"' diff = \" << diff << std::endl;\n                }\n                else\n                {\n                    auto matches = output.data == tensor.data;\n                    std::cout << \"'\" << graph.outputs[i] << \"' \" << (matches ? \"matches\" : \"does not match\") << std::endl;\n                }\n            }\n        }\n    }\n    else if ( !outputs.empty() || !isatty(STDOUT_FILENO) )\n    {\n        bool write = !outputs.empty() ? write_output_to_file(graph, outputs, error) : write_output_to_cout(graph, error);\n        if ( !write )\n        {\n            std::cerr << error << std::endl;\n            return -1;\n        }\n    }\n    \n    return 0;\n}\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/sample.cpp",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"nnef.h\"\n#include <string>\n#include <iostream>\n\n\nint main( int argc, const char * argv[] )\n{\n    if ( argc < 2 )\n    {\n        std::cerr << \"Input file name must be provided\" << std::endl;\n        return -1;\n    }\n    \n    const std::string path = argv[1];\n    \n    nnef::Graph graph;\n    std::string error;\n    \n    if ( !nnef::load_graph(path, graph, error, \"\") )\n    {\n        std::cerr << error << std::endl;\n        return -1;\n    }\n    \n    if ( !nnef::infer_shapes(graph, error) )\n    {\n        std::cerr << error << std::endl;\n        return -1;\n    }\n    \n    std::cerr << \"Successfully parsed file: \" << path << std::endl;\n    \n    return 0;\n}\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/src/cnnef.cpp",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"cnnef.h\"\n#include \"nnef.h\"\n#include <cstring>\n\n\nusing namespace nnef;\n\n\nnnef_graph_t nnef_graph_load( const char* path, char *perror )\n{\n    std::string error;\n    Graph *nnef_graph = new Graph();\n    bool success = load_graph(path, *nnef_graph, error);\n    if ( !success )\n    {\n        if ( perror != NULL )\n        {\n            strncpy(perror, error.c_str(), error.length() + 1);\n        }\n\n        return NULL;\n    }\n    return nnef_graph;\n}\n\nnnef_graph_t nnef_graph_copy( nnef_graph_t graph )\n{\n    const Graph *nnef_graph = (const Graph*)graph;\n    return new Graph(*nnef_graph);\n}\n\nvoid nnef_graph_release( nnef_graph_t graph )\n{\n    Graph *nnef_graph = (Graph *)graph;\n    if ( nnef_graph )\n    {\n        delete nnef_graph;\n    }\n}\n\nint nnef_graph_infer_shapes( nnef_graph_t graph, char *perror )\n{\n    std::string error;\n\n    Graph* nnef_graph = (Graph*)graph;\n\n    if ( !infer_shapes(*nnef_graph, error) )\n    {\n        if ( perror != NULL )\n        {\n            strncpy(perror, error.c_str(), error.length() + 1);\n        }\n        return 0;\n    }\n    return 1;\n}\n\nint nnef_graph_allocate_buffers( nnef_graph_t graph, char *perror )\n{\n    std::string error;\n    Graph *nnef_graph = (Graph *)graph;\n\n    if ( nnef_graph == NULL )\n    {\n        return 0;\n    }\n 
   if ( !nnef::allocate_buffers(*nnef_graph, error) )\n    {\n        if ( perror != NULL )\n        {\n            strncpy(perror, error.c_str(), error.length() + 1);\n        }\n        return 0;\n    }\n    return 1;\n}\n\nint nnef_graph_execute( nnef_graph_t graph, char *perror )\n{\n    Graph *nnef_graph = (Graph *)graph;\n    if ( nnef_graph == NULL )\n    {\n        return 0;\n    }\n    \n    std::string error;\n    if ( !nnef::execute(*nnef_graph, error) )\n    {\n        if ( perror != NULL )\n        {\n            strncpy(perror, error.c_str(), error.length() + 1);\n        }\n        return 0;\n    }\n    return 1;\n}\n\nsize_t nnef_graph_input_names( nnef_graph_t graph, const char** inputs )\n{\n    const Graph* nnef_graph = (const Graph*)graph;\n\n    if ( inputs != NULL )\n    {\n        for ( size_t i = 0; i < nnef_graph->inputs.size(); ++i )\n        {\n            inputs[i] = nnef_graph->inputs[i].c_str();\n        }\n    }\n    return nnef_graph->inputs.size();\n}\n\nsize_t nnef_graph_output_names( nnef_graph_t graph, const char** outputs )\n{\n    const Graph* nnef_graph = (const Graph*)graph;\n\n    if ( outputs != NULL )\n    {\n        for ( size_t i = 0; i < nnef_graph->outputs.size(); ++i )\n        {\n            outputs[i] = nnef_graph->outputs[i].c_str();\n        }\n    }\n    return nnef_graph->outputs.size();\n}\n\nnnef_tensor_t nnef_graph_find_tensor( nnef_graph_t graph, const char* tensor_name )\n{\n    const Graph *nnef_graph = (const Graph*)graph;\n    if ( nnef_graph == NULL )\n    {\n        return NULL;\n    }\n    \n    std::map<std::string,Tensor>::const_iterator it = nnef_graph->tensors.find(tensor_name);\n    return it != nnef_graph->tensors.end() ? 
(nnef_tensor_t)&it->second : NULL;\n}\n\nconst char* nnef_graph_name( nnef_graph_t graph )\n{\n    const Graph *nnef_graph = (const Graph*)graph;\n    return nnef_graph->name.c_str();\n}\n\n\n\nnnef_tensor_t nnef_tensor_create(void)\n{\n    return new nnef::Tensor();\n}\n\nvoid nnef_tensor_release( nnef_tensor_t tensor )\n{\n    Tensor* nnef_tensor = (Tensor*)tensor;\n    if ( nnef_tensor != NULL )\n    {\n        delete nnef_tensor;\n    }\n}\n\nconst char* nnef_tensor_name( nnef_tensor_t tensor )\n{\n    const Tensor *nnef_tensor = (const Tensor*)tensor;\n    return nnef_tensor->name.c_str();\n}\n\nconst char* nnef_tensor_dtype( nnef_tensor_t tensor )\n{\n    const Tensor *nnef_tensor = (const Tensor*)tensor;\n    return nnef_tensor->dtype.c_str();\n}\n\nsize_t nnef_tensor_rank( nnef_tensor_t tensor )\n{\n    const Tensor *nnef_tensor = (const Tensor*)tensor;\n    return nnef_tensor->shape.size();\n}\n\nconst int* nnef_tensor_dims( nnef_tensor_t tensor )\n{\n    const Tensor *nnef_tensor = (const Tensor*)tensor;\n    return nnef_tensor->shape.data();\n}\n\nvoid* nnef_tensor_data( nnef_tensor_t tensor )\n{\n    const Tensor* nnef_tensor = (const Tensor*)tensor;\n    return (void*)nnef_tensor->data.data();\n}\n\nint nnef_tensor_read( const char* path, nnef_tensor_t tensor, char *perror )\n{\n    Tensor *nnef_tensor = (Tensor *)tensor;\n    if ( nnef_tensor == NULL )\n    {\n        return 0;\n    }\n    \n    std::string error;\n    if ( !read_tensor(path, *nnef_tensor, error) )\n    {\n        if ( perror != NULL )\n        {\n            strncpy(perror, error.c_str(), error.length() + 1);\n        }\n        return 0;\n    }\n    return 1;\n}\n\nint nnef_tensor_write( const char* path, nnef_tensor_t tensor, char *perror )\n{\n    const Tensor *nnef_tensor = (const Tensor *)tensor;\n    if ( nnef_tensor == NULL )\n    {\n        return 0;\n    }\n    \n    std::string error;\n    if ( !write_tensor(path, *nnef_tensor, error) )\n    {\n        if ( perror != NULL 
)\n        {\n            strncpy(perror, error.c_str(), error.length() + 1);\n        }\n        return 0;\n    }\n    return 1;\n}\n"
  },
  {
    "path": "nnef-pyproject/nnef/cpp/src/nnef.cpp",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include <fstream>\n#include <sstream>\n#include <iterator>\n\n#include \"nnef.h\"\n#include \"nnef/comp/comp_parser.h\"\n#include \"nnef/flat/quant_parser.h\"\n#include \"nnef/common/binary.h\"\n#include \"nnef/common/shapes.h\"\n#include \"nnef/runtime/execution.h\"\n\n\nnamespace nnef\n{\n    \n    struct ParseCallback : public Parser::Callback\n    {\n        Graph& graph;\n        \n        std::istream& qis;\n        const std::string& qfn;\n        Dictionary<Dictionary<Value>> quantizations;\n        \n        ParseCallback( Graph& graph, std::istream& qis, const std::string& qfn )\n        : graph(graph), qis(qis), qfn(qfn)\n        {\n        }\n        \n        virtual void beginGraph( const Prototype& proto, const Dictionary<Prototype>& fragments )\n        {\n            graph.name = proto.name();\n            graph.operations.clear();\n            graph.tensors.clear();\n            \n            graph.inputs.resize(proto.paramCount());\n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                graph.inputs[i] = proto.param(i).name();\n            }\n            \n            graph.outputs.resize(proto.resultCount());\n            for ( size_t i = 0; i < proto.resultCount(); ++i )\n            {\n                graph.outputs[i] = proto.result(i).name();\n            }\n            \n            if ( 
qis )\n            {\n                quantizations = nnef::QuantParser::parse(qis, qfn.c_str(), fragments);\n            }\n        }\n        \n        virtual void endGraph( const Prototype& proto, const Dictionary<Typename>& dtypes )\n        {\n            for ( auto& it : dtypes )\n            {\n                Tensor tensor;\n                tensor.name = it.first;\n                tensor.dtype = toString(it.second);\n                if ( quantizations.count(it.first) )\n                {\n                    for ( auto& item : quantizations.at(it.first) )\n                    {\n                        tensor.quantization.push_back(item);\n                    }\n                }\n                \n                graph.tensors.emplace(it.first, std::move(tensor));\n            }\n        }\n        \n        virtual void operation( const Prototype& proto, const Dictionary<Value>& args, const Dictionary<Typename>& dtypes )\n        {\n            Operation operation;\n            operation.name = proto.name();\n            operation.dtype = args.count(\"?\") ? 
args.at(\"?\").string() : std::string();\n            \n            for ( size_t i = 0; i < proto.paramCount(); ++i )\n            {\n                auto& param = proto.param(i);\n                auto& value = args.at(param.name());\n                if ( param.type()->isAttribute() )\n                {\n                    operation.attribs.emplace_back(param.name(), value);\n                }\n                else\n                {\n                    operation.inputs.emplace_back(param.name(), value);\n                }\n            }\n            for ( size_t i = 0; i < proto.resultCount(); ++i )\n            {\n                auto& result = proto.result(i);\n                auto& value = args.at(result.name());\n                operation.outputs.emplace_back(result.name(), value);\n            }\n            \n            graph.operations.push_back(std::move(operation));\n        }\n    };\n    \n    std::string format_error_position( const Error::Position& pos )\n    {\n        return \"'\" + std::string(pos.filename) + \"' [\" + std::to_string(pos.line) + \":\" + std::to_string(pos.column) + \"]\";\n    }\n    \n    bool parse( std::istream& graph_is, const std::string& graph_fn, std::istream& quant_is, const std::string& quant_fn,\n               Graph& graph, std::string& error, const std::string& stdlib, const std::set<std::string>& lowered ) noexcept\n    {\n        ParseCallback callback(graph, quant_is, quant_fn);\n        CompParser parser(stdlib, lowered);\n        \n        try\n        {\n            parser.parse(graph_is, graph_fn.c_str(), callback);\n            return true;\n        }\n        catch ( const nnef::Error& e )\n        {\n            error = \"Parse error in file \" + format_error_position(e.position()) + \" \" + e.what();\n            \n            auto origin = e.position().origin;\n            while ( origin )\n            {\n                error += \"\\n... 
evaluated from file \" + format_error_position(*origin);\n                origin = origin->origin;\n            }\n            return false;\n        }\n    }\n    \n    bool parse_file( const std::string& graph_fn, const std::string& quant_fn, Graph& graph, std::string& error,\n                     const std::string& stdlib, const std::set<std::string>& lowered ) noexcept\n    {\n        std::ifstream graph_is(graph_fn);\n        if ( !graph_is )\n        {\n            error = \"Could not open graph file: \" + std::string(graph_fn);\n            return false;\n        }\n        \n        std::ifstream quant_is;\n        if ( !quant_fn.empty() )\n        {\n            quant_is.open(quant_fn);\n            if ( !quant_is )\n            {\n                error = \"Could not open quantization file: \" + std::string(quant_fn);\n                return false;\n            }\n        }\n        \n        return parse(graph_is, graph_fn, quant_is, quant_fn, graph, error, stdlib, lowered);\n    }\n    \n    bool parse_string( const std::string& graph_str, const std::string& quant_str, Graph& graph, std::string& error,\n                      const std::string& stdlib, const std::set<std::string>& lowered ) noexcept\n    {\n        std::stringstream graph_is(graph_str);\n        std::stringstream quant_is;\n        if ( !quant_str.empty() )\n        {\n            quant_is.str(quant_str);\n        }\n        return parse(graph_is, \"input\", quant_is, \"quantization\", graph, error, stdlib, lowered);\n    }\n\n    size_t item_bytes( const std::string& dtype )\n    {\n        return dtype == \"scalar\" ? sizeof(float) : dtype == \"integer\" ? sizeof(int) : dtype == \"logical\" ? sizeof(bool) : 0;\n    }\n    \n    size_t item_bits( const std::string& dtype )\n    {\n        return dtype == \"scalar\" ? 32 : dtype == \"integer\" ? sizeof(int) * 8 : dtype == \"logical\" ? 
1 : 0;\n    }\n\n    bool read_tensor( std::istream& is, Tensor& tensor, std::string& error ) noexcept\n    {\n        TensorHeader header;\n        is.read((char*)&header, sizeof(header));\n        if ( header.item_type == TensorHeader::Uint && header.reserved[0] != 0 )\n        {\n            header.item_type = TensorHeader::Int;\n        }\n        \n        try\n        {\n            validate_tensor_header(header);\n            tensor.shape.assign(header.extents, header.extents + header.rank);\n        }\n        catch ( const nnef::Error& e )\n        {\n            error = \"Invalid tensor header: \" + std::string(e.what());\n            return false;\n        }\n        \n        std::vector<char> bytes(header.data_length);\n        is.read(bytes.data(), bytes.size());\n        \n        if ( !is )\n        {\n            error = \"Failed to read tensor data\";\n            return false;\n        }\n\n        const size_t count = volume_of(tensor.shape);\n        \n        if ( header.item_type == TensorHeader::Float )\n        {\n            tensor.dtype = \"scalar\";\n            tensor.data.resize(count * sizeof(float));\n            from_bytes(bytes.data(), count, header.bits_per_item, (float*)tensor.data.data());\n        }\n        else if ( header.item_type == TensorHeader::Bool )\n        {\n            tensor.dtype = \"logical\";\n            tensor.data.resize(count * sizeof(bool));\n            from_bytes(bytes.data(), count, header.bits_per_item, (bool*)tensor.data.data());\n        }\n        else if ( header.item_type == TensorHeader::Int || header.item_type == TensorHeader::Uint )\n        {\n            tensor.dtype = \"integer\";\n            tensor.data.resize(count * sizeof(int));\n            from_bytes(bytes.data(), count, header.bits_per_item, (int*)tensor.data.data(), header.item_type == TensorHeader::Int);\n        }\n        else if ( header.item_type == TensorHeader::Qint || header.item_type == TensorHeader::Quint )\n        {\n    
        tensor.dtype = \"scalar\";\n            tensor.data.resize(header.data_length);\n            tensor.data = bytes;\n            tensor.quantization.emplace_back(\"signed\", Value::logical(header.item_type == TensorHeader::Qint));\n        }\n        else\n        {\n            error = \"Unsupported tensor item-type '\" + std::to_string(header.item_type) + \"' and bits per item '\" + std::to_string(header.bits_per_item) + \"'\";\n            return false;\n        }\n        \n        return (bool)is;\n    }\n\n    bool write_tensor( std::ostream& os, const Tensor& tensor, std::string& error ) noexcept\n    {\n        if ( tensor.shape.size() > TensorHeader::MaxRank )\n        {\n            error = \"Tensor rank \" + std::to_string(tensor.shape.size()) + \" exceeds maximum allowed rank (\" + std::to_string(TensorHeader::MaxRank) + \")\";\n            return false;\n        }\n        \n        const bool quantized = !tensor.quantization.empty();\n        const bool is_signed = tensor.quantization.get(\"signed\", Value::logical(true)).logical();\n        const TensorHeader::ItemType item_type = quantized ? (is_signed ? TensorHeader::Qint : TensorHeader::Quint) :\n                                                 tensor.dtype == \"scalar\" ? TensorHeader::Float :\n                                                 tensor.dtype == \"integer\" ? TensorHeader::Int : TensorHeader::Bool;\n        \n        TensorHeader header;\n        const size_t version[] = { 1, 0 };\n        const size_t count = volume_of(tensor.shape);\n        const size_t bits_per_item = quantized ? 
tensor.data.size() * 8 / count : item_bits(tensor.dtype);\n        fill_tensor_header(header, version, tensor.shape.size(), tensor.shape.data(), bits_per_item, item_type);\n        \n        std::vector<char> bytes(header.data_length);\n        \n        if ( tensor.dtype == \"scalar\" )\n        {\n            if ( quantized )\n            {\n                bytes = tensor.data;\n            }\n            else\n            {\n                to_bytes((const float*)tensor.data.data(), count, bytes.data());\n            }\n        }\n        else if ( tensor.dtype == \"integer\" )\n        {\n            to_bytes((const int*)tensor.data.data(), count, bytes.data(), true);\n        }\n        else if ( tensor.dtype == \"logical\" )\n        {\n            to_bytes((const bool*)tensor.data.data(), count, bytes.data());\n        }\n        else\n        {\n            error = \"Invalid tensor data-type: '\" + tensor.dtype + \"'\";\n            return false;\n        }\n        \n        os.write((char*)&header, sizeof(header));\n        os.write(bytes.data(), bytes.size());\n        \n        if ( !os )\n        {\n            error = \"Failed to write tensor data\";\n            return false;\n        }\n        return true;\n    }\n\n    bool read_tensor( const std::string& filename, Tensor& tensor, std::string& error ) noexcept\n    {\n        std::ifstream is(filename, std::ios::binary);\n        if ( !is )\n        {\n            error = \"Could not open tensor file: \" + filename;\n            return false;\n        }\n        return read_tensor(is, tensor, error);\n    }\n\n    bool write_tensor( const std::string& filename, const Tensor& tensor, std::string& error ) noexcept\n    {\n        std::ofstream os(filename, std::ios::binary);\n        if ( !os )\n        {\n            error = \"Could not open tensor file: \" + filename;\n            return false;\n        }\n        return write_tensor(os, tensor, error);\n    }\n    \n    bool load_variables( const 
std::string& path, Graph& graph, std::string& error ) noexcept\n    {\n        const std::string sep = path.back() == '/' || path.back() == '\\\\' ? \"\" : \"/\";\n        \n        for ( auto& op : graph.operations )\n        {\n            if ( op.name == \"variable\" )\n            {\n                auto& label = op.attribs.get(\"label\").string();\n                auto& shape = op.attribs.get(\"shape\");\n                auto& id = op.outputs.begin()->second.identifier();\n                auto& tensor = graph.tensors.at(id);\n                \n                const std::string filename = path + sep + label + \".dat\";\n                if ( !read_tensor(filename, tensor, error) )\n                {\n                    return false;\n                }\n                \n                if ( tensor.dtype != op.dtype )\n                {\n                    error = \"item-type '\" + tensor.dtype + \"' in variable file '\" + filename + \"' does not match data-type '\" + op.dtype +\n                            \"' defined in network structure\";\n                    return false;\n                }\n                \n                Value::items_t items(tensor.shape.size());\n                for ( size_t i = 0; i < items.size(); ++i )\n                {\n                    items[i] = Value::integer(tensor.shape[i]);\n                }\n                Value tensorShape = Value::array(items);\n                \n                if ( tensorShape != shape )\n                {\n                    error = \"shape \" + tensorShape.toString() + \" in variable file '\" + filename + \"' does not match shape \"\n                            + shape.toString() + \" defined in network structure\";\n                    return false;\n                }\n            }\n        }\n        return true;\n    }\n    \n    bool file_exists( const std::string& path )\n    {\n        std::ifstream is(path);\n        return is.is_open();\n    }\n    \n    bool load_graph( const 
std::string& path, Graph& graph, std::string& error,\n                    const std::string& stdlib, const std::set<std::string>& lowered ) noexcept\n    {\n        const std::string sep = path.back() == '/' || path.back() == '\\\\' ? \"\" : \"/\";\n        const std::string graph_fn = path + sep + \"graph.nnef\";\n        const std::string quant_fn = path + sep + \"graph.quant\";\n        \n        if ( !file_exists(graph_fn) )\n        {\n            return parse_file(path, \"\", graph, error, stdlib, lowered);\n        }\n        \n        if ( !parse_file(graph_fn, file_exists(quant_fn) ? quant_fn : \"\", graph, error, stdlib, lowered) )\n        {\n            return false;\n        }\n        if ( !load_variables(path, graph, error) )\n        {\n            return false;\n        }\n        return true;\n    }\n    \n    \n    namespace impl\n    {\n        \n        template<size_t...> struct index_sequence {};\n        \n        template<std::size_t N, std::size_t... Next>\n        struct index_sequence_maker : public index_sequence_maker<N-1U, N-1U, Next...> {};\n        \n        template<std::size_t... Next>\n        struct index_sequence_maker<0U, Next ... > { using type = index_sequence<Next ... >; };\n        \n        template<std::size_t N>\n        using make_index_sequence = typename index_sequence_maker<N>::type;\n        \n        \n        template<typename T, typename... Args>\n        struct front_count_of\n        {\n            enum { value = 0 };\n        };\n        \n        template<typename T, typename... Args>\n        struct front_count_of<T,T,Args...>\n        {\n            enum { value = front_count_of<T,Args...>::value + 1 };\n        };\n        \n        \n        const Shape shape_of( const Graph& graph, const Value& value )\n        {\n            return value.kind() == Value::Identifier ? 
graph.tensors.at(value.identifier()).shape : nestedArrayShape(value);\n        }\n        \n        Shape& shape_ref( Graph& graph, const Value& value )\n        {\n            return graph.tensors[value.identifier()].shape;\n        }\n        \n        \n        template<typename... Args, size_t... Idxs1, size_t... Idxs2>\n        ShapeFunc make_shape_func( Shape(*func)(const Args&...), index_sequence<Idxs1...>, index_sequence<Idxs2...> )\n        {\n            return [=]( const Operation& op, Graph& graph )\n            {\n                const Shape shape = func(shape_of(graph, op.inputs[Idxs1].second)..., op.attribs[Idxs2].second...);\n                for ( size_t i = 0; i < op.outputs.size(); ++i )\n                {\n                    shape_ref(graph, op.outputs[i].second) = shape;\n                }\n            };\n        }\n        \n        template<typename... Args, size_t... Idxs>\n        ShapeFunc make_shape_func( std::vector<Shape>(*func)(const Shape&,const Args&...), index_sequence<Idxs...> )\n        {\n            return [=]( const Operation& op, Graph& graph )\n            {\n                const std::vector<Shape> shapes = func(shape_of(graph, op.inputs.front().second), op.attribs[Idxs].second...);\n                \n                const auto& outputs = op.outputs.front().second;\n                check(shapes.size() == outputs.size(), \"number of shapes (%d) does not match number of outputs (%d)\", (int)shapes.size(), (int)outputs.size());\n                \n                for ( size_t i = 0; i < outputs.size(); ++i )\n                {\n                    shape_ref(graph, outputs[i]) = shapes[i];\n                }\n            };\n        }\n        \n        template<typename... Args, size_t... 
Idxs>\n        ShapeFunc make_shape_func( Shape(*func)(const std::vector<Shape>&,const Args&...), index_sequence<Idxs...> )\n        {\n            return [=]( const Operation& op, Graph& graph )\n            {\n                const auto& inputs = op.inputs.front().second;\n                std::vector<Shape> shapes(inputs.size());\n                for ( size_t i = 0; i < shapes.size(); ++i )\n                {\n                    shapes[i] = shape_of(graph, inputs[i]);\n                }\n                \n                const Shape shape = func(shapes, op.attribs[Idxs].second...);\n                for ( size_t i = 0; i < op.outputs.size(); ++i )\n                {\n                    shape_ref(graph, op.outputs[i].second) = shape;\n                }\n            };\n        }\n        \n    }   // namespace impl\n    \n    \n    template<typename... Args>\n    ShapeFunc make_shape_func( Shape(*func)(const Value&,const Args&...) )\n    {\n        return impl::make_shape_func(func, impl::make_index_sequence<0>(), impl::make_index_sequence<sizeof...(Args)+1>());\n    }\n    \n    template<typename... Args, size_t N = impl::front_count_of<Shape,Args...>::value>\n    ShapeFunc make_shape_func( Shape(*func)(const Shape&,const Args&...) )\n    {\n        return impl::make_shape_func(func, impl::make_index_sequence<N+1>(), impl::make_index_sequence<sizeof...(Args)-N>());\n    }\n    \n    template<typename... Args>\n    ShapeFunc make_shape_func( Shape(*func)(const std::vector<Shape>&,const Args&...) )\n    {\n        return impl::make_shape_func(func, impl::make_index_sequence<sizeof...(Args)>());\n    }\n    \n    template<typename... Args>\n    ShapeFunc make_shape_func( std::vector<Shape>(*func)(const Shape&,const Args&...) 
)\n    {\n        return impl::make_shape_func(func, impl::make_index_sequence<sizeof...(Args)>());\n    }\n    \n    \n    static const std::map<std::string,ShapeFunc> StandardShapeFuncs =\n    {\n        { \"external\", make_shape_func(nullary_shape) },\n        { \"constant\", make_shape_func(constant_shape) },\n        { \"variable\", make_shape_func(nullary_shape) },\n        \n        { \"copy\", make_shape_func(unary_shape) },\n        { \"neg\", make_shape_func(unary_shape) },\n        { \"not\", make_shape_func(unary_shape) },\n        { \"rcp\", make_shape_func(unary_shape) },\n        { \"exp\", make_shape_func(unary_shape) },\n        { \"log\", make_shape_func(unary_shape) },\n        { \"sin\", make_shape_func(unary_shape) },\n        { \"cos\", make_shape_func(unary_shape) },\n        { \"tan\", make_shape_func(unary_shape) },\n        { \"asin\", make_shape_func(unary_shape) },\n        { \"acos\", make_shape_func(unary_shape) },\n        { \"atan\", make_shape_func(unary_shape) },\n        { \"sinh\", make_shape_func(unary_shape) },\n        { \"cosh\", make_shape_func(unary_shape) },\n        { \"tanh\", make_shape_func(unary_shape) },\n        { \"asinh\", make_shape_func(unary_shape) },\n        { \"acosh\", make_shape_func(unary_shape) },\n        { \"atanh\", make_shape_func(unary_shape) },\n        { \"abs\", make_shape_func(unary_shape) },\n        { \"sign\", make_shape_func(unary_shape) },\n        { \"floor\", make_shape_func(unary_shape) },\n        { \"ceil\", make_shape_func(unary_shape) },\n        { \"round\", make_shape_func(unary_shape) },\n        { \"sqr\", make_shape_func(unary_shape) },\n        { \"sqrt\", make_shape_func(unary_shape) },\n        { \"rsqr\", make_shape_func(unary_shape) },\n        { \"rsqrt\", make_shape_func(unary_shape) },\n        { \"log2\", make_shape_func(unary_shape) },\n        \n        { \"relu\", make_shape_func(unary_shape) },\n        { \"sigmoid\", make_shape_func(unary_shape) },\n        { 
\"elu\", make_shape_func(unary_shape) },\n        { \"selu\", make_shape_func(unary_shape) },\n        { \"gelu\", make_shape_func(unary_shape) },\n        { \"silu\", make_shape_func(unary_shape) },\n        { \"softabs\", make_shape_func(unary_shape) },\n        { \"softplus\", make_shape_func(unary_shape) },\n        { \"leaky_relu\", make_shape_func(unary_shape) },\n        { \"prelu\", make_shape_func(asymmetric_binary_shape) },\n        \n        { \"linear_quantize\", make_shape_func(linear_quantize_shape) },\n        { \"logarithmic_quantize\", make_shape_func(logarithmic_quantize_shape) },\n        { \"min_max_linear_quantize\", make_shape_func(linear_quantize_shape) },\n        { \"zero_point_linear_quantize\", make_shape_func(zero_point_linear_quantize_shape) },\n        \n        { \"add\", make_shape_func(binary_shape) },\n        { \"sub\", make_shape_func(binary_shape) },\n        { \"mul\", make_shape_func(binary_shape) },\n        { \"div\", make_shape_func(binary_shape) },\n        { \"min\", make_shape_func(binary_shape) },\n        { \"max\", make_shape_func(binary_shape) },\n        { \"pow\", make_shape_func(binary_shape) },\n        { \"lt\",  make_shape_func(binary_shape) },\n        { \"le\",  make_shape_func(binary_shape) },\n        { \"gt\",  make_shape_func(binary_shape) },\n        { \"ge\",  make_shape_func(binary_shape) },\n        { \"eq\",  make_shape_func(binary_shape) },\n        { \"ne\",  make_shape_func(binary_shape) },\n        { \"and\", make_shape_func(binary_shape) },\n        { \"or\",  make_shape_func(binary_shape) },\n        \n        { \"conv\", make_shape_func(conv_shape) },\n        { \"deconv\", make_shape_func(deconv_shape) },\n        { \"separable_conv\", make_shape_func(separable_conv_shape) },\n        { \"separable_deconv\", make_shape_func(separable_deconv_shape) },\n        \n        { \"box\", make_shape_func(pool_shape) },\n        { \"max_pool\", make_shape_func(pool_shape) },\n        { \"argmax_pool\", 
make_shape_func(pool_shape) },\n        { \"max_pool_with_index\", make_shape_func(pool_shape) },\n        { \"avg_pool\", make_shape_func(pool_shape) },\n        { \"rms_pool\", make_shape_func(pool_shape) },\n        { \"debox\", make_shape_func(unpool_shape) },\n        { \"sample\", make_shape_func(sample_shape) },\n        { \"desample\", make_shape_func(desample_shape) },\n        \n        { \"sum_reduce\", make_shape_func(reduce_shape) },\n        { \"min_reduce\", make_shape_func(reduce_shape) },\n        { \"max_reduce\", make_shape_func(reduce_shape) },\n        { \"mean_reduce\", make_shape_func(reduce_shape) },\n        { \"argmax_reduce\", make_shape_func(reduce_shape) },\n        { \"argmin_reduce\", make_shape_func(reduce_shape) },\n        { \"any_reduce\", make_shape_func(reduce_shape) },\n        { \"all_reduce\", make_shape_func(reduce_shape) },\n        { \"moments\", make_shape_func(reduce_shape) },\n        \n        { \"nearest_downsample\", make_shape_func(downsample_shape) },\n        { \"area_downsample\", make_shape_func(downsample_shape) },\n        { \"nearest_upsample\", make_shape_func(upsample_shape) },\n        { \"multilinear_upsample\", make_shape_func(upsample_shape) },\n        \n        { \"local_response_normalization\", make_shape_func(normalize_shape_size) },\n        { \"local_mean_normalization\", make_shape_func(normalize_shape_size) },\n        { \"local_variance_normalization\", make_shape_func(normalize_shape_size) },\n        { \"local_contrast_normalization\", make_shape_func(normalize_shape_size) },\n        { \"l1_normalization\", make_shape_func(normalize_shape_axes) },\n        { \"l2_normalization\", make_shape_func(normalize_shape_axes) },\n        { \"batch_normalization\", make_shape_func(batchnorm_shape) },\n        \n        { \"avg_roi_pool\", make_shape_func(roi_shape) },\n        { \"max_roi_pool\", make_shape_func(roi_shape) },\n        { \"avg_roi_align\", make_shape_func(roi_shape) },\n        { 
\"max_roi_align\", make_shape_func(roi_shape) },\n        { \"roi_resample\", make_shape_func(roi_shape_resample) },\n        \n        { \"reshape\", make_shape_func(reshape_shape) },\n        { \"transpose\", make_shape_func(transpose_shape) },\n        { \"split\", make_shape_func(split_shape) },\n        { \"concat\", make_shape_func(concat_shape) },\n        { \"slice\", make_shape_func(slice_shape) },\n        { \"stack\", make_shape_func(stack_shape) },\n        { \"unstack\", make_shape_func(unstack_shape) },\n        { \"squeeze\", make_shape_func(squeeze_shape) },\n        { \"unsqueeze\", make_shape_func(unsqueeze_shape) },\n        { \"tile\", make_shape_func(tile_shape) },\n        { \"pad\", make_shape_func(pad_shape) },\n        { \"cast\", make_shape_func(unary_shape) },\n        { \"gather\", make_shape_func(gather_shape) },\n        { \"matmul\", make_shape_func(matmul_shape) },\n        { \"linear\", make_shape_func(linear_shape) },\n        { \"update\", make_shape_func(update_shape) },\n        { \"softmax\", make_shape_func(softmax_shape) },\n        { \"copy_n\", make_shape_func(copy_n_shape) },\n        { \"add_n\", make_shape_func(add_n_shape) },\n        { \"select\", make_shape_func(ternary_shape) },\n        { \"clamp\", make_shape_func(ternary_shape) },\n    };\n    \n    \n    bool infer_shapes( Graph& graph, std::string& error, const std::map<std::string,Shape>& input_shapes,\n                      const std::map<std::string,ShapeFunc>& custom_shapes ) noexcept\n    {\n        for ( auto& op : graph.operations )\n        {\n            auto it = StandardShapeFuncs.find(op.name);\n            if ( it == StandardShapeFuncs.end() )\n            {\n                it = custom_shapes.find(op.name);\n\t\t\t\tif ( it == custom_shapes.end() )\n\t\t\t\t{\n\t\t\t\t\terror = \"Shape function for operation '\" + op.name + \"' is not provided\";\n\t\t\t\t\treturn false;\n\t\t\t\t}\n            }\n            auto func = it->second;\n            \n 
           if ( op.name == \"external\" )\n            {\n                auto& id = op.outputs.get(\"output\").identifier();\n                auto it = input_shapes.find(id);\n                if ( it != input_shapes.end() )\n                {\n                    auto& original = op.attribs.get(\"shape\");\n                    if ( it->second.size() != original.size() )\n                    {\n                        error = \"Overridden external shape rank (\" + std::to_string(it->second.size()) +\n                                \") does not match original rank (\" + std::to_string(original.size()) + \")\";\n                        return false;\n                    }\n                    graph.tensors.at(id).shape = it->second;\n                    continue;\n                }\n            }\n            \n            try\n            {\n                func(op, graph);\n            }\n            catch ( const std::exception& e )\n            {\n                auto& output = op.outputs.front().second;\n                auto& id = output.kind() == Value::Identifier ? 
output.identifier() : output[0].identifier();\n                error = \"Shape error while inferring shape of tensor '\" + id +\n                        \"' (operation '\" + op.name + \"'): \" + e.what();\n                return false;\n            }\n        }\n        return true;\n    }\n    \n    \n    bool allocate_buffers( Graph& graph, std::string& error ) noexcept\n    {\n        for ( auto& item : graph.tensors )\n        {\n            auto& tensor = item.second;\n            tensor.data.resize(volume_of(tensor.shape) * item_bytes(tensor.dtype));\n        }\n        return true;\n    }\n    \n\n    bool execute( Graph& graph, std::string& error ) noexcept\n    {\n        try\n        {\n            for ( auto& op : graph.operations )\n            {\n                auto it = rt::Executors.find(op.name);\n                if ( it == rt::Executors.end() )\n                {\n                    throw std::runtime_error(\"operation not implemented: \" + op.name);\n                }\n                auto& func = it->second;\n                func(op, graph.tensors);\n            }\n            return true;\n        }\n        catch ( const std::runtime_error& e )\n        {\n            error = \"Runtime error: \" + std::string(e.what());\n            return false;\n        }\n    }\n    \n}   // namespace nnef\n"
  },
  {
    "path": "nnef-pyproject/nnef/nnef.cpp",
    "content": "/*\n * Copyright (c) 2017 The Khronos Group Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"Python.h\"\n#include \"numpy/arrayobject.h\"\n#include \"nnef/flat/flat_parser.h\"\n#include \"nnef/comp/comp_parser.h\"\n#include \"nnef/flat/quant_parser.h\"\n#include \"nnef.h\"\n#include <initializer_list>\n#include <exception>\n#include <fstream>\n#include <sstream>\n#include <string>\n#include <locale>\n#include <memory>\n\n\nstatic PyObject* NNEF_Error;\n\n\n#if PY_MAJOR_VERSION >= 3\n#define PY_STRING_OBJECT PyUnicodeObject\n#define PY_STRING_TYPE PyUnicode_Type\n#define PY_STRING_CHECK PyUnicode_Check\n#define PY_STRING_FROM_CSTR PyUnicode_FromString\n#define PY_STRING_AS_CSTR PyUnicode_AsUTF8\n#define PY_INTEGER_CHECK PyLong_Check\n#define PY_INTEGER_AS_LONG PyLong_AsLong\n#else\n#define PY_STRING_OBJECT PyStringObject\n#define PY_STRING_TYPE PyString_Type\n#define PY_STRING_CHECK PyString_Check\n#define PY_STRING_FROM_CSTR PyString_FromString\n#define PY_STRING_AS_CSTR PyString_AsString\n#define PY_INTEGER_CHECK PyInt_Check\n#define PY_INTEGER_AS_LONG PyInt_AsLong\n#endif\n\n\nstruct NNEF_Identifier\n{\n    PY_STRING_OBJECT str;\n};\n\nstatic PyTypeObject NNEF_Identifier_Type =\n{\n    PyVarObject_HEAD_INIT(NULL, 0)\n    \"_nnef.Identifier\",      /* tp_name */\n    sizeof(NNEF_Identifier), /* tp_basicsize */\n    0,                       /* tp_itemsize */\n    0,                       /* tp_dealloc */\n    
0,                       /* tp_print */\n    0,                       /* tp_getattr */\n    0,                       /* tp_setattr */\n    0,                       /* tp_reserved */\n    0,                       /* tp_repr */\n    0,                       /* tp_as_number */\n    0,                       /* tp_as_sequence */\n    0,                       /* tp_as_mapping */\n    0,                       /* tp_hash */\n    0,                       /* tp_call */\n    0,                       /* tp_str */\n    0,                       /* tp_getattro */\n    0,                       /* tp_setattro */\n    0,                       /* tp_as_buffer */\n    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */\n};\n\n\nstatic PyObject* OrderedDict;\nstatic PyObject* NamedTuple;\n\nstatic PyObject* Tensor;\nstatic PyObject* Operation;\nstatic PyObject* Graph;\n\n\n// make tuple by STEALING references to args\ntemplate<typename... Args>\nstatic PyObject* makePyTuple( Args&& ...args )\n{\n    PyObject* tuple = PyTuple_Pack(sizeof...(args), args...);\n    for ( auto& arg : { args... } )\n    {\n        Py_DECREF(arg);\n    }\n    return tuple;\n}\n\n// make object by STEALING references to args\ntemplate<typename... 
Args>\nstatic PyObject* makePyObject( PyObject* type, Args&& ...args )\n{\n    PyObject* argsTuple = makePyTuple(std::forward<Args>(args)...);\n    PyObject* obj = PyObject_CallObject(type, argsTuple);\n    Py_DECREF(argsTuple);\n    return obj;\n}\n\nstatic PyObject* makeNamedTuple( const char* name, std::initializer_list<const char*> fields )\n{\n    PyObject* pyName = PY_STRING_FROM_CSTR(name);\n\n    PyObject* pyFields = PyList_New(fields.size());\n    size_t i = 0;\n    for ( auto& field : fields )\n    {\n        PyList_SetItem(pyFields, i++, PY_STRING_FROM_CSTR(field));\n    }\n\n    return makePyObject(NamedTuple, pyName, pyFields);\n}\n\n\nstatic PyObject* buildPyBoolean( bool value )\n{\n    if ( value )\n    {\n        Py_RETURN_TRUE;\n    }\n    else\n    {\n        Py_RETURN_FALSE;\n    }\n}\n\nstatic PyObject* buildPyNone()\n{\n    Py_RETURN_NONE;\n}\n\nstatic PyObject* buildPyObjectFromValue( const nnef::Value& value )\n{\n    switch ( value.kind() )\n    {\n        case nnef::Value::None:\n        {\n            return buildPyNone();\n        }\n        case nnef::Value::Integer:\n        {\n            return Py_BuildValue(\"i\", value.integer());\n        }\n        case nnef::Value::Scalar:\n        {\n            return Py_BuildValue(\"f\", value.scalar());\n        }\n        case nnef::Value::Logical:\n        {\n            return buildPyBoolean(value.logical());\n        }\n        case nnef::Value::String:\n        {\n            return PY_STRING_FROM_CSTR(value.string().c_str());\n        }\n        case nnef::Value::Identifier:\n        {\n            PyObject* arg = PY_STRING_FROM_CSTR(value.identifier().c_str());\n            return makePyObject((PyObject*)&NNEF_Identifier_Type, arg);\n        }\n        case nnef::Value::Array:\n        {\n            PyObject* list = PyList_New(value.size());\n            for ( size_t i = 0; i < value.size(); ++i )\n            {\n                PyList_SetItem(list, i, 
buildPyObjectFromValue(value[i]));\n            }\n            return list;\n        }\n        case nnef::Value::Tuple:\n        {\n            PyObject* tuple = PyTuple_New(value.size());\n            for ( size_t i = 0; i < value.size(); ++i )\n            {\n                PyTuple_SetItem(tuple, i, buildPyObjectFromValue(value[i]));\n            }\n            return tuple;\n        }\n    }\n    return nullptr;\n}\n\nstatic int numpy_type_num( const nnef::Typename& dtype )\n{\n    switch ( dtype )\n    {\n        case nnef::Typename::Scalar:\n            return NPY_FLOAT32;\n        case nnef::Typename::Integer:\n            return NPY_INT32;\n        case nnef::Typename::Logical:\n            return NPY_BOOL;\n        default:\n            return NPY_VOID;\n    }\n}\n\nstatic PyArray_Descr* numpy_dtype( const nnef::Typename& dtype )\n{\n    switch ( dtype )\n    {\n        case nnef::Typename::Scalar:\n            return PyArray_DescrFromType(NPY_FLOAT32);\n        case nnef::Typename::Integer:\n            return PyArray_DescrFromType(NPY_INT32);\n        case nnef::Typename::Logical:\n            return PyArray_DescrFromType(NPY_BOOL);\n        default:\n            return NULL;\n    }\n}\n\nstatic std::string buildErrorString( nnef::Error e )\n{\n    std::string str = \"Parse error in '\" + std::string(e.position().filename) + \"' [\" + std::to_string(e.position().line) + \":\" + std::to_string(e.position().column) + \"] \" + e.what();\n\n    auto origin = e.position().origin;\n    while ( origin )\n    {\n        str += \"\\n... 
evaluated from '\" + std::string(origin->filename) + \"' [\" + std::to_string(origin->line) + \":\" + std::to_string(origin->column) + \"]\";\n        origin = origin->origin;\n    }\n\n    return str;\n}\n\n\nstruct GraphCallback : public nnef::Parser::Callback\n{\n    GraphCallback( std::istream& qis, const char* qfn )\n    : qis(qis), qfn(qfn), tensors(NULL), operations(NULL), graph(NULL), version(NULL), extensions(NULL)\n    {\n    }\n\n    ~GraphCallback()\n    {\n        if ( tensors )\n            Py_DECREF(tensors);\n        if ( operations )\n            Py_DECREF(operations);\n        if ( graph )\n            Py_DECREF(graph);\n        if ( version )\n            Py_DECREF(version);\n        if ( extensions )\n            Py_DECREF(extensions);\n    }\n\n    virtual void beginDocument( const std::string& filename, const nnef::Parser::version_t& version )\n    {\n        this->version = makePyTuple(Py_BuildValue(\"i\", version.first), Py_BuildValue(\"i\", version.second));\n        this->extensions = PyList_New(0);\n    }\n\n    virtual bool handleExtension( const std::string& ext )\n    {\n        PyObject* pyStr = PY_STRING_FROM_CSTR(ext.c_str());\n        PyList_Append(this->extensions, pyStr);\n        Py_DECREF(pyStr);\n        return false;\n    }\n\n    virtual void beginGraph( const nnef::Prototype& proto, const nnef::Dictionary<nnef::Prototype>& fragments )\n    {\n        PyObject* name = PY_STRING_FROM_CSTR(proto.name().c_str());\n\n        this->protos = &fragments;\n        this->tensors = PyDict_New();\n        this->operations = PyList_New(0);\n        \n        PyObject* inputs = PyList_New(proto.paramCount());\n        for ( size_t i = 0; i < proto.paramCount(); ++i )\n        {\n            PyList_SetItem(inputs, i, PY_STRING_FROM_CSTR(proto.param(i).name().c_str()));\n        }\n        \n        PyObject* outputs = PyList_New(proto.resultCount());\n        for ( size_t i = 0; i < proto.resultCount(); ++i )\n        {\n   
         PyList_SetItem(outputs, i, PY_STRING_FROM_CSTR(proto.result(i).name().c_str()));\n        }\n\n        Py_INCREF(this->tensors);\n        Py_INCREF(this->operations);\n        this->graph = makePyObject(Graph, name, tensors, operations, inputs, outputs);\n        \n        if ( qis )\n        {\n            quant = nnef::QuantParser::parse(qis, qfn, fragments);\n        }\n    }\n\n    virtual void endGraph( const nnef::Prototype& proto, const nnef::Dictionary<nnef::Typename>& dtypes )\n    {\n        for ( auto& it : dtypes )\n        {\n            PyObject* name = PY_STRING_FROM_CSTR(it.first.c_str());\n            PyObject* shape = buildPyNone();\n            PyObject* dtype = PY_STRING_FROM_CSTR(nnef::toString(it.second));\n            PyObject* data = buildPyNone();\n            PyObject* quantization = PyDict_New();\n            if ( quant.count(it.first) )\n            {\n                auto& attribs = quant.at(it.first);\n                auto& op_name = attribs.at(\"op-name\").string();\n                auto& op_proto = protos->at(op_name);\n                for ( auto& qit : attribs )\n                {\n                    auto obj = buildPyObjectFromValue(qit.second);\n                    auto param = op_proto.param(qit.first.c_str());\n                    if ( param && param->type()->kind() == nnef::Type::Tensor )\n                    {\n                        auto tensor_type = (const nnef::TensorType*)param->type();\n                        auto data_type = (const nnef::PrimitiveType*)tensor_type->dataType();\n                        PyArray_Descr* array_dtype = numpy_dtype(data_type->name());\n                        PyObject* array = PyArray_FromAny(obj, array_dtype, 0, 0, 0, NULL); // steals reference to dtype\n                        Py_DECREF(obj);\n                        obj = array;\n                    }\n                    PyDict_SetItemString(quantization, qit.first.c_str(), obj);\n                    Py_DECREF(obj);\n           
     }\n            }\n            \n            PyObject* tensor = makePyObject(Tensor, name, dtype, shape, data, quantization);\n            PyDict_SetItemString(tensors, it.first.c_str(), tensor);\n            Py_DECREF(tensor);\n        }\n    }\n\n    virtual void operation( const nnef::Prototype& proto, const nnef::Dictionary<nnef::Value>& args,\n                           const nnef::Dictionary<nnef::Typename>& dtypes )\n    {\n        PyObject* attribs = PyList_New(0);\n        PyObject* inputs = PyList_New(0);\n        PyObject* outputs = PyList_New(0);\n        PyObject* dtype = args.count(\"?\") ? PY_STRING_FROM_CSTR(args.at(\"?\").string().c_str()) : buildPyNone();\n        \n        for ( size_t i = 0; i < proto.paramCount(); ++i )\n        {\n            auto& param = proto.param(i);\n            auto& value = args.at(param.name());\n            PyObject* item = makePyTuple(PY_STRING_FROM_CSTR(param.name().c_str()), buildPyObjectFromValue(value));\n            PyList_Append(param.type()->isAttribute() ? 
attribs : inputs, item);\n            Py_DECREF(item);\n        }\n        for ( size_t i = 0; i < proto.resultCount(); ++i )\n        {\n            auto& result = proto.result(i);\n            auto& value = args.at(result.name());\n            PyObject* item = makePyTuple(PY_STRING_FROM_CSTR(result.name().c_str()), buildPyObjectFromValue(value));\n            PyList_Append(outputs, item);\n            Py_DECREF(item);\n        }\n\n        PyObject* name = PY_STRING_FROM_CSTR(proto.name().c_str());\n        attribs = makePyObject(OrderedDict, attribs);\n        inputs = makePyObject(OrderedDict, inputs);\n        outputs = makePyObject(OrderedDict, outputs);\n\n        PyObject* operation = makePyObject(Operation, name, attribs, inputs, outputs, dtype);\n        PyList_Append(operations, operation);\n        Py_DECREF(operation);\n    }\n\n    std::istream& qis;\n    const char* qfn;\n    nnef::Dictionary<nnef::Dictionary<nnef::Value>> quant;\n    const nnef::Dictionary<nnef::Prototype>* protos;\n\n    PyObject* tensors;\n    PyObject* operations;\n    PyObject* graph;\n    PyObject* version;\n    PyObject* extensions;\n};\n\n\nstatic PyObject* parse( PyObject* self, PyObject* args, PyObject* kwargs, bool isFile )\n{\n\tconst char* input = nullptr;\n    const char* quant = nullptr;\n    const char* stdlib = nullptr;\n    PyObject* lower = nullptr;\n\n    static const char* kwlist[] = { \"\", \"quantization\", \"stdlib\", \"lowered\", NULL };\n\n\tif ( !PyArg_ParseTupleAndKeywords(args, kwargs, \"s|zzO!\", (char**)kwlist, &input, &quant, &stdlib, &PyList_Type, &lower) )\n    {\n        return NULL;\n    }\n\n    if ( !stdlib )\n    {\n        stdlib = \"\";\n    }\n\n    std::ifstream gfs, qfs;\n    std::stringstream gss, qss;\n\n    if ( isFile )\n    {\n        gfs.open(input);\n        if ( !gfs )\n        {\n            const std::string message = \"Could not open file: \" + std::string(input);\n            PyErr_SetString(NNEF_Error, message.c_str());\n       
     return NULL;\n        }\n\n        if ( quant )\n        {\n            qfs.open(quant);\n            if ( !qfs )\n            {\n                const std::string message = \"Could not open file: \" + std::string(quant);\n                PyErr_SetString(NNEF_Error, message.c_str());\n                return NULL;\n            }\n        }\n    }\n    else\n    {\n        gss.str(input);\n        if ( quant )\n        {\n            qss.str(quant);\n        }\n    }\n\n    std::istream& gis = isFile ? (std::istream&)gfs : (std::istream&)gss;\n    std::istream& qis = isFile ? (std::istream&)qfs : (std::istream&)qss;\n    \n    std::set<std::string> lowered;\n    if ( lower )\n    {\n        for ( Py_ssize_t i = 0; i < PyList_Size(lower); ++i )\n        {\n            PyObject* item = PyList_GetItem(lower, i);\n            if ( !PY_STRING_CHECK(item) )\n            {\n                const std::string message = \"Parameter 'lowered' must be a list of strings\";\n                PyErr_SetString(NNEF_Error, message.c_str());\n                return NULL;\n            }\n            lowered.insert(PY_STRING_AS_CSTR(item));\n        }\n    }\n    \n    nnef::CompParser parser(stdlib, lowered);\n\n    GraphCallback callback(qis, isFile ? quant : \"quantization\");\n\n\ttry\n    {\n        parser.parse(gis, isFile ? 
input : \"input\", callback);\n        Py_INCREF(callback.graph);\n        return callback.graph;\n    }\n    catch ( const nnef::Error& e )\n    {\n        PyErr_SetString(NNEF_Error, buildErrorString(e).c_str());\n\t\treturn NULL;\n    }\n    catch ( const std::invalid_argument& e )\n    {\n        PyErr_SetString(PyExc_ValueError, e.what());\n        return NULL;\n    }\n    catch ( const std::exception& e )\n    {\n        PyErr_SetString(PyExc_Exception, e.what());\n        return NULL;\n    }\n}\n\nstatic PyObject* parseFile( PyObject* self, PyObject* args, PyObject* kwargs )\n{\n    return parse(self, args, kwargs, true);\n}\n\nstatic PyObject* parseString( PyObject* self, PyObject* args, PyObject* kwargs )\n{\n    return parse(self, args, kwargs, false);\n}\n\nstatic PyObject* createSession( PyObject* self, PyObject* args, PyObject* kwargs )\n{\n    static const char* kwlist[] = { \"\", \"stdlib\", \"lowered\", NULL };\n\n    const char* path = nullptr;\n    const char* stdlib = nullptr;\n    PyObject* lower = nullptr;\n\n\tif ( !PyArg_ParseTupleAndKeywords(args, kwargs, \"s|zO!\", (char**)kwlist, &path, &stdlib, &PyList_Type, &lower) )\n    {\n        return NULL;\n    }\n\n    if ( !stdlib )\n    {\n        stdlib = \"\";\n    }\n\n    std::set<std::string> lowered;\n    if ( lower )\n    {\n        for ( Py_ssize_t i = 0; i < PyList_Size(lower); ++i )\n        {\n            PyObject* item = PyList_GetItem(lower, i);\n            if ( !PY_STRING_CHECK(item) )\n            {\n                const std::string message = \"Parameter 'lowered' must be a list of strings\";\n                PyErr_SetString(NNEF_Error, message.c_str());\n                return NULL;\n            }\n            lowered.insert(PY_STRING_AS_CSTR(item));\n        }\n    }\n\n    std::unique_ptr<nnef::Graph> graph(new nnef::Graph());\n    std::string error;\n\n    if ( !nnef::load_graph(path, *graph, error, stdlib, lowered) )\n    {\n        PyErr_SetString(PyExc_ValueError, 
error.c_str());\n        return NULL;\n    }\n\n    if ( !nnef::infer_shapes(*graph, error) )\n    {\n        PyErr_SetString(PyExc_ValueError, error.c_str());\n        return NULL;\n    }\n\n    if ( !nnef::allocate_buffers(*graph, error) )\n    {\n        PyErr_SetString(PyExc_ValueError, error.c_str());\n        return NULL;\n    }\n\n    const size_t handle = reinterpret_cast<size_t>(graph.release());\n    return PyLong_FromSize_t(handle);\n}\n\nstatic PyObject* cleanupSession( PyObject* self, PyObject* args, PyObject* kwargs )\n{\n    PyObject* handle;\n    static const char* kwlist[] = { \"\", NULL };\n\n\tif ( !PyArg_ParseTupleAndKeywords(args, kwargs, \"O\", (char**)kwlist, &handle) )\n    {\n        return NULL;\n    }\n\n    nnef::Graph* graph = reinterpret_cast<nnef::Graph*>(PyLong_AsSize_t(handle));\n    delete graph;\n\n    Py_RETURN_NONE;\n}\n\nstatic PyObject* executeSession( PyObject* self, PyObject* args, PyObject* kwargs )\n{\n    PyObject* handle;\n    PyObject* inputs;\n    static const char* kwlist[] = { \"\", \"\", NULL };\n\n\tif ( !PyArg_ParseTupleAndKeywords(args, kwargs, \"OO!\", (char**)kwlist, &handle, &PyTuple_Type, &inputs) )\n    {\n        return NULL;\n    }\n\n    nnef::Graph* graph = reinterpret_cast<nnef::Graph*>(PyLong_AsSize_t(handle));\n\n    if ( PyTuple_Size(inputs) != graph->inputs.size() )\n    {\n        PyErr_Format(PyExc_ValueError, \"number of inputs (%d) does not match number of graph inputs (%d)\",\n                     (int)PyTuple_Size(inputs), (int)graph->inputs.size());\n        return NULL;\n    }\n\n    for ( size_t i = 0; i < PyTuple_Size(inputs); ++i )\n    {\n        PyObject* input = PyTuple_GetItem(inputs, i);\n        if ( !PyArray_Check(input) )\n        {\n            PyErr_SetString(PyExc_ValueError, \"inputs must be numpy arrays\");\n            return NULL;\n        }\n        PyArrayObject* array = (PyArrayObject*)input;\n\n        nnef::Tensor& tensor = graph->tensors.at(graph->inputs[i]);\n        
nnef::Typename dtype = nnef::fromString(tensor.dtype);\n\n        if ( PyArray_TYPE(array) != numpy_type_num(dtype) )\n        {\n            PyErr_Format(PyExc_ValueError, \"dtype of input %d does not match input dtype in graph\", (int)i+1);\n            return NULL;\n        }\n\n        if ( PyArray_NDIM(array) != tensor.shape.size() || !std::equal(tensor.shape.begin(), tensor.shape.end(), PyArray_SHAPE(array)) )\n        {\n            PyErr_Format(PyExc_ValueError, \"shape of input %d does not match input shape in graph\", (int)i+1);\n            return NULL;\n        }\n\n        std::copy_n(PyArray_BYTES(array), tensor.data.size(), tensor.data.data());\n    }\n\n    std::string error;\n    if ( !nnef::execute(*graph, error) )\n    {\n        PyErr_SetString(PyExc_ValueError, error.c_str());\n        return NULL;\n    }\n\n    PyObject* outputs = PyTuple_New(graph->outputs.size());\n    for ( size_t i = 0; i < graph->outputs.size(); ++i )\n    {\n        nnef::Tensor& tensor = graph->tensors.at(graph->outputs[i]);\n        std::vector<npy_intp> shape(tensor.shape.begin(), tensor.shape.end());\n        nnef::Typename dtype = nnef::fromString(tensor.dtype);\n\n        PyObject* output = PyArray_SimpleNew(shape.size(), shape.data(), numpy_type_num(dtype));\n        std::copy_n(tensor.data.data(), tensor.data.size(), PyArray_BYTES((PyArrayObject*)output));\n\n        PyTuple_SetItem(outputs, i, output);\n    }\n\n    return outputs;\n}\n\n\nstatic PyMethodDef NNEF_Methods[] = \n{\n    { \"parse_file\", (PyCFunction)parseFile, METH_VARARGS | METH_KEYWORDS, \"Parse the contents of a file\" },\n    { \"parse_string\", (PyCFunction)parseString, METH_VARARGS | METH_KEYWORDS, \"Parse the contents of a string\" },\n    { \"create_session\", (PyCFunction)createSession, METH_VARARGS | METH_KEYWORDS, \"Create session for executing a graph\" },\n    { \"cleanup_session\", (PyCFunction)cleanupSession, METH_VARARGS | METH_KEYWORDS, \"Cleanup session\" },\n    { 
\"execute_session\", (PyCFunction)executeSession, METH_VARARGS | METH_KEYWORDS, \"Execute graph in a session\" },\n \t{ NULL, NULL, 0, NULL }\n};\n\n\n#if PY_MAJOR_VERSION >= 3\n\nstatic struct PyModuleDef nnef_module = \n{\n    PyModuleDef_HEAD_INIT,\n    \"_nnef\",\n    \"_nnef module\",\n    -1,\n    NNEF_Methods,\n};\n\n#endif\n\n\n#if PY_MAJOR_VERSION >= 3\n#define INIT_FUNC_NAME PyInit__nnef\n#define RETURN_ERROR return NULL\n#else\n#define INIT_FUNC_NAME init_nnef\n#define RETURN_ERROR return\n#endif\n\nPyMODINIT_FUNC INIT_FUNC_NAME(void)\n{\n    NNEF_Identifier_Type.tp_base = &PY_STRING_TYPE;\n    if ( PyType_Ready(&NNEF_Identifier_Type) < 0 )\n    {\n        RETURN_ERROR;\n    }\n\n#if PY_MAJOR_VERSION >= 3\n\tPyObject* module = PyModule_Create(&nnef_module);\n#else\n    PyObject* module = Py_InitModule(\"_nnef\", NNEF_Methods);\n#endif\n\tif ( module == NULL )\n\t{\n\t\tRETURN_ERROR;\n\t}\n\n\tNNEF_Error = PyErr_NewException((char*)\"_nnef.Error\", NULL, NULL);\n\tPyModule_AddObject(module, \"Error\", NNEF_Error);\n\n    PyModule_AddObject(module, \"Identifier\", (PyObject*)&NNEF_Identifier_Type);\n\n    PyObject* collections = PyImport_ImportModule(\"collections\");\n    PyObject* dict = PyModule_GetDict(collections);\n    OrderedDict = PyDict_GetItemString(dict, \"OrderedDict\");\n    NamedTuple = PyDict_GetItemString(dict, \"namedtuple\");\n    Py_DECREF(collections);\n\n    Tensor = makeNamedTuple(\"Tensor\", { \"name\", \"dtype\", \"shape\", \"data\", \"quantization\" });\n    PyModule_AddObject(module, \"Tensor\", Tensor);\n\n    Operation = makeNamedTuple(\"Operation\", { \"name\", \"attribs\", \"inputs\", \"outputs\", \"dtype\" });\n    PyModule_AddObject(module, \"Operation\", Operation);\n    \n    Graph = makeNamedTuple(\"Graph\", { \"name\", \"tensors\", \"operations\", \"inputs\", \"outputs\" });\n    PyModule_AddObject(module, \"Graph\", Graph);\n\n    import_array();\n    \n#if PY_MAJOR_VERSION >= 3\n\treturn module;\n#endif\n}\n"
  },
  {
    "path": "nnef-pyproject/nnef/parser.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport _nnef\n\n\ndef parse_file(graph_fn, quant_fn=None, stdlib=None, lowered=None):\n    return _nnef.parse_file(graph_fn, quantization=quant_fn, stdlib=stdlib, lowered=lowered or [])\n\n\ndef parse_string(graph_str, quant_str=None, stdlib=None, lowered=None):\n    return _nnef.parse_string(graph_str, quantization=quant_str, stdlib=stdlib, lowered=lowered or [])\n"
  },
  {
    "path": "nnef-pyproject/nnef/printer.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport _nnef\n\n\ndef format_version(version):\n    major, minor = version\n    return 'version {}.{};'.format(major, minor)\n\n\ndef format_extensions(extensions):\n    string = str()\n    for i, ext in enumerate(extensions):\n        if i != 0:\n            string += '\\n'\n        string += 'extension {};'.format(ext)\n    return string\n\n\ndef format_argument(value):\n    if isinstance(value, _nnef.Identifier):\n        return value\n    elif isinstance(value, str):\n        return \"'\" + value + \"'\"\n    elif isinstance(value, bool):\n        return 'true' if value else 'false'\n    elif isinstance(value, (int, float)):\n        return str(value)\n    elif isinstance(value, list):\n        return '[' + ', '.join(format_argument(item) for item in value) + ']'\n    elif isinstance(value, tuple):\n        return '(' + ', '.join(format_argument(item) for item in value) + ')'\n    else:\n        raise TypeError('arguments must be of type int, float, str, nnef.Identifier or list/tuple of such, found: ' + str(type(value)))\n\n\ndef format_result(value):\n    if isinstance(value, list):\n        return '[' + ', '.join(format_result(item) for item in value) + ']'\n    elif isinstance(value, tuple):\n        return '(' + ', '.join(format_result(item) for item in value) + ')'\n    elif isinstance(value, _nnef.Identifier):\n        return value\n    
else:\n        raise TypeError('results must be of type nnef.Identifier or list/tuple of such, found: ' + str(type(value)))\n\n\ndef format_shapes(result, tensors):\n    if isinstance(result, list):\n        return '[' + ', '.join(format_shapes(item, tensors) for item in result) + ']'\n    elif isinstance(result, tuple):\n        return '(' + ', '.join(format_shapes(item, tensors) for item in result) + ')'\n    elif isinstance(result, _nnef.Identifier):\n        return str(tensors[result].shape)\n    else:\n        raise TypeError('results must be of type nnef.Identifier or list/tuple of such, found: ' + str(type(result)))\n\n\ndef format_invocation(name, attribs, inputs, outputs=None, dtype=None):\n    string = str()\n\n    if outputs is not None:\n        string += ', '.join([format_result(output) for output in outputs])\n        string += ' = '\n\n    string += name\n\n    if dtype is not None:\n        string += '<' + dtype + '>'\n\n    string += '('\n    string += ', '.join([format_argument(input) for input in inputs])\n    if len(inputs) and len(attribs):\n        string += ', '\n    string += ', '.join(key + ' = ' + format_argument(value) for (key, value) in attribs.items())\n    string += ')'\n\n    return string\n\n\ndef format_graph(name, inputs, outputs, operations, tensors, annotate_shapes=False):\n    string = 'graph ' + name + '( ' + ', '.join(inputs) + ' ) -> ( ' + ', '.join(outputs) + ' )\\n'\n    string += '{\\n'\n    for operation in operations:\n        inputs = operation.inputs.values()\n        outputs = operation.outputs.values()\n        invocation = format_invocation(operation.name, operation.attribs, inputs, outputs, operation.dtype)\n        string += '\\t' + invocation + ';'\n        if annotate_shapes:\n            string += '\\t# ' + ', '.join(format_shapes(output, tensors) for output in outputs)\n        string += '\\n'\n    string += '}\\n'\n    return string\n"
  },
  {
    "path": "nnef-pyproject/nnef/shapes.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nnef\nimport numpy as np\n\n\ndef _ceil_div(x, y):\n    return (x + y - 1) // y if y > 0 else (x + y + 1) // y\n\n\ndef _clamp(x, a, b):\n    return max(a, min(b, x))\n\n\ndef _ensure_rank(array, rank, value=1):\n    return array if len(array) == rank else array + [value] * rank\n\n\ndef _volume(shape):\n    volume = 1\n    for s in shape:\n        volume *= s\n    return volume\n\n\ndef _broadcast_compatible(x,y):\n    return all(xi == yi or xi == 1 or yi == 1 for xi, yi in zip(x, y))\n\n\ndef _broadcastable(x,y):\n    return all(xi == yi or xi == 1 for xi, yi in zip(x, y))\n\n\ndef _broadcast_shape(x, y):\n    assert _broadcast_compatible(x, y), \"arguments are not broadcast compatible ({} vs {})\".format(x, y)\n\n    rank = max(len(x), len(y))\n    return [max(xi,yi) for (xi, yi) in zip(_ensure_rank(x, rank), _ensure_rank(y, rank))]\n\n\ndef _downsize_shape(input, kernel, padding, stride, dilation):\n    return [(i + p + q - (k - 1) * d - 1) // s + 1 for i, k, (p, q), s, d in\n            zip(input, kernel, padding, stride, dilation)] \\\n        if padding else [(i + s - 1) // s for i, s in zip(input, stride)]\n\n\ndef _upsize_shape(input, kernel, padding, stride, dilation):\n    return [(i - 1) * s + (k - 1) * d + 1 - p - q for i, k, (p, q), s, d in\n            zip(input, kernel, padding, stride, dilation)] \\\n        if padding else [i * 
s for i, s in zip(input, stride)]\n\n\ndef nullary_shape(shape, **kwargs):\n    return shape\n\n\ndef unary_shape(arg, **kwargs):\n    return arg\n\n\ndef binary_shape(left, right, **kwargs):\n    return _broadcast_shape(left, right)\n\n\ndef asymmetric_binary_shape(left, right, **kwargs):\n    assert _broadcastable(right, left), \\\n        \"second argument shape ({}) cannot be broadcast to first argument shape ({})\".format(right, left)\n    return left\n\n\ndef ternary_shape(cond, left, right, **kwargs):\n    value = _broadcast_shape(left, right)\n    return _broadcast_shape(cond, value)\n\n\ndef pool_shape(input, size, border=None, padding=[], stride=[], dilation=[], output_shape=None, transposed=False, **kwargs):\n    rank = len(input)\n\n    assert len(size) == rank, \"expected kernel shape of rank {}, found {}\".format(rank, size)\n    assert not padding or len(padding) == rank, \"expected 'padding' of length {}, found {}\".format(rank, padding)\n    assert not stride or len(stride) == rank, \"expected 'stride' of length {}, found {}\".format(rank, stride)\n    assert not dilation or len(dilation) == rank, \"expected 'dilation' of length {}, found {}\".format(rank, dilation)\n    assert all(s > 0 for s in stride), \"'stride' must be positive, found {}\".format(stride)\n    assert all(d > 0 for d in dilation), \"'dilation' must be positive, found {}\".format(dilation)\n\n    stride = _ensure_rank(stride, rank)\n    dilation = _ensure_rank(dilation, rank)\n\n    if output_shape:\n        assert len(output_shape) == rank, \"expected 'output_shape' of length {}, found {}\".format(rank, output_shape)\n        assert all(s > 0 for s in output_shape), \"'output_shape' must be positive, found {}\".format(output_shape)\n\n        expected_shape = _downsize_shape(output_shape, size, padding, stride, dilation)\n        assert input == expected_shape, \\\n            \"expected input shape {} derived from 'output_shape' is incompatible with actual input shape {}\".\\\n 
               format(expected_shape, input)\n\n        return output_shape\n\n    if transposed:\n        return _upsize_shape(input, size, padding, stride, dilation)\n    else:\n        return _downsize_shape(input, size, padding, stride, dilation)\n\n\ndef pool_with_index_shape(input, size, border=None, padding=[], stride=[], dilation=[]):\n    shape = pool_shape(input, size, border, padding, stride, dilation)\n    return (shape, shape)\n\n\ndef unpool_shape(input, size, border=None, padding=[], stride=[], dilation=[], output_shape=None, **kwargs):\n    return pool_shape(input, size, border, padding, stride, dilation, output_shape, transposed=True, **kwargs)\n\n\ndef sample_shape(input, index, size, border=None, padding=[], stride=[], dilation=[], output_shape=None, transposed=False):\n    assert index == input, \"'index' shape {} does not match 'input' shape {}\".format(index, input)\n    return pool_shape(input, size, border, padding, stride, dilation, output_shape, transposed)\n\n\ndef desample_shape(input, index, size, border=None, padding=[], stride=[], dilation=[], output_shape=None):\n    return sample_shape(input, index, size, border, padding, stride, dilation, output_shape, transposed=True)\n\n\ndef conv_shape(input, filter, bias=[], border=None, padding=[], stride=[], dilation=[], groups=1, output_shape=None, transposed=False):\n    rank = len(input)\n\n    assert len(filter) == rank, \"expected filter shape of rank {}, found {}\".format(rank, filter)\n    assert not padding or len(padding) == rank - 2, \"expected 'padding' of length {}, found {}\".format(rank - 2, padding)\n    assert not stride or len(stride) == rank - 2, \"expected 'stride' of length {}, found {}\".format(rank - 2, stride)\n    assert not dilation or len(dilation) == rank - 2, \"expected 'dilation' of length {}, found {}\".format(rank - 2, dilation)\n    assert all(s > 0 for s in stride), \"'stride' must be positive, found {}\".format(stride)\n    assert all(d > 0 for d in dilation), \"'dilation' must be positive, found {}\".format(dilation)\n    assert groups >= 0, \"'groups' must be non-negative, found {}\".format(groups)\n\n    if groups == 0:\n        groups = output_shape[1] if transposed and output_shape else input[1]\n\n    if transposed:\n        assert filter[0] == input[1], \"filter batch ({}) does not match input channels ({})\".format(filter[0], input[1])\n    else:\n        assert filter[1] * groups == input[1], \\\n            \"filter channels ({}) times groups ({}) does not match input channels ({})\".format(filter[1], groups, input[1])\n    assert filter[0] % groups == 0, \"'groups' ({}) does not divide filter batch ({})\".format(groups, filter[0])\n\n    assert len(bias) <= 2, \"expected bias shape of rank at most 2, found {}\".format(bias)\n    if len(bias) == 2:\n        assert bias[0] == 1, \"'bias' batch dimension must be singular\"\n    if len(bias):\n        channels = filter[1] * groups if transposed else filter[0]\n        assert bias[-1] == channels, \"'bias' channels ({}) does not match filter batch ({})\".format(bias[-1], channels)\n\n    stride = _ensure_rank(stride, rank - 2)\n    dilation = _ensure_rank(dilation, rank - 2)\n\n    if output_shape:\n        assert len(output_shape) == rank, \"expected 'output_shape' of length {}, found {}\".format(rank, output_shape)\n        assert all(s > 0 for s in output_shape), \"'output_shape' must be positive, found {}\".format(output_shape)\n        assert output_shape[0] == input[0], \\\n            \"output batch ({}) does not match input batch ({})\".format(output_shape[0], input[0])\n        assert output_shape[1] == filter[1] * groups, \\\n            \"output channels ({}) does not match filter channels ({}) times groups ({})\".format(output_shape[1], filter[1], groups)\n\n        expected_shape = [input[0], filter[0]] + _downsize_shape(output_shape[2:], filter[2:], padding, stride, dilation)\n        assert input == expected_shape, \\\n            \"expected input 
shape {} derived from 'output_shape' is incompatible with actual input shape {}\". \\\n                format(expected_shape, input)\n\n        return output_shape\n\n    if transposed:\n        return [input[0], filter[1] * groups] + _upsize_shape(input[2:], filter[2:], padding, stride, dilation)\n    else:\n        return [input[0], filter[0]] + _downsize_shape(input[2:], filter[2:], padding, stride, dilation)\n\n\ndef separable_conv_shape(input, plane_filter, point_filter, bias=[], border=None, padding=[], stride=[], dilation=[], groups=1,\n                         output_shape=None, transposed=False):\n    assert all(x == 1 for x in point_filter[2:]), \\\n        \"point-wise filter must be singular in spatial dimensions, found {}\".format(point_filter)\n    assert point_filter[1] == plane_filter[0], \\\n        \"channel dimension of point-wise filter ({}) does not equal batch dimension of depth-wise filter ({})\".\\\n            format(point_filter[1], plane_filter[0])\n    assert plane_filter[1] == 1, \"channel dimension of plane-wise filter must be singular, found {}\".format(plane_filter)\n\n    channels = point_filter[1] if transposed else input[1]\n    filter = [point_filter[0], channels] + plane_filter[2:]\n    return conv_shape(input, filter, bias, border, padding, stride, dilation, groups, output_shape, transposed)\n\n\ndef separable_deconv_shape(input, plane_filter, point_filter, bias=[], border=None, padding=[], stride=[], dilation=[],\n                           groups=1, output_shape=None):\n    return separable_conv_shape(input, plane_filter, point_filter, bias, border, padding, stride, dilation, groups,\n                                output_shape, transposed=True)\n\n\ndef deconv_shape(input, filter, bias=[], border=None, padding=[], stride=[], dilation=[], groups=1, output_shape=None):\n    return conv_shape(input, filter, bias, border, padding, stride, dilation, groups, output_shape, transposed=True)\n\n\ndef reduce_shape(input, axes, 
**kwargs):\n    rank = len(input)\n    assert all(0 <= axis < rank for axis in axes), \"axes must be in range [0,{}), found {}\".format(rank, axes)\n    return [1 if i in axes else input[i] for i in range(rank)]\n\n\ndef normalize_shape(input, **kwargs):\n    rank = len(input)\n    axes = kwargs.get('axes')\n    size = kwargs.get('size')\n    if axes:\n        assert all(0 <= axis < rank for axis in axes), \"axes must be in range [0,{}), found {}\".format(rank, axes)\n    if size:\n        assert len(size) == rank, \"expected 'size' of length {}, found {}\".format(rank, size)\n        assert all(s >= 1 for s in size), \"'size' must be positive, found {}\".format(size)\n\n    return input\n\n\ndef moments_shape(input, axes):\n    shape = reduce_shape(input, axes=axes)\n    return shape, list(shape)\n\n\ndef downsample_shape(input, factor, **kwargs):\n    rank = len(input)\n    assert len(factor) == rank - 2, \"expected 'factor' of length {}, found {}\".format(rank - 2, factor)\n    assert all(i % f == 0 for i, f in zip(input[2:], factor)), \\\n        \"'factor' {} does not divide spatial input shape {}\".format(factor, input[2:])\n\n    return input[:2] + [i // f for i, f in zip(input[2:], factor)]\n\n\ndef upsample_shape(input, factor, **kwargs):\n    rank = len(input)\n    assert len(factor) == rank - 2, \"expected 'factor' of length {}, found {}\".format(rank - 2, factor)\n\n    return input[:2] + [i * f for i, f in zip(input[2:], factor)]\n\n\ndef reshape_shape(input, shape, axis_start=0, axis_count=-1):\n    rank = len(input)\n    assert all(s >= -1 for s in shape), \"items in 'shape' must be >= -1, found {}\".format(shape)\n    assert sum(1 for s in shape if s == -1) <= 1, \"at most one item may be -1 in 'shape', found {}\".format(shape)\n    assert 0 <= axis_start <= rank, \"'axis_start' must be in range [0,{}], found {}\".format(rank, axis_start)\n    assert axis_count >= -1, \"'axis_count' must be non-negative or -1, found {}\".format(axis_count)\n\n    if axis_count == -1:\n        axis_count = rank - axis_start\n\n    axis_end = axis_start + axis_count\n\n    assert axis_end <= rank, \"'axis_start' + 'axis_count' ({}) must be in range [0,{}]\".format(axis_end, rank)\n\n    shape = list(shape)  # don't modify original list\n\n    for i in range(len(shape)):\n        if shape[i] == 0:\n            shape[i] = input[i + axis_start]\n\n    input_range = input[axis_start:axis_end]\n\n    if -1 in shape:\n        idx = shape.index(-1)\n        assert _volume(input_range) % _volume(shape) == 0, \\\n            \"volume of 'shape' ({}) does not divide volume of 'input[{}:{}]' ({})\".format(shape, axis_start, axis_end, input_range)\n        shape[idx] = _volume(input_range) // -_volume(shape)\n    else:\n        assert _volume(shape) == _volume(input_range), \\\n            \"volume of 'shape' ({}) does not equal volume of 'input[{}:{}]' ({})\".format(shape, axis_start, axis_end, input_range)\n    return input[:axis_start] + shape + input[axis_end:]\n\n\ndef transpose_shape(input, axes):\n    rank = len(axes)\n    assert sorted(axes) == list(range(rank)), \"axes must be a permutation of [0..{}], found {}\".format(rank-1, axes)\n    return [input[axis] for axis in axes] + input[rank:]\n\n\ndef squeeze_shape(input, axes):\n    rank = len(input)\n    assert all(0 <= axis < rank for axis in axes), \"axes must be in range [0,{}), found {}\".format(rank, axes)\n    return [input[i] for i in range(rank) if i not in axes]\n\n\ndef unsqueeze_shape(input, axes):\n    rank = len(input) + len(axes)\n    assert all(0 <= axis < rank for axis in axes), \"axes must be in range [0,{}), found {}\".format(rank, axes)\n\n    output = list(input)\n    for axis in axes:\n        output = output[:axis] + [1] + output[axis:]\n    return output\n\n\ndef concat_shape(values, axis):\n    assert len(values) != 0, \"'values' must be non-empty\"\n\n    shape = list(values[0])\n    rank = len(shape)\n\n    assert 0 <= axis < rank, \"'axis' must be in 
range [0,{}), found {}\".format(rank, axis)\n\n    for value in values:\n        assert len(value) == len(shape), \"'values' must have the same rank, found {}\".format(values)\n        assert all(value[i] == shape[i] for i in range(rank) if i != axis), \\\n            \"shapes of 'values' must be identical for all dimensions other than 'axis' ({}), found {}\".format(axis, values)\n\n    shape[axis] = sum(value[axis] for value in values)\n    return shape\n\n\ndef split_shape(value, axis, ratios):\n    rank = len(value)\n    assert 0 <= axis < rank, \"axis must be in range [0,{}), found {}\".format(rank, axis)\n    assert all(r > 0 for r in ratios), \"'ratios' must be positive, found {}\".format(ratios)\n    total = sum(ratios)\n    assert value[axis] % total == 0, \\\n        \"sum of 'ratios' ({}) does not divide input shape along dimension 'axis' ({})\".format(total, value[axis])\n\n    unit = value[axis] // total\n    return [[unit * r if i == axis else value[i] for i in range(rank)] for r in ratios]\n\n\ndef stack_shape(values, axis):\n    assert len(values) != 0, \"'values' must be non-empty\"\n\n    shape = values[0]\n    rank = len(shape) + 1\n\n    assert 0 <= axis < rank, \"'axis' must be in range [0,{}), found {}\".format(rank, axis)\n    assert all(value == shape for value in values), \"shapes of 'values' must be identical, found {}\".format(values)\n\n    return shape[:axis] + [len(values)] + shape[axis:]\n\n\ndef unstack_shape(value, axis):\n    rank = len(value)\n    assert 0 <= axis < rank, \"'axis' must be in range [0,{}), found {}\".format(rank, axis)\n\n    return [value[:axis] + value[axis+1:]] * value[axis]\n\n\ndef slice_shape(input, axes, begin, end, stride=[]):\n    rank = len(input)\n\n    if len(stride) == 0:\n        stride = [1] * len(axes)\n\n    if all(s == 1 for s in stride):\n        end = [input[axis] if offs == 0 else offs for axis, offs in zip(axes, end)]\n\n    assert len(begin) == len(axes), \\\n        \"length of 'begin' ({}) 
 does not equal length of 'axes' ({})\".format(len(begin), len(axes))\n    assert len(end) == len(axes), \\\n        \"length of 'end' ({}) does not equal length of 'axes' ({})\".format(len(end), len(axes))\n    assert len(stride) == len(axes), \\\n        \"length of 'stride' ({}) does not equal length of 'axes' ({})\".format(len(stride), len(axes))\n    assert all(0 <= axis < rank for axis in axes), \"'axes' must be in range [0,{}), found {}\".format(rank, axes)\n\n    begin = [_clamp(offs + input[axis] if offs < 0 else offs, -1, input[axis]) for axis, offs in zip(axes, begin)]\n    end = [_clamp(offs + input[axis] if offs < 0 else offs, -1, input[axis]) for axis, offs in zip(axes, end)]\n\n    assert all(s != 0 for s in stride), \"'stride' must be non-zero\"\n\n    assert all(0 <= first <= last if step > 0 else last <= first < input[axis]\n               for axis, first, last, step in zip(axes, begin, end, stride)), \\\n        \"slice range ({}:{}:{}) is invalid\".format(begin, end, stride)\n\n    output = list(input)\n    for axis, first, last, step in zip(axes, begin, end, stride):\n        output[axis] = _ceil_div(last - first, step)\n    return output\n\n\ndef tile_shape(input, repeats):\n    rank = len(input)\n    assert len(repeats) == rank, \"expected 'repeats' of length {}, found {}\".format(rank, repeats)\n\n    return [i * r for i, r in zip(input, repeats)]\n\n\ndef pad_shape(input, padding, **kwargs):\n    rank = len(input)\n    assert len(padding) == rank, \"expected 'padding' of length {}, found {}\".format(rank, padding)\n\n    return [p + i + q for i, (p, q) in zip(input, padding)]\n\n\ndef gather_shape(input, indices, axis=0):\n    rank = len(input)\n    assert 0 <= axis < rank, \"'axis' must be in range [0,{}), found {}\".format(rank, axis)\n\n    return input[:axis] + indices + input[axis+1:]\n\n\ndef matmul_shape(A, B, transposeA=False, transposeB=False):\n    assert len(A) == len(B), \"argument rank mismatch ({} vs {})\".format(len(A), len(B))\n    assert len(A) >= 2, \"rank of arguments must be at least 2, found {}\".format(len(A))\n\n    m = A[-1] if transposeA else A[-2]\n    n = B[-2] if transposeB else B[-1]\n    kA = A[-2] if transposeA else A[-1]\n    kB = B[-1] if transposeB else B[-2]\n\n    assert kA == kB, \"inner dimensions must agree ({} vs {})\".format(kA, kB)\n    return _broadcast_shape(A[:-2], B[:-2]) + [m,n]\n\n\ndef linear_shape(input, filter, bias=[]):\n    assert len(input) == 2, \"rank of input must be 2, found {}\".format(len(input))\n    assert len(filter) == 2, \"rank of filter must be 2, found {}\".format(len(filter))\n    assert len(bias) <= 2, \"rank of bias must be at most 2, found {}\".format(len(bias))\n    assert input[1] == filter[1], \"input channels ({}) does not match filter channels ({})\".format(input[1], filter[1])\n\n    if len(bias) == 2:\n        assert bias[0] == 1, \"'bias' batch dimension must be singular\"\n    if len(bias):\n        c = len(bias) - 1\n        assert bias[c] == filter[0], \"'bias' channels ({}) does not match filter batch ({})\".format(bias[c], filter[0])\n\n    return [input[0], filter[0]]\n\n\ndef softmax_shape(input, axes=[1]):\n    rank = len(input)\n    assert all(0 <= axis < rank for axis in axes), \"axes must be in range [0,{}), found {}\".format(rank, axes)\n\n    return input\n\n\ndef batchnorm_shape(input, mean, variance, offset, scale, epsilon=0):\n    assert epsilon >= 0, \"'epsilon' must be non-negative, found {}\".format(epsilon)\n\n    assert _broadcastable(mean, input), \\\n        \"'mean' shape {} cannot be broadcast to 'input' shape {}\".format(mean, input)\n    assert _broadcastable(variance, input), \\\n        \"'variance' shape {} cannot be broadcast to 'input' shape {}\".format(variance, input)\n    assert _broadcastable(offset, input), \\\n        \"'offset' shape {} cannot be broadcast to 'input' shape {}\".format(offset, input)\n    assert _broadcastable(scale, input), \\\n        \"'scale' shape {} cannot be broadcast 
to 'input' shape {}\".format(scale, input)\n\n    return input\n\n\ndef roi_shape(input, rois, batch_index, output_size, **kwargs):\n    rank = len(input)\n\n    assert len(output_size) == rank - 2, \"expected 'output_size' of length {}, found {}\".format(rank - 2, output_size)\n    assert all(s > 0 for s in output_size), \"'output_size' must be positive, found {}\".format(output_size)\n\n    assert len(rois) == 2, \"'rois' must be of rank 2, found {}\".format(rois)\n    assert rois[1] == 4, \"'rois' must be of extent 4 along dimension 1, found {}\".format(rois)\n\n    assert len(batch_index) == 1, \"'batch_index' must be of rank 1, found {}\".format(batch_index)\n    assert batch_index[0] == rois[0], \\\n        \"'batch_index' must be of same length as dimension 0 of rois; found {} vs {}\".format(batch_index, rois)\n\n    rate = kwargs.get('sampling_rate')\n    if rate:\n        assert len(rate) == rank - 2, \"expected 'sampling_rate' of length {}, found {}\".format(rank - 2, rate)\n        assert all(r > 0 for r in rate), \"'sampling_rate' must be positive, found {}\".format(rate)\n\n    return [rois[0], input[1]] + output_size\n\n\ndef quantize_shape(input, *args, **kwargs):\n    for arg in args:\n        assert _broadcastable(arg, input), \\\n            \"'min/max' shape {} cannot be broadcast to 'input' shape {}\".format(arg, input)\n\n    bits = kwargs.get('bits')\n    if bits is not None:\n        assert bits > 0, \"'bits' must be positive, found {}\".format(bits)\n\n    return input\n\n\ndef update_shape(variable, value):\n    assert value == variable, \"shape of update value {} does not match shape of variable {}\".format(value, variable)\n    return variable\n\n\ndef copy_n_shape(value, times):\n    assert times > 0, \"'times' must be positive, found {}\".format(times)\n    return [value] * times\n\n\ndef add_n_shape(values):\n    assert len(values) != 0, \"values must be non-empty\"\n\n    shape = values[0]\n    assert all(value == shape for value in values), 
\"shapes of values must be identical, found {}\".format(values)\n\n    return shape\n\n\ndef _get_shape(graph, value):\n    if isinstance(value, nnef.Identifier):\n        return graph.tensors[value].shape\n    elif isinstance(value, np.ndarray):\n        return list(value.shape)\n    elif isinstance(value, list):\n        return [_get_shape(graph, v) for v in value]\n    else:\n        return []\n\n\ndef _set_shape(graph, value, shape):\n    if isinstance(value, nnef.Identifier):\n        tensor = graph.tensors[value]\n        graph.tensors[value] = nnef.Tensor(tensor.name, tensor.dtype, shape, tensor.data, tensor.quantization)\n    elif isinstance(value, list):\n        for v, s in zip(value, shape):\n            _set_shape(graph, v, s)\n\n\ndef infer_shapes(graph, external_shapes={}, custom_shapes={}):\n    # type: (nnef.Graph, dict, dict)->None\n    for op in graph.operations:\n        func = _StandardShapeFuncs.get(op.name)\n        if func is None:\n            func = custom_shapes.get(op.name)\n        if func is None:\n            raise nnef.Error(\"shape inference function is not defined for operation '{}'\".format(op.name))\n\n        if op.name == 'external':\n            id = op.outputs['output']\n            override = external_shapes.get(id)\n            if override is not None:\n                override = list(override)\n                original = op.attribs['shape']\n                assert len(override) == len(original), \\\n                    \"overridden external shape rank ({}) does not match original rank ({})\".format(len(override), len(original))\n                _set_shape(graph, id, override)\n                continue\n\n        input_shapes = [_get_shape(graph, input) for input in op.inputs.values()]\n\n        try:\n            output_shapes = func(*input_shapes, **op.attribs)\n            if not isinstance(output_shapes, tuple):\n                output_shapes = (output_shapes,)\n\n            outputs = op.outputs.values()\n
            assert len(outputs) == len(output_shapes), \\\n                \"number of shapes ({}) does not match number of outputs ({})\".format(len(output_shapes), len(outputs))\n\n            for output, shape in zip(outputs, output_shapes):\n                if isinstance(output, list):\n                    assert isinstance(shape, list), \"expected list of shapes\"\n                    assert len(output) == len(shape), \\\n                        \"number of shapes ({}) does not match number of outputs ({})\".format(len(shape), len(output))\n                _set_shape(graph, output, shape)\n\n        except AssertionError as e:\n            raise nnef.Error(\"while inferring shape of tensor(s) '{}' (operation '{}'): {}\".\n                             format(', '.join(op.outputs.values()), op.name, e))\n\n    for tensor in graph.tensors.values():\n        if tensor.quantization:\n            for key, value in tensor.quantization.items():\n                if isinstance(value, np.ndarray):\n                    assert _broadcastable(value.shape, tensor.shape)\n\n\ndef _infer_op_shapes(op_name, attribs, input_shapes, output_counts, custom_shapes={}):\n    func = _StandardShapeFuncs.get(op_name)\n    if func is None:\n        func = custom_shapes.get(op_name)\n    if func is None:\n        raise nnef.Error(\"shape inference function is not defined for operation '{}'\".format(op_name))\n\n    try:\n        output_shapes = func(*input_shapes, **attribs)\n        if not isinstance(output_shapes, tuple):\n            output_shapes = (output_shapes,)\n\n        assert len(output_counts) == len(output_shapes), \\\n            \"number of shapes ({}) does not match number of outputs ({})\".format(len(output_shapes), len(output_counts))\n\n        for count, shape in zip(output_counts, output_shapes):\n            if isinstance(count, list):\n                assert isinstance(shape, list), \"expected list of shapes\"\n                assert count == len(shape), \\\n
                    \"number of shapes ({}) does not match number of outputs ({})\".format(len(shape), count)\n\n        return output_shapes\n    except AssertionError as e:\n        raise nnef.Error(\"while inferring output shape of operation '{}': {}\".format(op_name, e))\n\n\n_StandardShapeFuncs = {\n    'external': nullary_shape,\n    'variable': nullary_shape,\n    'constant': nullary_shape,\n    'copy': unary_shape,\n    'neg': unary_shape,\n    'not': unary_shape,\n    'rcp': unary_shape,\n    'exp': unary_shape,\n    'log': unary_shape,\n    'sin': unary_shape,\n    'cos': unary_shape,\n    'tan': unary_shape,\n    'asin': unary_shape,\n    'acos': unary_shape,\n    'atan': unary_shape,\n    'sinh': unary_shape,\n    'cosh': unary_shape,\n    'tanh': unary_shape,\n    'asinh': unary_shape,\n    'acosh': unary_shape,\n    'atanh': unary_shape,\n    'abs': unary_shape,\n    'sign': unary_shape,\n    'floor': unary_shape,\n    'ceil': unary_shape,\n    'round': unary_shape,\n    'sqr': unary_shape,\n    'sqrt': unary_shape,\n    'rsqr': unary_shape,\n    'rsqrt': unary_shape,\n    'log2': unary_shape,\n    'sigmoid': unary_shape,\n    'relu': unary_shape,\n    'elu': unary_shape,\n    'selu': unary_shape,\n    'gelu': unary_shape,\n    'silu': unary_shape,\n    'softabs': unary_shape,\n    'softplus': unary_shape,\n    'leaky_relu': unary_shape,\n    'prelu': asymmetric_binary_shape,\n    'add': binary_shape,\n    'sub': binary_shape,\n    'mul': binary_shape,\n    'div': binary_shape,\n    'pow': binary_shape,\n    'min': binary_shape,\n    'max': binary_shape,\n    'lt': binary_shape,\n    'le': binary_shape,\n    'gt': binary_shape,\n    'ge': binary_shape,\n    'eq': binary_shape,\n    'ne': binary_shape,\n    'and': binary_shape,\n    'or': binary_shape,\n    'select': ternary_shape,\n    'clamp': ternary_shape,\n    'conv': conv_shape,\n    'deconv': deconv_shape,\n    'separable_conv': separable_conv_shape,\n    'separable_deconv': separable_deconv_shape,\n    'box': 
pool_shape,\n    'debox': unpool_shape,\n    'sample': sample_shape,\n    'desample': desample_shape,\n    'avg_pool': pool_shape,\n    'max_pool': pool_shape,\n    'argmax_pool': pool_shape,\n    'rms_pool': pool_shape,\n    'max_pool_with_index': pool_with_index_shape,\n    'max_unpool': unpool_shape,\n    'avg_unpool': unpool_shape,\n    'sum_reduce': reduce_shape,\n    'min_reduce': reduce_shape,\n    'max_reduce': reduce_shape,\n    'mean_reduce': reduce_shape,\n    'argmin_reduce': reduce_shape,\n    'argmax_reduce': reduce_shape,\n    'any_reduce': reduce_shape,\n    'all_reduce': reduce_shape,\n    'local_response_normalization': normalize_shape,\n    'local_mean_normalization': normalize_shape,\n    'local_variance_normalization': normalize_shape,\n    'local_contrast_normalization': normalize_shape,\n    'l1_normalization': normalize_shape,\n    'l2_normalization': normalize_shape,\n    'moments': moments_shape,\n    'batch_normalization': batchnorm_shape,\n    'nearest_downsample': downsample_shape,\n    'area_downsample': downsample_shape,\n    'nearest_upsample': upsample_shape,\n    'multilinear_upsample': upsample_shape,\n    'reshape': reshape_shape,\n    'transpose': transpose_shape,\n    'squeeze': squeeze_shape,\n    'unsqueeze': unsqueeze_shape,\n    'stack': stack_shape,\n    'unstack': unstack_shape,\n    'split': split_shape,\n    'concat': concat_shape,\n    'slice': slice_shape,\n    'tile': tile_shape,\n    'pad': pad_shape,\n    'cast': unary_shape,\n    'gather': gather_shape,\n    'matmul': matmul_shape,\n    'linear': linear_shape,\n    'softmax': softmax_shape,\n    'linear_quantize': quantize_shape,\n    'logarithmic_quantize': quantize_shape,\n    'min_max_linear_quantize': quantize_shape,\n    'zero_point_linear_quantize': quantize_shape,\n    'avg_roi_pool': roi_shape,\n    'max_roi_pool': roi_shape,\n    'avg_roi_align': roi_shape,\n    'max_roi_align': roi_shape,\n    'roi_resample': roi_shape,\n    'update': update_shape,\n    
'copy_n': copy_n_shape,\n    'add_n': add_n_shape,\n}\n"
  },
  {
    "path": "nnef-pyproject/nnef/validate.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nnef\nimport argparse\n\n\nif __name__ == '__main__':\n\n    ap = argparse.ArgumentParser()\n    ap.add_argument('path', type=str, help='path to the model to validate')\n    ap.add_argument('--stdlib', type=str, help='file name of alternate standard operation definitions '\n                                               '(defaults to all-primitive definitions)', default='')\n    ap.add_argument('--lower', type=str, help='comma separated list of operations to lower (if defined as compound)',\n                    default='')\n    ap.add_argument('--shapes', action=\"store_true\", help='perform shape validation as well')\n    ap.add_argument('--input-shape', type=str, help='override input shapes contained in the model; '\n                                                    'must be a Python list (applied to all inputs) '\n                                                    'or dict expression (applied by input name)', default=None)\n    args = ap.parse_args()\n\n    stdlib = ''\n    if args.stdlib:\n        try:\n            with open(args.stdlib) as file:\n                stdlib = file.read()\n        except FileNotFoundError as e:\n            print('Could not open file: ' + args.stdlib)\n            exit(-1)\n\n    try:\n        graph = nnef.load_graph(args.path, stdlib=stdlib, lowered=args.lower.split(','))\n    except nnef.Error as err:\n      
  print(err)\n        exit(-1)\n\n    if args.input_shape:\n        input_shape = eval(args.input_shape)\n        if not isinstance(input_shape, (list, dict)):\n            print(\"input-shape must be Python list or dict expression\")\n            exit(-1)\n\n        for op in graph.operations:\n            if op.name == 'external':\n                if isinstance(input_shape, dict):\n                    name = op.outputs['output']\n                    if name in input_shape:\n                        op.attribs['shape'] = input_shape[name]\n                else:\n                    op.attribs['shape'] = input_shape\n\n    if args.shapes:\n        try:\n            nnef.infer_shapes(graph)\n        except nnef.Error as err:\n            print('Shape error: ' + str(err))\n            exit(-1)\n\n    print(nnef.format_graph(graph.name, graph.inputs, graph.outputs, graph.operations, graph.tensors,\n                            annotate_shapes=args.shapes))\n    print('Validation succeeded')\n"
  },
  {
    "path": "nnef-pyproject/package_info.md",
    "content": "NNEF Parser Project\n===================\n\nThis package contains a sample NNEF parser, using a C++ backend.\n\n\nUsing the module\n----------------\n\nIn the Python interpreter, type\n\n    import nnef\n    graph = nnef.load_graph('example.nnef')\n\nIf the path (`example.nnef`) points to a folder (one containing a `graph.nnef` file), the whole model, including weights, is loaded.\nIf it points to a file, it is interpreted as the graph description only and is loaded without weights.\n\nAlternatively, the methods\n\n    graph = nnef.parse_file(\"graph.nnef\", quantization = \"graph.quant\")\n\nand\n\n    graph = nnef.parse_string(\"version 1.0; graph ...\", quantization = \"...\")\n\ncan be used to parse a graph and optional quantization info from files or strings.\n\nAfter invocation, `graph` is a data structure (named tuple) containing the name, tensors, operations, inputs and outputs of the graph.\nIf shape information is also required, it can be obtained by calling `nnef.infer_shapes(graph)`, which updates the shape information on the graph structure in place.\n"
  },
  {
    "path": "nnef-pyproject/pyproject.toml",
    "content": "[project]\nname = \"nnef\"\nversion = \"1.0.10\"\ndescription = \"A package for parsing NNEF files\"\nrequires-python = \">=3.7\"\n\nclassifiers = [\n    'Development Status :: 4 - Beta',\n    'Intended Audience :: Developers',\n    'License :: OSI Approved :: Apache Software License',\n    'Programming Language :: Python :: 3',\n]\ndynamic = [\"readme\"]\nkeywords = [\"nnef\"]\n\nauthors = [\n    { name = \"Khronos Group\", email = \"nnef@lists.khronos.org\" },\n]\nmaintainers = [{ name = \"Viktor Gyenes\", email = \"viktor.gyenes@aimotive.com\" }]\n\ndependencies = [\"numpy\"]\n\n[build-system]\nrequires = [\"setuptools\", \"wheel\", \"numpy\", \"Cython\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n[project.urls]\n\"Homepage\" = \"https://www.khronos.org/nnef\"\n\"Repository\" = \"https://github.com/KhronosGroup/NNEF-Tools\"\n\n[tool.setuptools.dynamic]\nreadme = { file = [\"package_info.md\"], content-type = \"text/markdown\" }\n\n[tool.setuptools.package-data]\n\"nnef.cpp\" = [\"**/*\"]\n\n[tool.cibuildwheel]\n# Skip PyPy wheels\nskip = \"pp*\"\n\ntest-command = \"python {package}/tests/test.py\"\n"
  },
  {
    "path": "nnef-pyproject/setup.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom setuptools import Extension, setup\nimport numpy\nfrom os import name as os_name\n\nsetup(\n    ext_modules=[\n        Extension(\n            \"_nnef\",\n            sources=[\"nnef/nnef.cpp\", \"nnef/cpp/src/nnef.cpp\"],\n            include_dirs=[\"nnef/cpp/include\", numpy.get_include()],\n            language=\"c++\",\n            extra_compile_args=[\"-std=c++11\"] if os_name != \"nt\" else [],\n        )\n    ],\n)\n"
  },
  {
    "path": "nnef-pyproject/stdlib.nnef",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n# tensor declaration operations\n\nfragment external<? = scalar>( shape: integer[] ) -> ( output: tensor<?> );\nfragment variable<? = scalar>( shape: integer[], label: string ) -> ( output: tensor<?> );\nfragment constant<? = scalar>( shape: integer[], value: ?[] ) ->  ( output: tensor<?> );\n\nfragment update<?>( variable: tensor<?>, value: tensor<?> ) -> ( result: tensor<?> );\n\n\n# tensor shape operations\n\nfragment reshape<?>( input: tensor<?>, shape: integer[], axis_start: integer = 0, axis_count: integer = -1 ) -> ( output: tensor<?> );\nfragment transpose<?>( input: tensor<?>, axes: integer[] ) -> ( output: tensor<?> );\nfragment concat<?>( values: tensor<?>[], axis: integer ) -> ( value: tensor<?> );\nfragment split<?>( value: tensor<?>, axis: integer, ratios: integer[] ) -> ( values: tensor<?>[] );\nfragment slice<?>( input: tensor<?>, axes: integer[], begin: integer[], end: integer[], stride: integer[] = [] ) -> ( output: tensor<?> );\nfragment squeeze<?>( input: tensor<?>, axes: integer[] ) -> ( output: tensor<?> );\nfragment unsqueeze<?>( input: tensor<?>, axes: integer[] ) -> ( output: tensor<?> );\nfragment stack<?>( values: tensor<?>[], axis: integer ) -> ( value: tensor<?> );\nfragment unstack<?>( value: tensor<?>, axis: integer ) -> ( values: tensor<?>[] );\nfragment tile<?>( input: tensor<?>, repeats: integer[] ) -> ( output: 
tensor<?> );\nfragment pad( input: tensor<scalar>, padding: (integer, integer)[], border: string = 'constant', value: scalar = 0.0 ) -> ( output: tensor<scalar> );\nfragment gather<?>( input: tensor<?>, indices: tensor<integer>, axis: integer = 0 ) -> ( output: tensor<?> );\nfragment cast<?>( input: tensor<> ) -> ( output: tensor<?> );\n\n\n# element-wise arithmetic operations\n\nfragment add( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\nfragment sub( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\nfragment mul( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\nfragment div( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\nfragment pow( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> );\n\nfragment exp( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment log( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment sin( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment cos( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment tan( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment sinh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment cosh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment tanh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment asin( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment acos( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment atan( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment asinh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment acosh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment atanh( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment abs( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment sign( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment rcp( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment neg( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment copy<?>( x: tensor<?> ) -> ( y: tensor<?> );\n\n# element-wise comparison 
operations\n\nfragment lt( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\nfragment gt( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\nfragment le( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\nfragment ge( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\nfragment eq( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\nfragment ne( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<logical> );\n\n# element-wise logical operations\n\nfragment and( x: tensor<logical>, y: tensor<logical> ) -> ( z: tensor<logical> );\nfragment or( x: tensor<logical>, y: tensor<logical> ) -> ( z: tensor<logical> );\nfragment not( x: tensor<logical> ) -> ( y: tensor<logical> );\n\n# element-wise rounding operations\n\nfragment floor( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment ceil( x: tensor<scalar> ) -> ( y: tensor<scalar> );\nfragment round( x: tensor<scalar> ) -> ( y: tensor<scalar> );\n\n# element-wise select operation\n\nfragment select<?>( condition: tensor<logical>, true_value: tensor<?>, false_value: tensor<?> ) -> ( output: tensor<?> );\n\n# simplifier operations\n\nfragment sqr( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = x ^ 2.0;\n}\n\nfragment sqrt( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = x ^ 0.5;\n}\n\nfragment rsqr( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = x ^ -2.0;\n}\n\nfragment rsqrt( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = x ^ -0.5;\n}\n\nfragment log2( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = log(x) / log(2.0);\n}\n\nfragment min( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> )\n{\n    z = select(x < y, x, y);\n}\n\nfragment max( x: tensor<scalar>, y: tensor<scalar> ) -> ( z: tensor<scalar> )\n{\n    z = select(x > y, x, y);\n}\n\nfragment clamp( x: tensor<scalar>, a: tensor<scalar>, b: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = max(min(x, b), a);\n}\n\n\n# 
matrix multiplication\n\nfragment matmul( A: tensor<scalar>, B: tensor<scalar>, transposeA: logical = false, transposeB: logical = false ) -> ( C: tensor<scalar> );\n\n\n# sliding-window operations\n\nfragment conv(\n    input: tensor<scalar>,\n    filter: tensor<scalar>,\n    bias: tensor<scalar> = 0.0,\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    groups: integer = 1 )\n-> ( output: tensor<scalar> );\n\nfragment deconv(\n    input: tensor<scalar>,\n    filter: tensor<scalar>,\n    bias: tensor<scalar> = 0.0,\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    output_shape: integer[] = [],\n    groups: integer = 1 )\n-> ( output: tensor<scalar> );\n\n\nfragment box(\n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    normalize: logical = false )\n-> ( output: tensor<scalar> );\n\nfragment debox(\n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    output_shape: integer[] = [],\n    normalize: logical = false )\n-> ( output: tensor<scalar> );\n\n\nfragment argmax_pool(\n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [] )\n-> ( index: tensor<integer> );\n\n\nfragment sample(\n    input: tensor<scalar>,\n    index: tensor<integer>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [] )\n-> ( output: tensor<scalar> );\n\nfragment desample(\n    input: tensor<scalar>,\n    index: 
tensor<integer>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    output_shape: integer[] = [] )\n-> ( output: tensor<scalar> );\n\n\n# up/down-sampling operations\n\nfragment nearest_downsample( input: tensor<scalar>, factor: integer[] ) -> ( output: tensor<scalar> )\n{\n    dims = 2 + length_of(factor);\n    output = box(input, size = [1] * dims, stride = [1,1] + factor, padding = [(0,0)] * dims);\n}\n\nfragment area_downsample( input: tensor<scalar>, factor: integer[] ) -> ( output: tensor<scalar> )\n{\n    dims = 2 + length_of(factor);\n    output = box(input, size = [1,1] + factor, stride = [1,1] + factor, padding = [(0,0)] * dims, normalize = true);\n}\n\nfragment nearest_upsample( input: tensor<scalar>, factor: integer[] ) -> ( output: tensor<scalar> )\n{\n    dims = 2 + length_of(factor);\n    output = debox(input, size = [1,1] + factor, stride = [1,1] + factor, padding = [(0,0)] * dims);\n}\n\nfragment multilinear_upsample( input: tensor<scalar>, factor: integer[], method: string = 'symmetric', border: string = 'replicate' )\n-> ( output: tensor<scalar> );\n\n\n# reduce operations\n\nfragment sum_reduce( input: tensor<scalar>, axes: integer[], normalize: logical = false ) -> ( output: tensor<scalar> );\nfragment max_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<scalar> );\nfragment min_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<scalar> );\nfragment argmax_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<integer> );\nfragment argmin_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: tensor<integer> );\nfragment any_reduce( input: tensor<logical>, axes: integer[] ) -> ( output: tensor<logical> );\nfragment all_reduce( input: tensor<logical>, axes: integer[] ) -> ( output: tensor<logical> );\n\nfragment mean_reduce( input: tensor<scalar>, axes: integer[] ) -> ( output: 
tensor<scalar> )\n{\n    output = sum_reduce(input, axes = axes, normalize = true);\n}\n\nfragment moments( input: tensor<scalar>, axes: integer[] ) -> ( mean: tensor<scalar>, variance: tensor<scalar> )\n{\n    mean = mean_reduce(input, axes = axes);\n    variance = mean_reduce(sqr(input - mean), axes = axes);\n}\n\n\n# activation functions\n\nfragment relu( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = max(x, 0.0);\n}\n\nfragment sigmoid( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = 1.0 / (1.0 + exp(-x));\n}\n\nfragment softabs( x: tensor<scalar>, epsilon: scalar ) -> ( y: tensor<scalar> )\n{\n    y = sqrt(sqr(x) + epsilon);\n}\n\nfragment softmax( x: tensor<scalar>, axes: integer[] = [1] ) -> ( y: tensor<scalar> )\n{\n    m = max_reduce(x, axes = axes);\n    e = exp(x - m);\n    y = e / sum_reduce(e, axes = axes);\n}\n\nfragment softplus( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = log(exp(x) + 1.0);\n}\n\nfragment elu( x: tensor<scalar>, alpha: scalar = 1.0 ) -> ( y: tensor<scalar> )\n{\n    y = select(x < 0.0, alpha * (exp(x) - 1.0), x);\n}\n\nfragment selu( x: tensor<scalar>, alpha: scalar = 1.67326319, lambda: scalar = 1.05070102 ) -> ( y: tensor<scalar> )\n{\n    y = lambda * select(x < 0.0, alpha * (exp(x) - 1.0), x);\n}\n\nfragment gelu( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    # the exact definition of gelu is x * Phi(x) where Phi(x) is the\n    # CDF of the standard normal distribution, which can be approximated\n    # for example by sigmoid(1.702 * x)\n\n    y = x * sigmoid(1.702 * x);\n}\n\nfragment silu( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = x * sigmoid(x);\n}\n\nfragment prelu( x: tensor<scalar>, alpha: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = select(x < 0.0, alpha * x, x);\n}\n\nfragment leaky_relu( x: tensor<scalar>, alpha: scalar ) -> ( y: tensor<scalar> )\n{\n    y = prelu(x, alpha = alpha);\n}\n\n\n# pooling operations\n\nfragment max_pool_with_index(\n    input: 
tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [] )\n-> ( output: tensor<scalar>, index: tensor<integer> )\n{\n    index = argmax_pool(input, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n    output = sample(input, index, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n}\n\nfragment max_pool(\n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [] )\n-> ( output: tensor<scalar> )\n{\n    output, index = max_pool_with_index(input, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n}\n\nfragment avg_pool(\n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [] )\n-> ( output: tensor<scalar> )\n{\n    output = box(input, size = size, border = border, padding = padding, stride = stride, dilation = dilation, normalize = true);\n}\n\nfragment rms_pool(\n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [] )\n-> ( output: tensor<scalar> )\n{\n    output = sqrt(avg_pool(sqr(input), size = size, border = border, padding = padding, stride = stride, dilation = dilation));\n}\n\n\n# linear operations\n\nfragment linear(\n    input: tensor<scalar>,\n    filter: tensor<scalar>,\n    bias: tensor<scalar> = 0.0 )\n-> ( output: tensor<scalar> )\n{\n    output = matmul(input, filter, transposeB = true) + bias;\n}\n\nfragment separable_conv(\n    input: tensor<scalar>,\n    plane_filter: tensor<scalar>,\n    point_filter: tensor<scalar>,\n    bias: tensor<scalar> = 
0.0,\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    groups: integer = 1 )\n-> ( output: tensor<scalar> )\n{\n    filtered = conv(input, plane_filter, border = border, padding = padding,\n                    stride = stride, dilation = dilation, groups = 0);\n    output = conv(filtered, point_filter, bias, groups = groups);\n}\n\nfragment separable_deconv(\n    input: tensor<scalar>,\n    plane_filter: tensor<scalar>,\n    point_filter: tensor<scalar>,\n    bias: tensor<scalar> = 0.0,\n    border: string = 'constant',\n    padding: (integer,integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    output_shape: integer[] = [],\n    groups: integer = 1 )\n-> ( output: tensor<scalar> )\n{\n    filtered = deconv(input, point_filter, groups = groups);\n    output = deconv(filtered, plane_filter, bias, border = border, padding = padding,\n                    stride = stride, dilation = dilation, output_shape = output_shape, groups = 0);\n}\n\n\n# normalization operations\n\nfragment local_response_normalization(\n    input: tensor<scalar>,\n    size: integer[],\n    alpha: scalar = 1.0,\n    beta: scalar = 0.5,\n    bias: scalar = 1.0 )\n-> ( output: tensor<scalar> )\n{\n    sigma = bias + alpha * box(sqr(input), size = size, normalize = true);\n    output = input / (sigma ^ beta);\n}\n\nfragment local_mean_normalization( input: tensor<scalar>, size: integer[] ) -> ( output: tensor<scalar> )\n{\n    mean = box(input, size = size, normalize = true);\n    output = sub(input, mean);\n}\n\nfragment local_variance_normalization( input: tensor<scalar>, size: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n{\n    sigma = sqrt(box(sqr(input), size = size, normalize = true));\n    output = input / max(sigma + bias, epsilon);\n}\n\nfragment local_contrast_normalization( input: tensor<scalar>, size: integer[], bias: scalar = 0.0, 
epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n{\n    centered = local_mean_normalization(input, size = size);\n    output = local_variance_normalization(centered, size = size, bias = bias, epsilon = epsilon);\n}\n\nfragment l1_normalization( input: tensor<scalar>, axes: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n{\n    sigma = sum_reduce(abs(input), axes = axes);\n    output = input / max(sigma + bias, epsilon);\n}\n\nfragment l2_normalization( input: tensor<scalar>, axes: integer[], bias: scalar = 0.0, epsilon: scalar = 0.0 ) -> ( output: tensor<scalar> )\n{\n    sigma = sqrt(sum_reduce(sqr(input), axes = axes));\n    output = input / max(sigma + bias, epsilon);\n}\n\nfragment batch_normalization( input: tensor<scalar>, mean: tensor<scalar>, variance: tensor<scalar>, offset: tensor<scalar>, scale: tensor<scalar>, epsilon: scalar )\n-> ( output: tensor<scalar> )\n{\n    output = offset + scale * (input - mean) / sqrt(variance + epsilon);\n}\n\n\n# roi operations\n\nfragment avg_roi_pool(\n    input: tensor<scalar>,\n    rois: tensor<scalar>,\n    batch_index: tensor<integer>,\n    output_size: integer[] )\n-> ( output: tensor<scalar> );\n\nfragment max_roi_pool(\n    input: tensor<scalar>,\n    rois: tensor<scalar>,\n    batch_index: tensor<integer>,\n    output_size: integer[] )\n-> ( output: tensor<scalar> );\n\nfragment roi_resample(\n    input: tensor<scalar>,\n    rois: tensor<scalar>,\n    batch_index: tensor<integer>,\n    output_size: integer[],\n    method: string = 'symmetric' )\n-> ( output: tensor<scalar> );\n\nfragment avg_roi_align(\n    input: tensor<scalar>,\n    rois: tensor<scalar>,\n    batch_index: tensor<integer>,\n    output_size: integer[],\n    sampling_rate: integer[],\n    resize_method: string = 'symmetric' )\n-> ( output: tensor<scalar> )\n{\n    size = [for i in range_of(output_size) yield output_size[i] * sampling_rate[i]];\n    resized = roi_resample(input, rois, batch_index, 
output_size = size,\n                         method = resize_method);\n    output = avg_pool(resized, size = sampling_rate, stride = sampling_rate);\n}\n\nfragment max_roi_align(\n    input: tensor<scalar>,\n    rois: tensor<scalar>,\n    batch_index: tensor<integer>,\n    output_size: integer[],\n    sampling_rate: integer[],\n    resize_method: string = 'symmetric' )\n-> ( output: tensor<scalar> )\n{\n    size = [for i in range_of(output_size) yield output_size[i] * sampling_rate[i]];\n    resized = roi_resample(input, rois, batch_index, output_size = size,\n                         method = resize_method);\n    output = max_pool(resized, size = sampling_rate, stride = sampling_rate);\n}\n\n\n# quantization operations\n\nfragment min_max_linear_quantize(\n    x: tensor<scalar>,\n    min: tensor<scalar>,\n    max: tensor<scalar>,\n    bits: integer,\n    signed: logical,\n    symmetric: logical )\n-> ( y: tensor<scalar> )\n{\n    r = scalar(2 ^ bits - 1 - integer(signed && symmetric));\n    z = clamp(x, min, max);\n    p = scalar(2 ^ (bits - 1) - integer(symmetric) if signed else 0);\n    q = round((z - min) / (max - min) * r) - p;\n    y = (q + p) / r * (max - min) + min;\n}\n\nfragment zero_point_linear_quantize(\n    x: tensor<scalar>,\n    zero_point: tensor<integer>,\n    scale: tensor<scalar>,\n    bits: integer,\n    signed: logical,\n    symmetric: logical )\n-> ( y: tensor<scalar> )\n{\n    z = cast<scalar>(zero_point);\n    s = round(x / scale) + z;\n    r = scalar(2 ^ (bits - 1) - 1 if signed else 2 ^ bits - 1);\n    q = clamp(s, 0.0 if !signed else -r if symmetric else -r - 1.0, r);\n    y = (q - z) * scale;\n}\n\nfragment linear_quantize(\n    x: tensor<scalar>,\n    min: tensor<scalar>,\n    max: tensor<scalar>,\n    bits: integer )\n-> ( y: tensor<scalar> )\n{\n    y = min_max_linear_quantize(x, min = min, max = max, bits = bits,\n                                signed = false, symmetric = false);\n}\n\nfragment logarithmic_quantize(\n    x: 
tensor<scalar>,\n    max: tensor<scalar>,\n    bits: integer )\n-> ( y: tensor<scalar> )\n{\n    m = ceil(log2(max));\n    r = scalar(2 ^ bits - 1);\n    q = round(clamp(log2(abs(x)), m - r, m));\n    y = sign(x) * 2.0 ^ q;\n}\n\n\n# misc operations\n\nfragment copy_n<?>( x: tensor<?>, times: integer ) -> ( y: tensor<?>[] )\n{\n    y = [x] * times;\n}\n\nfragment add_n( x: tensor<scalar>[] ) -> ( y: tensor<scalar> )\n{\n    y = x[0] + add_n(x[1:]) if length_of(x) > 0 else constant(shape = [1], value = [0.0]);\n}\n"
  },
  {
    "path": "nnef-pyproject/tests/test.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nimport unittest\nimport nnef\n\n\nclass ParserTest(unittest.TestCase):\n\n    def test_empty_document(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"\")\n\n    def test_empty_body(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"version 1.0; graph G( input ) -> ( output ) {}\")\n\n    def test_minimal(self):\n        nnef.parse_string(\"\"\"\n            version 1.0;\n            graph G( input ) -> ( output )\n            {\n                input = external(shape = []);\n                output = copy(input);\n            }\n            \"\"\")\n\n    def test_empty_input_not_declared(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"\"\"\n                version 1.0;\n                graph G( input ) -> ( output )\n                {\n                    output = copy(input);\n                }\n                \"\"\")\n\n    def test_input_not_external(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"\"\"\n                version 1.0;\n                graph G( input ) -> ( output )\n                {\n                    input = constant(shape = [], value = [1.0]);\n                    output = copy(input);\n                }\n                
\"\"\")\n\n    def test_external_not_input(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"\"\"\n                version 1.0;\n                graph G( input ) -> ( output )\n                {\n                    input = external(shape = []);\n                    other = external(shape = []);\n                    output = add(input, other);\n                }\n                \"\"\")\n\n    def test_empty_output_not_declared(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"\"\"\n                version 1.0;\n                graph G( input ) -> ( output )\n                {\n                    input = external(shape = []);\n                }\n                \"\"\")\n\n    def test_variable_update(self):\n        nnef.parse_string(\"\"\"\n            version 1.0;\n            graph G( input ) -> ( output )\n            {\n                input = external(shape = []);\n                var = variable(shape = [], label = 'var');\n                output = update(var, input);\n            }\n            \"\"\")\n\n    def test_non_variable_update(self):\n        with self.assertRaises(nnef.Error):\n            nnef.parse_string(\"\"\"\n                version 1.0;\n                graph G( input ) -> ( output )\n                {\n                    input = external(shape = []);\n                    output = update(input, input);\n                }\n                \"\"\")\n\n    def test_custom_fragment(self):\n        nnef.parse_string(\"\"\"\n            version 1.0;\n            extension KHR_enable_fragment_definitions, KHR_enable_operator_expressions;\n            \n            fragment op( input: tensor<scalar> ) -> ( output: tensor<scalar> )\n            {\n                output = input;\n            }\n            \n            graph G( input ) -> ( output )\n            {\n                input = external(shape = []);\n                output = op(input);\n            }\n            
\"\"\")\n    \n    def test_reshape(self):\n        graph = nnef.parse_string(\"\"\"\n            version 1.0;\n            graph G( input ) -> ( output )\n            {\n                input = external(shape = [1,2,3,4]);\n                output = reshape(input, axis_start = 1, axis_count = 2, shape = [6]);\n            }\n            \"\"\")\n        nnef.infer_shapes(graph)\n\n    # def test_alexnet(self):\n    #     nnef.parse_file(\"../examples/alexnet.txt\")\n    #\n    # def test_googlenet(self):\n    #     nnef.parse_file(\"../examples/googlenet.txt\")\n    #\n    # def test_resnet(self):\n    #     nnef.parse_file(\"../examples/resnet.txt\")\n    #\n    # def test_vggnet(self):\n    #     nnef.parse_file(\"../examples/vgg.txt\")\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "nnef_tools-pyproject/LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/README.md",
    "content": "# NNEF Tools\n\nThis package contains a set of tools for converting and transforming machine learning models.\n\n## Dependencies\n\nThe Python package supports extras for different functionalities:\n\n| Functionality                  | Extra               | Additional packages                          |\n|--------------------------------|---------------------|----------------------------------------------|\n| TensorFlow Protobuf conversion | tensorflow-protobuf | tensorflow                                   |\n| TensorFlow Lite conversion     | tensorflow-lite     | tensorflow, flatbuffers                      |\n| ONNX conversion                | onnx                | protobuf, onnx, onnx-simplifier, onnxruntime |\n| Caffe and Caffe2 conversion    | caffe               | protobuf, torch                              |\n| Visualization of NNEF models   | visualization       | graphviz                                     |\n| Full install                   | full                | _all packages listed above_                  |\n\nInstalling the ONNX and Caffe dependencies, for example:\n```\npip install nnef_tools[onnx,caffe]\n```\n\n## Usage\n\n[Python package usage](package_info.md)\n"
  },
  {
    "path": "nnef_tools-pyproject/custom/composite_export_example.py",
    "content": "import numpy as np\nimport src.nnef_tools.io.tf.graphdef as graphdef\ntry:\n    import tensorflow.compat.v1 as tf\n    tf.disable_v2_behavior()\nexcept ImportError:\n    import tensorflow as tf\n\n\n# define composite operators as decorated Python functions\n\n@graphdef.composite_function\ndef lp_norm(x, p=2, axis=None, keepdims=False, name=None):\n    return tf.pow(tf.reduce_sum(tf.pow(tf.abs(x), p), axis=axis, keepdims=keepdims), 1 / p)\n\n\n@graphdef.composite_function\ndef sum_pool2d(input, ksize, strides, padding, data_format='NHWC', name=None):\n    pooled = tf.nn.avg_pool2d(input, ksize=ksize, strides=strides, padding=padding, data_format=data_format)\n    return pooled * float(np.prod(ksize))\n\n\n# reset tracking of composite functions\n\ngraphdef.reset_composites()\n\n\n# define the TF graph\n\nx = tf.placeholder(shape=(None, 32, 32, 3), dtype=tf.float32, name='input')\nw = tf.get_variable('w', shape=(5, 5, 3, 16), dtype=tf.float32, initializer=tf.zeros_initializer)\nx = tf.nn.conv2d(x, w, strides=1, padding='SAME')\nx = sum_pool2d(x, ksize=(1, 3, 3, 1), strides=1, padding='SAME')\nx = lp_norm(x, axis=3, keepdims=True)\n\n\n# export the graph to protobuf\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    graphdef.save_default_graph('test.pb', session=sess, outputs={x: 'output'},\n                                input_shapes={'input': (1, 32, 32, 3)})\n"
  },
  {
    "path": "nnef_tools-pyproject/custom/custom_operators_example.py",
    "content": "import torch\n\n\n# define how the PyTorch interpreter should execute the op\n\ndef shuffle(input, groups):\n    shape = list(input.shape)\n    reshaped = input.reshape([shape[0], groups, shape[1] // groups] + shape[2:])    # integer division: reshape dims must be ints\n    transposed = reshaped.permute(0, 2, 1, *list(range(3, len(shape) + 1)))\n    return transposed.reshape(shape)\n\n\n# mapping from op names to executor functions\n\nCUSTOM_OPERATORS = {\n    'shuffle': shuffle,\n}\n"
  },
  {
    "path": "nnef_tools-pyproject/custom/custom_optimizers_example.py",
    "content": "# Define how a sequence of ops is replaced by a new sequence.\n# First test whether the matched sequence of ops should really be replaced; return False if not.\n# If yes, create new Tensors and Operations in the graph with the Tensor() and Operation() constructors.\n# DO NOT modify the graph before all checks have passed!\n\ndef replace_shuffle(reshape1, transpose, reshape2):\n    if reshape2.output.shape != reshape1.input.shape:\n        return False\n\n    if len(reshape1.output.shape) != len(reshape1.input.shape) + 1 or \\\n            reshape1.output.shape[0] != reshape1.input.shape[0] or \\\n            reshape1.output.shape[3:] != reshape1.input.shape[2:]:\n        return False\n\n    axes = transpose.attribs['axes']\n    if axes[:3] != [0, 2, 1] or axes[3:] != list(range(3, len(axes))):\n        return False\n\n    groups = reshape1.output.shape[1]\n\n    Operation(reshape1.graph, type='shuffle', attribs={'groups': groups},\n              inputs=reshape1.input, outputs=reshape2.output, custom=True)\n\n\n# List the sequences of op types that should be matched and replaced if the replacer function does not return False.\n# An item in the sequence may also be a set, in which case any of its members counts as a match.\n# Use a tuple for the key sequence, because a list is not hashable.\n\nCUSTOM_OPTIMIZERS = {\n    ('reshape', 'transpose', 'reshape'): replace_shuffle,\n}\n"
  },
  {
    "path": "nnef_tools-pyproject/custom/custom_transforms_example.py",
    "content": "from src.nnef_tools import Transform\n\n\n# define mapping from custom op names to converter transforms that maps them in this case to NNEF ops\n\nCUSTOM_TRANSFORMS = {\n    'sum_pool2d':\n        Transform(\n            type='box',\n            inputs=(\n                '!transpose_input(I[0], data_format)',\n            ),\n            outputs=(\n                '!transpose_output(O[0], data_format)',\n            ),\n            attribs={\n                'size': '!nxc_to_ncx(ensure_list(ksize), cond=is_nxc(data_format))',\n                'stride': '!nxc_to_ncx(ensure_list(strides), cond=is_nxc(data_format))',\n                'padding': '!convert_padding(padding, I[0].rank)',\n                'normalize': False,\n            }\n        ),\n    'lp_norm':\n        Transform(\n            type='!\"l1_normalization\" if p == 1 else \"l2_normalization\"',\n            cond='!p == 1 or p == 2',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], ref=I[0])',\n            attribs={\n                'axes': '!ensure_list(transpose_axis_like(axis, ref=I[0]))',\n            }\n        ),\n}\n"
  },
  {
    "path": "nnef_tools-pyproject/custom/onnx_custom_export_example.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom torch.onnx import register_custom_op_symbolic\n\n\ndef aim_affine_grid(g, trans, shape, align_corners):\n    return g.op(\"com.example::affine_grid\", trans, shape, align_corners)\n\n\ndef aim_grid_sample(g, input, grid, mode, padding, align_corners):\n    return g.op(\"com.example::grid_sample\", input, grid, mode, padding, align_corners)\n\n\nregister_custom_op_symbolic('::affine_grid_generator', aim_affine_grid, 1)\nregister_custom_op_symbolic('::grid_sampler', aim_grid_sample, 1)\n\n\nclass AffineTransform(torch.nn.Module):\n\n    def __init__(self, width, height):\n        super(AffineTransform, self).__init__()\n        self.width = width\n        self.height = height\n\n    def forward(self, input, theta):\n        batch = int(input.shape[0])     # int() forces static shape instead of dynamic Shape() op\n        channel = int(input.shape[1])\n        grid = F.affine_grid(size=[batch, channel, self.height, self.width], theta=theta)\n        return F.grid_sample(input, grid)\n\n\nclass Model(torch.nn.Module):\n\n    def __init__(self, grid_size):\n        super(Model, self).__init__()\n        self.conv1 = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(4, 4), stride=(2, 2))\n        self.conv2 = torch.nn.Conv2d(in_channels=16, out_channels=8, kernel_size=(3, 3), stride=(1, 1))\n        self.pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)\n        self.affine = AffineTransform(width=grid_size[0], height=grid_size[1])\n\n    def forward(self, x, t):\n        x = self.conv1(x)\n        x = self.pool(x)\n        x = self.affine(x, t)\n        x = self.conv2(x)\n        return x\n\n\nmodel = Model(grid_size=(100, 100))\nmodel.eval()\n\nx = torch.randn(1, 3, 224, 224, requires_grad=False)\ny = torch.zeros(size=(1, 2, 3))\n\ntorch.onnx.export(model, (x, y), \"test.onnx\", opset_version=10)\n"
  },
  {
    "path": "nnef_tools-pyproject/custom/onnx_custom_transforms_example.py",
    "content": "from src.nnef_tools import Transform\n\n\ndef affine_grid_shape(theta, shape):\n    return shape\n\n\nCUSTOM_SHAPES = {\n    'grid_sample': lambda input, grid: input,\n    'affine_grid': affine_grid_shape,\n}\n\n\nCUSTOM_TRANSFORMS = {\n    'affine_grid':\n        Transform(\n            type='affine_grid',\n            using={\n                'size': '!as_const(I[1])',\n                'align': '!as_const(I[2])',\n            },\n            cond={\n                '!align == 0': 'align_corners must be 0 (false)',\n            },\n            inputs=(\n                '!I[0]',\n            ),\n            outputs=(\n                '!O[0]',\n            ),\n            attribs={\n                'shape': '!size',\n            }\n        ),\n    'grid_sample':\n        Transform(\n            type='grid_sample',\n            using={\n                'mode': '!as_const(I[2])',\n                'padding': '!as_const(I[3])',\n                'align': '!as_const(I[4])',\n            },\n            cond={\n                '!mode == 0': 'mode must be 0 (bilinear)',\n                '!padding == 0': 'padding_mode must be 0 (zeros)',\n                '!align == 0': 'align_corners must be 0 (false)',\n            },\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n            ),\n            outputs=(\n                '!O[0]',\n            ),\n        ),\n}\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .converter import Transform, Converter, ConversionError"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/converter.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom ..model import *\nfrom ..utils import types\nimport numpy as np\nimport functools\nimport inspect\nimport math\nimport copy\nimport six\nimport re\n\n\nclass Transform:\n\n    def __init__(self, type, name=None, inputs=None, outputs=None, attribs=None,\n                 defaults=None, using=None, cond=None, custom=False):\n        self.type = type\n        self.name = name or '!_name_'\n        self.inputs = inputs or ()\n        self.outputs = outputs or ()\n        self.attribs = attribs or {}\n        self.defaults = defaults\n        self.using = using or {}\n        self.cond = cond\n        self.custom = custom\n\n    def with_type(self, type):\n        return Transform(type=type, name=self.name, inputs=self.inputs, outputs=self.outputs, attribs=self.attribs,\n                         defaults=self.defaults, using=self.using, cond=self.cond, custom=self.custom)\n\n\nclass ConversionError(Exception):\n\n    def __init__(self, message, details=None):\n        Exception.__init__(self, message)\n        self.details = details\n\n\nclass Converter:\n\n    @staticmethod\n    def find_public_methods(obj):\n        methods = inspect.getmembers(obj, predicate=inspect.ismethod)\n        return {name: func for name, func in methods if not name.startswith('_')}\n\n    @staticmethod\n    
def find_public_functions(obj):\n        methods = inspect.getmembers(obj, predicate=inspect.isfunction)\n        return {name: func for name, func in methods if not name.startswith('_')}\n\n    @staticmethod\n    def decomposed_operations():\n        return []       # return list of decomposed NNEF ops in subclass if converting from NNEF\n\n    @staticmethod\n    def defined_operations():\n        return {}       # return dictionary of NNEF operator (fragment) definitions in subclass if converting to NNEF\n\n    @staticmethod\n    def defined_operation_dependencies():\n        return {}  # return dictionary of NNEF operator (fragment) dependencies in subclass if converting to NNEF\n\n    @staticmethod\n    def defined_shapes():\n        return {}       # return dictionary of shape functions for NNEF fragments defined by the converter\n\n    @staticmethod\n    def unpack_transforms(transforms):\n        unpacked = {}\n        for key, value in six.iteritems(transforms):\n            assert isinstance(value, Transform)\n\n            if isinstance(key, tuple):\n                if isinstance(value, Transform) and isinstance(value.type, tuple):\n                    for key_item, type_item in zip(key, value.type):\n                        value_item = copy.deepcopy(value)\n                        value_item.type = type_item\n                        unpacked[key_item] = value_item\n                else:\n                    for item in key:\n                        unpacked[item] = value\n            else:\n                unpacked[key] = value\n\n        return unpacked\n\n    @staticmethod\n    def merge_transforms(default_transforms, custom_transforms):\n        if custom_transforms is None:\n            return default_transforms\n\n        transforms = dict(default_transforms)\n        transforms.update(custom_transforms)\n        return transforms\n\n    def __init__(self, transforms, functions=None, mirror_unsupported=False, infer_shapes=False, 
custom_shapes=None):\n        self._graph = None\n        self._transforms = transforms\n        self._callables = self.find_public_methods(self)\n        if functions:\n            self._callables.update({name: functools.partial(func, self) for name, func in six.iteritems(functions)})\n        self._mirror_unsupported = mirror_unsupported\n        self._infer_shapes = infer_shapes\n        self._custom_shapes = custom_shapes or {}\n\n    def __call__(self, graph):\n        if not self._infer_shapes:\n            unknown_tensors = [tensor for tensor in graph.tensors if (tensor.shape is None or any(s is None for s in tensor.shape))\n                               and len(tensor.consumers)]\n            if len(unknown_tensors):\n                names = [\"'{}'\".format(tensor.name) for tensor in unknown_tensors if tensor.name]\n                raise ConversionError((\"Input graph contains tensors with dynamic shape: \" +\n                                      \", \".join(names) if len(names) else \"(no names)\") +\n                                      \"\\nTry the --fold-constants option to evaluate constant sub-graphs \"\n                                      \"or the --static-only option to convert only the static part of the graph\")\n\n        self._graph = Graph(name=graph.name)\n        self._tensor_map = {tensor: self._copy_tensor_(tensor) for tensor in graph.tensors}\n        self._tensor_map.update({val: key for key, val in six.iteritems(self._tensor_map)})\n        self._transposes = {}\n\n        self._prepare(self._graph)\n\n        if not self._infer_shapes:\n            errors = []\n            for op in graph.operations:\n                transform = self._transforms.get(op.type)\n                if transform is None and self._mirror_unsupported:\n                    continue\n\n                if isinstance(transform, Transform) and transform.type is None:\n                    continue\n\n                error = self._error_message(op, transform)\n    
            if error is not None:\n                    errors.append(error)\n\n            if len(errors):\n                raise ConversionError(\"Found {} operator(s) that cannot be converted\\n{}\".format(len(errors), \"\\n\".join(errors)))\n\n        for op in graph.operations:\n            transform = self._transforms.get(op.type)\n            if isinstance(transform, Transform) and transform.type is None:\n                continue\n\n            if self._infer_shapes:\n                error = self._error_message(op, transform)\n                if error is not None:\n                    raise ConversionError(error)\n\n            count = len(self._graph.operations)\n\n            if transform is not None:\n                self._convert(op, transform)\n            elif self._mirror_unsupported:\n                self._mirror(op)\n\n            if self._infer_shapes:\n                from nnef.shapes import _infer_op_shapes\n                for op in self._graph.operations[count:]:\n                    input_shapes = [list(tensor.shape) for tensor in op.inputs]\n                    if not isinstance(op.inputs, tuple):\n                        input_shapes = (input_shapes,)\n\n                    output_counts = [len(op.outputs)] if not isinstance(op.outputs, tuple) else [None] * len(op.outputs)\n                    output_shapes = _infer_op_shapes(op.type, op.attribs, input_shapes, output_counts,\n                                                     custom_shapes=self._custom_shapes)\n                    if not isinstance(op.outputs, tuple):\n                        output_shapes = output_shapes[0]\n\n                    for output, shape in zip(op.outputs, output_shapes):\n                        output.shape = tuple(shape)\n\n        for tensor, shape in self._transposes.items():\n            tensor.shape = shape\n\n        self._graph.remove_tensors([tensor for tensor in self._graph.tensors\n                                    if len(tensor.producers) == 0 and 
len(tensor.consumers) == 0])\n\n        self._graph.inputs = tuple(self._tensor_map[tensor] for tensor in graph.inputs if self._tensor_map[tensor].graph)\n        self._graph.outputs = tuple(self._tensor_map[tensor] for tensor in graph.outputs if self._tensor_map[tensor].graph)\n\n        return self._graph\n\n    def tensor_mapping(self):\n        return {key.name: value.name for key, value in six.iteritems(self._tensor_map)\n                if value.graph == self._graph and key.name is not None and value.name is not None}\n\n    def _global_attribs(self):\n        return {}\n\n    def _prepare(self, graph):\n        pass\n\n    def _check_conditions(self, op, transform):\n        op_inputs = list(self._tensor_map[tensor] for tensor in op.inputs)\n        op_outputs = list(self._tensor_map[tensor] for tensor in op.outputs)\n        op_attribs = self._add_default_attribs(op.attribs, transform.defaults, op_inputs, op_outputs, op.type, op.name)\n\n        using = {'_type_': op.type, '_name_': op.name, **self._global_attribs()}\n        for key, item in six.iteritems(transform.using):\n            value = self._evaluate(op_attribs, op_inputs, op_outputs, item, using)\n            self._check_value(value, 'using', key, op.type, op.name)\n            using[key] = value\n\n        error = None\n        if transform.cond is not None:\n            for condition, message in transform.cond.items():\n                if not self._evaluate(op_attribs, op_inputs, op_outputs, condition, using):\n                    error = message if error is None else error + ', ' + message\n\n        return error\n\n    def _error_message(self, op, transform):\n        if transform is None:\n            return \"Conversion of operator '{}' is not implemented\".format(op.type)\n\n        message = self._check_conditions(op, transform)\n        if message is not None:\n            attribs = {key: value for key, value in six.iteritems(op.attribs) if not key.startswith('_')}\n            
input_shapes = \", \".join(str(tensor.shape) for tensor in op.inputs)\n            output_shapes = \", \".join(str(tensor.shape) for tensor in op.outputs)\n            return \"Conversion of operator '{}' is not possible: {}\"\\\n                   \"\\n  attributes: {}\\n  input-shapes: {}\\n  output-shapes: {}\"\\\n                .format(op.type, message, attribs, input_shapes, output_shapes)\n\n        return None\n\n    def _convert(self, op, transform):\n        op_inputs = list(self._tensor_map[tensor] for tensor in op.inputs)\n        op_outputs = list(self._tensor_map[tensor] for tensor in op.outputs)\n        op_attribs = self._add_default_attribs(op.attribs, transform.defaults, op_inputs, op_outputs, op.type, op.name)\n\n        using = {'_type_': op.type, '_name_': op.name, **self._global_attribs()}\n        for key, item in six.iteritems(transform.using):\n            value = self._evaluate(op_attribs, op_inputs, op_outputs, item, using)\n            self._check_value(value, 'using', key, op.type, op.name)\n            using[key] = value\n\n        type = self._evaluate(op_attribs, op_inputs, op_outputs, transform.type, using)\n        self._check_value(type, 'field', 'type', op.type, op.name)\n\n        name = self._evaluate(op_attribs, op_inputs, op_outputs, transform.name, using)\n        self._check_value(name, 'field', 'name', op.type, op.name)\n\n        attribs = {}\n        for key, item in six.iteritems(transform.attribs):\n            value = self._evaluate(op_attribs, op_inputs, op_outputs, item, using)\n            if value is not None:\n                attribs[key] = value\n\n        for key, value in six.iteritems(attribs):\n            self._check_value(value, 'attribute', key, op.type, op.name)\n\n        if isinstance(transform.inputs, list):\n            inputs = self._evaluate_tensor_list(op_attribs, op_inputs, op_outputs, transform.inputs, using)\n        elif isinstance(transform.inputs, tuple):\n            inputs = 
tuple(self._filter_none(self._evaluate(op_attribs, op_inputs, op_outputs, item, using)\n                                             for item in transform.inputs))\n        else:\n            inputs = (self._evaluate(op_attribs, op_inputs, op_outputs, transform.inputs, using),)\n\n        for idx, item in enumerate(inputs):\n            self._check_value(item, 'input', idx, op.type, op.name, tensor=True)\n\n        offset = len(self._graph.operations)\n\n        if isinstance(transform.outputs, list):\n            outputs = self._evaluate_tensor_list(op_attribs, op_inputs, op_outputs, transform.outputs, using)\n        elif isinstance(transform.outputs, tuple):\n            outputs = tuple(self._filter_none(self._evaluate(op_attribs, op_inputs, op_outputs, item, using)\n                                              for item in transform.outputs))\n        else:\n            outputs = (self._evaluate(op_attribs, op_inputs, op_outputs, transform.outputs, using),)\n\n        for idx, item in enumerate(outputs):\n            self._check_value(item, 'output', idx, op.type, op.name, tensor=True)\n\n        op = Operation(self._graph, type=type, name=name, attribs=attribs, inputs=inputs, outputs=outputs,\n                       custom=transform.custom)\n        self._graph.reverse(offset)\n        return op\n\n    def _mirror(self, op):\n        inputs_type = tuple if isinstance(op.inputs, tuple) else list\n        outputs_type = tuple if isinstance(op.outputs, tuple) else list\n\n        op_inputs = inputs_type(self._tensor_map[tensor] for tensor in op.inputs)\n        op_outputs = outputs_type(self._tensor_map[tensor] for tensor in op.outputs)\n\n        return Operation(self._graph, type=op.type, name=op.name, attribs=op.attribs,\n                         inputs=op_inputs, outputs=op_outputs, custom=True)\n\n    def _add_default_attribs(self, attribs, defaults, inputs, outputs, op_type, op_name):\n        if defaults is None:\n            return attribs\n\n        
attribs = dict(attribs)\n        for key, value in six.iteritems(defaults):\n            if key not in attribs:\n                value = self._evaluate({}, inputs, outputs, value)\n                self._check_value(value, 'default', key, op_type, op_name)\n                attribs[key] = value\n\n        return attribs\n\n    def _evaluate(self, attribs, inputs, outputs, arg, using={}):\n        if isinstance(arg, str) and arg[0] == '!':\n            try:\n                return eval(arg[1:], {'I': inputs, 'O': outputs, **attribs, **using, **self._callables,\n                                      'np': np, 'math': math})\n            except Exception as e:\n                return e\n        else:\n            return arg\n\n    def _evaluate_tensor_list(self, attribs, inputs, outputs, arg, using):\n        values = []\n        for item in arg:\n            value = self._evaluate(attribs, inputs, outputs, item, using)\n            if isinstance(value, Tensor) or isinstance(value, Exception):\n                values.append(value)\n            else:\n                assert isinstance(value, (list, tuple))\n                values += list(value)\n        return values\n\n    def _filter_none(self, items):\n        return (item for item in items if item is not None)\n\n    def _check_value(self, value, kind, key, op_type, op_name, tensor=False):\n        if isinstance(value, Exception):\n            raise ConversionError(\"Could not evaluate {kind} '{key}' while converting operator '{type}'; {err}: {cause}\"\n                                  .format(kind=kind, key=key, type=op_type, name=op_name,\n                                          err=type(value).__name__, cause=str(value) or repr(value)))\n        if tensor and not isinstance(value, Tensor):\n            raise ConversionError(\"While converting operator '{op_type}', {kind} '{key}' must result in a tensor, \"\n                                  \"but found {value_type}\"\n                                  
.format(kind=kind, key=key, op_type=op_type, value_type=type(value)))\n\n    def _copy_tensor_(self, tensor):\n        return Tensor(self._graph, name=tensor.name, dtype=tensor.dtype, shape=tensor.shape,\n                      data=tensor.data, quant=copy.deepcopy(tensor.quant))\n\n    def _read_constant(self, tensor, type):\n        raise NotImplementedError()\n\n    def _make_constant(self, graph, dtype, value, inline):\n        raise NotImplementedError()\n\n    def _const_operation(self, output, value):\n        raise NotImplementedError()\n\n    def _transpose_operation(self, input, output, perm):\n        raise NotImplementedError()\n\n    def _reshape_operation(self, input, output, shape):\n        raise NotImplementedError()\n\n    def _squeeze_operation(self, input, output, axes):\n        raise NotImplementedError()\n\n    def _unsqueeze_operation(self, input, output, axes):\n        raise NotImplementedError()\n\n    def _scale_operation(self, input, output, scalar):\n        raise NotImplementedError()\n\n    @staticmethod\n    def _permute(items, perm):\n        permuted = list(items)\n        for i in range(len(perm)):\n            permuted[i] = items[perm[i]]\n        return type(items)(permuted)\n\n    @staticmethod\n    def _inverse_permute(items, perm):\n        permuted = list(items)\n        for i in range(len(perm)):\n            permuted[perm[i]] = items[i]\n        return type(items)(permuted)\n\n    def _working_shape(self, tensor):\n        return self._transposes.get(tensor) or tensor.shape\n\n    def _pre_transpose(self, input, perm):\n        shape = self._permute(self._working_shape(input), perm)\n        output = Tensor(input.graph, dtype=input.dtype, shape=shape, quant=copy.deepcopy(input.quant))\n        self._transpose_operation(input, output, perm)\n        return output\n\n    def _post_transpose(self, output, perm):\n        shape = self._inverse_permute(self._working_shape(output), perm)\n        input = Tensor(output.graph, 
dtype=output.dtype, shape=shape, quant=copy.deepcopy(output.quant))\n        self._transpose_operation(input, output, perm)\n        return input\n\n    def _pre_squeeze(self, input, axes):\n        shape = self.squeeze_shape(self._working_shape(input), axes)\n        output = Tensor(input.graph, dtype=input.dtype, shape=shape, quant=copy.deepcopy(input.quant))\n        self._squeeze_operation(input, output, axes)\n        return output\n\n    def _pre_unsqueeze(self, input, axes):\n        shape = self.unsqueeze_shape(self._working_shape(input), axes)\n        output = Tensor(input.graph, dtype=input.dtype, shape=shape, quant=copy.deepcopy(input.quant))\n        self._unsqueeze_operation(input, output, axes)\n        return output\n\n    def _post_squeeze(self, output, axes):\n        shape = self.unsqueeze_shape(self._working_shape(output), axes)\n        input = Tensor(output.graph, dtype=output.dtype, shape=shape, quant=copy.deepcopy(output.quant))\n        self._squeeze_operation(input, output, axes)\n        return input\n\n    def _post_unsqueeze(self, output, axes):\n        shape = self.squeeze_shape(self._working_shape(output), axes)\n        input = Tensor(output.graph, dtype=output.dtype, shape=shape, quant=copy.deepcopy(output.quant))\n        self._unsqueeze_operation(input, output, axes)\n        return input\n\n    def _reshape(self, input, shape):\n        output = Tensor(input.graph, dtype=input.dtype, shape=shape, quant=copy.deepcopy(input.quant))\n        self._reshape_operation(input, output, shape)\n        return output\n\n    def _shape_of(self, value):\n        if isinstance(value, (list, tuple)):\n            length = len(value)\n            return (length,) + self._shape_of(value[0]) if length > 0 else (0,)\n        elif isinstance(value, np.ndarray):\n            return value.shape\n        else:\n            return ()\n\n    def squeeze_shape(self, shape, axes):\n        return type(shape)(shape[i] for i in range(len(shape)) if i not in 
axes)\n\n    def unsqueeze_shape(self, shape, axes):\n        for axis in axes:\n            shape = shape[:axis] + (1,) + shape[axis:]\n        return shape\n\n    def transposing(self, tensor):\n        return tensor in self._transposes\n\n    def nxc_to_ncx(self, items, cond=True):\n        return items[0:1] + items[-1:] + items[1:-1] if cond else items\n\n    def ncx_to_nxc(self, items, cond=True):\n        return items[0:1] + items[2:] + items[1:2] if cond else items\n\n    def xcn_to_ncx(self, items, cond=True):\n        return items[-1:] + items[-2:-1] + items[:-2] if cond else items\n\n    def ncx_to_xcn(self, items, cond=True):\n        return items[2:] + items[1:2] + items[0:1] if cond else items\n\n    def cxn_to_ncx(self, items, cond=True):\n        return items[-1:] + items[:-1] if cond else items\n\n    def ncx_to_cxn(self, items, cond=True):\n        return items[1:] + items[:1] if cond else items\n\n    def nxc_to_ncx_perm(self, rank):\n        return self.nxc_to_ncx(list(range(rank)))\n\n    def ncx_to_nxc_perm(self, rank):\n        return self.ncx_to_nxc(list(range(rank)))\n\n    def xcn_to_ncx_perm(self, rank):\n        return self.xcn_to_ncx(list(range(rank)))\n\n    def ncx_to_xcn_perm(self, rank):\n        return self.ncx_to_xcn(list(range(rank)))\n\n    def cxn_to_ncx_perm(self, rank):\n        return self.cxn_to_ncx(list(range(rank)))\n\n    def ncx_to_cxn_perm(self, rank):\n        return self.ncx_to_cxn(list(range(rank)))\n\n    def axis_nxc_to_ncx(self, value, rank):\n        if isinstance(value, (list, tuple)):\n            return type(value)(self.axis_nxc_to_ncx(v, rank) for v in value)\n        else:\n            if value < 0:\n                value += rank\n            return 0 if value == 0 else 1 if value == rank - 1 else value + 1\n\n    def axis_ncx_to_nxc(self, value, rank):\n        if isinstance(value, (list, tuple)):\n            return type(value)(self.axis_ncx_to_nxc(v, rank) for v in value)\n        else:\n            if 
value < 0:\n                value += rank\n            return 0 if value == 0 else rank - 1 if value == 1 else value - 1\n\n    def ensure_positive(self, axis, rank):\n        if isinstance(axis, (list, tuple)):\n            return type(axis)(self.ensure_positive(item, rank) for item in axis)\n        else:\n            return axis + rank if axis < 0 else axis\n\n    def as_const(self, tensor, type=None):\n        return self._read_constant(self._tensor_map[tensor], type=type)\n\n    def is_const(self, tensor, type=None):\n        tensor = self._tensor_map[tensor]\n        if tensor.data is not None:\n            return True\n        try:\n            self._read_constant(tensor, type=type)\n            return True\n        except ConversionError:\n            return False\n\n    def is_zero(self, tensor):\n        return self.is_const(tensor) and len(tensor.shape) == 0 and self.as_const(tensor) == 0\n\n    def as_tensor(self, arg, dtype, inline=None):\n        return self._make_constant(self._graph, dtype=dtype, value=arg, inline=inline)\n\n    def new_tensor(self, shape, dtype):\n        return Tensor(self._graph, dtype=dtype, shape=shape)\n\n    def is_integer_upsample(self, input_shape, output_shape):\n        return all(output % input == 0 for input, output in zip(input_shape, output_shape))\n\n    def is_integer_downsample(self, input_shape, output_shape):\n        return all(input % output == 0 for input, output in zip(input_shape, output_shape))\n\n    def upsample_factor(self, input_shape, output_shape):\n        return [output // input for input, output in zip(input_shape, output_shape)]\n\n    def downsample_factor(self, input_shape, output_shape):\n        return [input // output for input, output in zip(input_shape, output_shape)]\n\n    def from_numpy(self, array, type=None):\n        return types.from_numpy(array, type)\n\n    def to_numpy(self, value, dtype=None):\n        return types.to_numpy(value, dtype)\n\n    def flexible_batch(self, 
output_shape, batch):\n        return [0] + output_shape[1:] if output_shape[0] == batch else output_shape\n\n    def fixed_batch(self, output_shape, batch):\n        return [batch] + output_shape[1:] if output_shape[0] == 0 else output_shape\n\n\nclass ConverterToNNEF(Converter):\n\n    _DtypeFromNumpy = {\n        np.float16: 'scalar',\n        np.float32: 'scalar',\n        np.float64: 'scalar',\n        np.int8: 'integer',\n        np.uint8: 'integer',\n        np.int16: 'integer',\n        np.uint16: 'integer',\n        np.int32: 'integer',\n        np.uint32: 'integer',\n        np.int64: 'integer',\n        np.uint64: 'integer',\n        np.bool_: 'logical',\n    }\n\n    def __init__(self, transforms, functions=None, mirror_unsupported=False, infer_shapes=False, custom_shapes=None):\n        Converter.__init__(self, transforms, functions, mirror_unsupported, infer_shapes, custom_shapes)\n\n    def _insert_externals_and_constants(self, graph):\n        for tensor in graph.tensors:\n            mapped = self._tensor_map[tensor]\n            if mapped.producer is None and len(mapped.consumers) > 0:\n                if mapped.data is None:\n                    Operation(graph, type='external', inputs=(), outputs=tensor,\n                              attribs={'shape': list(tensor.shape), 'dtype': tensor.dtype})\n                else:\n                    Operation(graph, type='constant', inputs=(), outputs=tensor,\n                              attribs={'shape': list(tensor.shape), 'dtype': tensor.dtype, 'value': mapped.data})\n\n    def _ensure_valid_ids(self, graph):\n        if graph.name is not None:\n            graph.name = self.ensure_valid_id(graph.name)\n\n        for tensor in graph.tensors:\n            if tensor.name is not None:\n                tensor.name = self.ensure_valid_id(tensor.name)\n\n    def _make_constant(self, graph, dtype, value, inline):\n        if isinstance(value, tuple):\n            value = list(value)\n        shape = 
value.shape if isinstance(value, np.ndarray) else (len(value),) if isinstance(value, list) else ()\n        isarray = isinstance(value, np.ndarray) or isinstance(value, list)\n\n        tensor = Tensor(graph, dtype=dtype, shape=shape)\n        if inline:\n            tensor.data = types.to_numpy(value, dtype)\n        else:\n            self._const_operation(tensor, value=value if isarray else [value])\n        return tensor\n\n    def _const_operation(self, output, value):\n        Operation(output.graph, type='constant', inputs=(), outputs=output,\n                  attribs={'value': value, 'dtype': output.dtype, 'shape': list(output.shape)})\n\n    def _transpose_operation(self, input, output, perm):\n        Operation(input.graph, type='transpose', inputs=input, outputs=output, attribs={'axes': perm})\n\n    def _reshape_operation(self, input, output, shape):\n        Operation(input.graph, type='reshape', inputs=input, outputs=output, attribs={'shape': list(shape)})\n\n    def _squeeze_operation(self, input, output, axes):\n        Operation(input.graph, type='squeeze', inputs=input, outputs=output, attribs={'axes': axes})\n\n    def _unsqueeze_operation(self, input, output, axes):\n        Operation(input.graph, type='unsqueeze', inputs=input, outputs=output, attribs={'axes': axes})\n\n    def _scale_operation(self, input, output, scalar):\n        if not isinstance(scalar, Tensor):\n            scalar = self.as_tensor(scalar, np.float32)\n\n        Operation(input.graph, type='mul', inputs=(input, scalar), outputs=output)\n\n    def _bias_operation(self, input, output, bias):\n        if not isinstance(bias, Tensor):\n            bias = self.as_tensor(bias, np.float32)\n\n        Operation(input.graph, type='add', inputs=(input, bias), outputs=output)\n\n    def _transform_constant(self, tensor, func):\n        data = func(tensor.producer.attribs['value'])\n        tensor.shape = data.shape\n        tensor.producer.attribs['value'] = data\n        
tensor.producer.attribs['shape'] = list(data.shape)\n\n    @staticmethod\n    def remove_unused_constants(graph):\n        ops = [op for op in graph.operations if op.type == 'constant' and not op.output.has_consumer]\n        tensors = [op.output for op in ops]\n        graph.outputs = [tensor for tensor in graph.outputs if tensor not in tensors]\n        graph.remove_operations(ops, unlink=True)\n        graph.remove_tensors(tensors)\n\n    @staticmethod\n    def inline_scalar_constants(graph):\n        for op in graph.operations:\n            if op.type == 'constant':\n                value = op.attribs['value']\n                if not isinstance(value, np.ndarray):\n                    value = np.array(value, op.output.dtype).reshape(op.output.shape)\n                if len(value.shape) == 0:\n                    op.output.data = value\n                    graph.remove_operation(op, unlink=True)\n\n    @staticmethod\n    def convert_constants_to_variables(graph):\n        variables = 0\n        for op in graph.operations:\n            if op.type == 'constant':\n                value = op.attribs['value']\n                if isinstance(value, np.ndarray):\n                    variables += 1\n                    op.type = 'variable'\n                    op.attribs['label'] = op.name if op.name else 'variable' + str(variables)\n                    op.output.data = value\n                    del op.attribs['value']\n\n    @staticmethod\n    def ensure_valid_id(name):\n        return re.sub('[^_0-9a-zA-Z]+', '_', name)\n\n    def nnef_dtype(self, dtype):\n        return ConverterToNNEF._DtypeFromNumpy[dtype]\n\n\nclass ConverterFromNNEF(Converter):\n\n    @staticmethod\n    def decomposed_operations():\n        return ['separable_conv', 'separable_deconv', 'rms_pool',\n                'local_mean_normalization', 'local_variance_normalization', 'local_contrast_normalization',\n                'l1_normalization', 'moments']\n\n    def __init__(self, transforms, 
functions=None, mirror_unsupported=False):\n        Converter.__init__(self, transforms, functions, mirror_unsupported)\n\n    @staticmethod\n    def convert_variables_to_constants(graph):\n        for op in graph.operations:\n            if op.type == 'variable':\n                op.type = 'constant'\n                op.attribs['value'] = op.output.data\n                del op.attribs['label']\n\n    @staticmethod\n    def fill_data_in_constants(graph):\n        for op in graph.operations:\n            if op.type == 'constant':\n                op.output.data = op.attribs['value']\n\n    def _is_constant(self, tensor):\n        if tensor.producer:\n            return tensor.producer.type == 'constant'\n        else:\n            return tensor.data is not None\n\n    def _read_constant(self, tensor, type):\n        if tensor.data is not None:\n            value = tensor.data\n        elif tensor.producer and tensor.producer.type == 'constant':\n            value = tensor.producer.attribs['value']\n        else:\n            raise ConversionError('trying to evaluate non-constant tensor')\n\n        return types.from_numpy(value, type=type) if isinstance(value, np.ndarray) else \\\n            types.cast(value, type=type) if type is not None else value\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/nnef_to_onnx.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\nfrom .converter import ConverterFromNNEF as _Converter, Transform\nfrom ..model import Tensor, Operation\nfrom ..utils import types\nimport numpy as np\nfrom nnef.shapes import pool_shape, reduce_shape, deconv_shape\n\n\nclass Converter(_Converter):\n\n    @staticmethod\n    def defined_shapes():\n        return {\n            'lp_pool': pool_shape,\n            'lp_reduce': reduce_shape,\n            'mean_variance_normalization': lambda input, scale, offset, **kwargs: input,\n            'lstm_step': lambda x, h, c, W, R, B: (h, c),\n            'lstm_loop': lambda X, W, R, B, h, c, **kwargs: (h, c),\n            'erf': lambda x: x,\n            'mish': lambda x: x,\n            'depth_to_space': lambda x, block_size, **kwargs: [x[0], x[1] // block_size ** 2, x[2] * block_size, x[3] * block_size],\n            'space_to_depth': lambda x, block_size, **kwargs: [x[0], x[1] * block_size ** 2, x[2] // block_size, x[3] // block_size],\n        }\n\n    @staticmethod\n    def decomposed_operations():\n        return ['lstm_step', 'lstm_loop']\n\n    def __init__(self, custom_transforms=None, custom_functions=None, mirror_unsupported=False):\n        _Converter.__init__(self, transforms=self.merge_transforms(_Transforms, custom_transforms),\n                            
functions=custom_functions, mirror_unsupported=mirror_unsupported)\n\n    def __call__(self, graph):\n        self.fill_data_in_constants(graph)\n        self.convert_variables_to_constants(graph)\n        graph = _Converter.__call__(self, graph)\n        self._fix_inline_constants(graph)\n        return graph\n\n    def _fix_inline_constants(self, graph):\n        constants = 0\n        for tensor in graph.tensors:\n            if tensor.name is None:\n                constants += 1\n                tensor.name = '$' + str(constants)\n\n    def _make_constant(self, graph, dtype, value, inline):\n        return Tensor(graph, dtype=dtype, shape=self._shape_of(value), data=types.to_numpy(value, dtype=dtype))\n\n    def _const_operation(self, output, value):\n        Operation(output.graph, type='Constant', inputs=(), outputs=output,\n                  attribs={'value': types.to_numpy(value, dtype=output.dtype)})\n\n    def _transform_constant(self, tensor, func):\n        if tensor.producer:\n            data = func(tensor.producer.attribs['value'] if tensor.producer else tensor.data)\n            tensor.shape = data.shape\n            tensor.producer.attribs['value'] = data\n        else:\n            tensor.data = func(tensor.data)\n            tensor.shape = tensor.data.shape\n\n    def _squeeze_operation(self, input, output, axes):\n        Operation(input.graph, type='Squeeze', inputs=input, outputs=output, attribs={'axes': axes})\n\n    def _unsqueeze_operation(self, input, output, axes):\n        Operation(input.graph, type='Unsqueeze', inputs=input, outputs=output, attribs={'axes': axes})\n\n    def _interleave(self, items):\n        return [item[0] for item in items] + [item[1] for item in items]\n\n    def squeeze_input(self, tensor, axes):\n        return self._pre_squeeze(tensor, axes=axes) if len(axes) else tensor\n\n    def squeeze_output(self, tensor, axes):\n        return self._post_squeeze(tensor, axes=axes) if len(axes) else tensor\n\n    def 
unsqueeze_input(self, tensor, axes):\n        return self._pre_unsqueeze(tensor, axes=axes) if len(axes) else tensor\n\n    def unsqueeze_output(self, tensor, axes):\n        return self._post_unsqueeze(tensor, axes=axes) if len(axes) else tensor\n\n    def squeeze_vector(self, tensor):\n        if self._is_constant(tensor) and len(self._tensor_map[tensor].consumers) == 1:\n            self._transform_constant(tensor, lambda data: np.squeeze(data, 0))\n            return tensor\n        else:\n            return self.squeeze_input(tensor, axes=[0])\n\n    def convert_pads(self, padding, truncate=False):\n        return self._interleave(padding[2:] if truncate else padding) if padding != [] else None\n\n    def convert_auto_pad(self, padding):\n        return \"SAME_UPPER\" if padding == [] else \"NOTSET\"\n\n    def convert_output_padding(self, input_shape, filter_shape, output_shape, padding, stride, dilation, groups):\n        calculated_shape = deconv_shape(input_shape, filter_shape, padding=padding, stride=stride, dilation=dilation, groups=groups)\n        output_padding = [o - c for c, o in zip(calculated_shape[2:], output_shape[2:])]\n        return output_padding\n\n    def is_const(self, tensor, value=None):\n        # parenthesized so that a non-constant tensor returns False instead of\n        # falling through to as_const(), which raises on non-constant tensors\n        return self._is_constant(self._tensor_map[tensor]) and (value is None or self.as_const(tensor) == value)\n\n    def broadcast(self, tensor, rank):\n        return self.unsqueeze_input(tensor, axes=list(range(tensor.rank, rank)))\n\n\n_Transforms = Converter.unpack_transforms({\n    ('external', 'constant'):\n        Transform(type=None),\n    ('conv', 'deconv'):\n        Transform(\n            type=('Conv', 'ConvTranspose'),\n            defaults={\n                'output_shape': None,\n            },\n            using={\n                'transposed': '!_type_ == \"deconv\"',\n                'group': '!groups if groups != 0 else O[0].shape[1] if transposed else I[0].shape[1]',\n            },\n            cond={\n                '!I[2].rank 
!= 0 or (is_const(I[2]) and as_const(I[2]) == 0)': 'bias must be constant 0 or of rank 1',\n            },\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!squeeze_vector(I[2]) if I[2].rank != 0 else None',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'auto_pad': '!convert_auto_pad(padding)',\n                'pads': '!convert_pads(padding)',\n                'strides': '!stride',\n                'dilations': '!dilation',\n                'group': '!group',\n                'output_shape': '!output_shape if _type_ == \"deconv\" and output_shape != [] and padding == [] else None',\n                'output_padding': '!convert_output_padding(I[0].shape, I[1].shape, output_shape, padding=padding, '\n                                  'stride=stride, dilation=dilation, groups=group) '\n                                  'if _type_ == \"deconv\" and output_shape != [] and padding != [] else None',\n                'kernel_shape': '!I[1].shape[2:]',\n            }\n        ),\n    ('max_pool', 'avg_pool', 'lp_pool'):\n        Transform(\n            type=('MaxPool', 'AveragePool', 'LpPool'),\n            cond={\n                '!size[:2] == [1,1]': 'size must be 1 in batch and channel dimensions',\n                '!stride[:2] == [1,1]': 'stride must be 1 in batch and channel dimensions',\n                '!dilation[:2] == [1,1]': 'dilation must be 1 in batch and channel dimensions',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'kernel_shape': '!size[2:]',\n                'auto_pad': '!convert_auto_pad(padding)',\n                'pads': '!convert_pads(padding, truncate=True)',\n                'strides': '!stride[2:]',\n                'dilations': '!dilation[2:] if _type_ == \"max_pool\" and not all(d == 1 for d in dilation[2:]) else None',\n                'count_include_pad': '!(1 if border == \"constant\" else 0) 
if _type_ == \"avg_pool\" else None',\n            }\n        ),\n    ('min_reduce', 'max_reduce', 'mean_reduce', 'sum_reduce'):\n        Transform(\n            type=('ReduceMin', 'ReduceMax', 'ReduceMean', 'ReduceSum'),\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!axes',\n                'keepdims': 1,\n            }\n        ),\n    'lp_reduce':\n        Transform(\n            type='!\"ReduceL1\" if p == 1 else \"ReduceL2\"',\n            cond={\n                '!p == 1 or p == 2': 'p must be 1 or 2',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!axes',\n                'keepdims': 1,\n            }\n        ),\n    ('argmin_reduce', 'argmax_reduce'):\n        Transform(\n            type=('ArgMin', 'ArgMax'),\n            cond={\n                '!len(axes) == 1': 'axes must be of length 1',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axis': '!axes[0]',\n                'keepdims': 1,\n            }\n        ),\n    'batch_normalization':\n        Transform(\n            type='BatchNormalization',\n            inputs=(\n                '!I[0]',\n                '!squeeze_vector(I[4])',\n                '!squeeze_vector(I[3])',\n                '!squeeze_vector(I[1])',\n                '!squeeze_vector(I[2])',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'epsilon': '!epsilon',\n                'spatial': '!0 if I[1].rank == I[0].rank else None',\n            }\n        ),\n    ('relu', 'sigmoid', 'tanh', 'softplus', 'selu', 'not', 'copy', 'elu', 'erf', 'mish', 'abs', 'sign',\n     'sin', 'cos', 'tan', 'asin', 'acos', 'atan', 'sinh', 'cosh', 'tanh', 'asinh', 'acosh', 'atanh',\n     'exp', 'log', 'neg', 'sqrt', 'ceil', 'floor', 'round'):\n        Transform(\n            type=('Relu', 'Sigmoid', 'Tanh', 
'Softplus', 'Selu', 'Not', 'Identity', 'Elu', 'Erf', 'Mish', 'Abs', 'Sign',\n                  'Sin', 'Cos', 'Tan', 'Asin', 'Acos', 'Atan', 'Sinh', 'Cosh', 'Tanh', 'Asinh', 'Acosh', 'Atanh',\n                  'Exp', 'Log', 'Neg', 'Sqrt', 'Ceil', 'Floor', 'Round'),\n            inputs='!I[0]',\n            outputs='!O[0]',\n        ),\n    ('add', 'sub', 'mul', 'div', 'pow', 'min', 'max', 'and', 'or', 'eq', 'lt', 'gt', 'le', 'ge'):\n        Transform(\n            type=('Add', 'Sub', 'Mul', 'Div', 'Pow', 'Min', 'Max', 'And', 'Or',\n                  'Equal', 'Less', 'Greater', 'LessOrEqual', 'GreaterOrEqual'),\n            inputs=(\n                '!broadcast(I[0], O[0].rank)',\n                '!broadcast(I[1], O[0].rank)',\n            ),\n            outputs='!O[0]',\n        ),\n    'sqr':\n        Transform(\n            type='Mul',\n            inputs=('!I[0]', '!I[0]'),\n            outputs='!O[0]',\n        ),\n    'leaky_relu':\n        Transform(\n            type='LeakyRelu',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'alpha': '!alpha',\n            }\n        ),\n    'prelu':\n        Transform(\n            type='PRelu',\n            inputs=(\n                '!I[0]',\n                '!broadcast(I[1], I[0].rank)',\n            ),\n            outputs='!O[0]',\n        ),\n    'transpose':\n        Transform(\n            type='Transpose',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'perm': '!axes',\n            }\n        ),\n    'reshape':\n        Transform(\n            type='Reshape',\n            inputs=(\n                '!I[0]',\n                '!as_tensor(shape, np.int64)',\n            ),\n            outputs='!O[0]',\n        ),\n    'squeeze':\n        Transform(\n            type='Squeeze',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!axes',\n            }\n        
),\n    'unsqueeze':\n        Transform(\n            type='Unsqueeze',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!axes',\n            }\n        ),\n    'matmul':\n        Transform(\n            using={\n                'transposed': '!transposeA or transposeB',\n            },\n            type=\"!'Gemm' if transposed else 'MatMul'\",\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!O[0]',\n            attribs={\n                'transA': '!int(transposeA) if transposed else None',\n                'transB': '!int(transposeB) if transposed else None',\n            }\n        ),\n    'linear':\n        Transform(\n            type='Gemm',\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!squeeze_vector(I[2])',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'transA': 0,\n                'transB': 1,\n            }\n        ),\n    'local_response_normalization':\n        Transform(\n            type='LRN',\n            cond={\n                '!size[0] == 1 and all(s == 1 for s in size[2:])': 'size must be 1 in all non-channel dimensions',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'alpha': '!alpha',\n                'beta': '!beta',\n                'bias': '!bias',\n                'size': '!size[1]',\n            }\n        ),\n    'concat':\n        Transform(\n            type='Concat',\n            inputs=['!I[:]'],\n            outputs='!O[0]',\n            attribs={\n                'axis': '!axis',\n            }\n        ),\n    'split':\n        Transform(\n            type='Split',\n            using={\n                'factor': '!I[0].shape[axis] // sum(ratios)',\n            },\n            inputs='!I[0]',\n            outputs=['!O[:]'],\n            attribs={\n                'axis': '!axis',\n                'split': 
'![r * factor for r in ratios]',\n            }\n        ),\n    'softmax':\n        Transform(\n            type='Softmax',\n            cond={\n                '!len(axes) == 1': 'axes must be of length 1',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axis': '!axes[0]',\n            }\n        ),\n    'add_n':\n        Transform(\n            type='Sum',\n            inputs=['!I[:]'],\n            outputs='!O[0]',\n        ),\n    'select':\n        Transform(\n            type='Where',\n            inputs=(\n                '!broadcast(I[0], O[0].rank)',\n                '!broadcast(I[1], O[0].rank)',\n                '!broadcast(I[2], O[0].rank)',\n            ),\n            outputs='!O[0]',\n        ),\n    'clamp':\n        Transform(\n            type='Clip',\n            cond={\n                '!I[1].rank == 0': 'input a must be of rank 0',\n                '!I[2].rank == 0': 'input b must be of rank 0',\n            },\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!I[2]',\n            ),\n            outputs='!O[0]',\n        ),\n    'pad':\n        Transform(\n            type='Pad',\n            inputs=(\n                '!I[0]',\n                '!as_tensor(convert_pads(padding), np.int64)',\n                '!as_tensor(value, np.float32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'mode': '!\"edge\" if border == \"replicate\" else border',\n            }\n        ),\n    'tile':\n        Transform(\n            type='Tile',\n            inputs=(\n                '!I[0]',\n                '!as_tensor(repeats, np.int64)',\n            ),\n            outputs='!O[0]',\n        ),\n    'slice':\n        Transform(\n            type='Slice',\n            inputs=(\n                '!I[0]',\n                '!as_tensor(begin, np.int64)',\n                '!as_tensor(end, np.int64)',\n                
'!as_tensor(axes, np.int64)',\n                '!as_tensor(stride, np.int64)',\n            ),\n            outputs='!O[0]',\n        ),\n    ('l1_normalization', 'l2_normalization'):\n        Transform(\n            type='LpNormalization',\n            cond={\n                '!len(axes) == 1': 'axes must be of length 1',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axis': '!axes[0]',\n                'p': '!1 if _type_ == \"l1_normalization\" else 2',\n            }\n        ),\n    'mean_variance_normalization':\n        Transform(\n            type='!\"InstanceNormalization\" if instance else \"MeanVarianceNormalization\"',\n            using={\n                'instance': '!axes == list(range(2, I[0].rank))'\n                            ' and I[1].rank == 2 and I[1].shape[0] == 1'\n                            ' and I[2].rank == 2 and I[2].shape[0] == 1',\n            },\n            cond={\n                '!is_const(scale, 1.0) if not instance else True':\n                    'scale must be 1 if operation does not denote instance normalization',\n                '!is_const(offset, 0.0) if not instance else True':\n                    'offset must be 0 if operation does not denote instance normalization',\n            },\n            inputs=(\n                '!I[0]',\n                '!squeeze_vector(I[1]) if instance else None',\n                '!squeeze_vector(I[2]) if instance else None',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'axes': '!axes if not instance else None',\n                'epsilon': '!epsilon if instance else None',\n            }\n        ),\n    ('nearest_upsample', 'multilinear_upsample'):\n        Transform(\n            type='Resize',\n            using={\n                'linear': '!_type_ == \"multilinear_upsample\"',\n            },\n            inputs=(\n                '!I[0]',\n                '!as_tensor([], 
np.float32)',\n                '!as_tensor([1.0, 1.0] + [float(f) for f in factor], np.float32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'mode': '!\"linear\" if linear else \"nearest\"',\n                'coordinate_transformation_mode': '!(\"half_pixel\" if method == \"symmetric\" else'\n                                                  ' \"asymmetric\" if method == \"asymmetric\" else'\n                                                  ' \"align_corners\") if linear else None',\n            }\n        ),\n    'nearest_downsample':\n        Transform(\n            type='Resize',\n            inputs=(\n                '!I[0]',\n                '!as_tensor([], np.float32)',\n                '!as_tensor([1.0, 1.0] + [1.0 / f for f in factor], np.float32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'mode': 'nearest',\n            }\n        ),\n    'gather':\n        Transform(\n            type='Gather',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!O[0]',\n            attribs={\n                'axis': '!axis',\n            },\n        ),\n    'cast':\n        Transform(\n            type='Cast',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'to': '!O[0].dtype',\n            }\n        ),\n    'depth_to_space':\n        Transform(\n            type=\"DepthToSpace\",\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'blocksize': '!block_size',\n                'mode': '!\"DCR\" if blocks_first else \"CRD\"',\n            },\n        ),\n    'space_to_depth':\n        Transform(\n            type=\"SpaceToDepth\",\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'blocksize': '!block_size',\n            },\n        ),\n})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/nnef_to_tf.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\nfrom .converter import ConverterFromNNEF as _Converter, Transform\nfrom ..model import Tensor, Operation\nfrom ..model.utils import generate_op_names_from_op_type\nfrom ..utils import types\nimport numpy as np\nimport copy\n\n\nclass Converter(_Converter):\n\n    @staticmethod\n    def defined_shapes():\n        return {\n            'relu6': lambda shape: shape,\n        }\n\n    @staticmethod\n    def decomposed_operations():\n        return _Converter.decomposed_operations() + ['linear']\n\n    def __init__(self, data_format='NXC', io_transpose=False, custom_transforms=None, custom_functions=None,\n                 mirror_unsupported=False):\n        _Converter.__init__(self, transforms=self.merge_transforms(_Transforms, custom_transforms),\n                            functions=custom_functions, mirror_unsupported=mirror_unsupported)\n        self._data_format = data_format\n        self._io_transpose = io_transpose\n\n    def __call__(self, graph):\n        self.convert_variables_to_constants(graph)\n        graph = _Converter.__call__(self, graph)\n        self._fix_output_transposes(graph)\n        self._remove_unused_constants(graph)\n        generate_op_names_from_op_type(graph)\n        return graph\n\n    def _global_attribs(self):\n        return {'_lite_': False}\n\n    def 
_prepare(self, graph):\n        self._fix_inline_constants(graph)\n\n    def _fix_inline_constants(self, graph):\n        for tensor in graph.tensors:\n            mapped = self._tensor_map[tensor]\n            if not mapped.producer and mapped.data is not None:\n                self._const_operation(tensor, tensor.data)\n\n    def _remove_unused_constants(self, graph):\n        ops = [op for op in graph.operations if op.type == 'Const' and not op.output.has_consumer]\n        tensors = [op.output for op in ops]\n        graph.outputs = [tensor for tensor in graph.outputs if tensor not in tensors]\n        graph.remove_operations(ops, unlink=True)\n        graph.remove_tensors(tensors)\n\n    def _fix_output_transposes(self, graph):\n        graph.outputs = [self.transpose_input(tensor) if self.needs_io_transpose(tensor) else\n                         self.undo_transpose(tensor) for tensor in graph.outputs]\n\n    def _const_operation(self, output, value):\n        Operation(output.graph, type='Const', inputs=(), outputs=output,\n                  attribs={'value': types.to_numpy(value, dtype=output.dtype), 'dtype': output.dtype})\n\n    def _transpose_operation(self, input, output, perm):\n        Operation(input.graph, type='Transpose', inputs=(input, self.as_tensor(perm, np.int32)),\n                  outputs=output, attribs={'T': input.dtype})\n\n    def _reshape_operation(self, input, output, shape):\n        Operation(input.graph, type='Reshape', inputs=(input, self.as_tensor(shape, np.int32)), outputs=output,\n                  attribs={'T': input.dtype})\n\n    def _squeeze_operation(self, input, output, axes):\n        Operation(input.graph, type='Squeeze', inputs=input, outputs=output,\n                  attribs={'squeeze_dims': axes, 'T': input.dtype})\n\n    def _unsqueeze_operation(self, input, output, axes):\n        if len(axes) == 1:\n            Operation(input.graph, type='ExpandDims', inputs=(input, self.as_tensor(axes[0], np.int32)),\n           
           outputs=output, attribs={'T': input.dtype})\n        else:\n            Operation(input.graph, type='Reshape', inputs=(input, self.as_tensor(output.shape, np.int32)),\n                      outputs=output, attribs={'T': input.dtype})\n\n    def _scale_operation(self, input, output, scalar):\n        if not isinstance(scalar, Tensor):\n            scalar = self.as_tensor(scalar, np.float32)\n\n        Operation(input.graph, type='Mul', inputs=(input, scalar), outputs=output,\n                  attribs={'T': input.dtype})\n\n    def _bias_operation(self, input, output, bias):\n        if not isinstance(bias, Tensor):\n            bias = self.as_tensor(bias, np.float32)\n\n        if bias.rank == 1:\n            Operation(output.graph, type='BiasAdd', inputs=(input, bias), outputs=output, attribs={'T': output.dtype})\n        else:\n            Operation(output.graph, type='Add', inputs=(input, bias), outputs=output, attribs={'T': output.dtype})\n\n    def _make_constant(self, graph, dtype, value, inline):\n        tensor = Tensor(graph, dtype=dtype, shape=self._shape_of(value))\n        self._const_operation(tensor, value)\n        return tensor\n\n    def _transform_constant(self, tensor, func):\n        data = func(tensor.producer.attribs['value'])\n        tensor.shape = data.shape\n        tensor.producer.attribs['value'] = data\n\n    def _is_conv_filter(self, tensor, groups):\n        tensor = self._tensor_map.get(tensor)\n        return tensor and len(tensor.consumers) > 0 and \\\n               all(op.type == 'conv' and op.inputs[1] is tensor and op.attribs['groups'] == groups\n                   for op in tensor.consumers)\n\n    def _ensure_constant_producer(self, tensor):\n        if tensor.is_constant and tensor.producer is None:\n            Operation(tensor.graph, type='Const', inputs=(), outputs=tensor,\n                      attribs={'value': tensor.data, 'dtype': tensor.data.dtype.type})\n\n    def _is_nxc(self, format):\n        return 
format[0] == 'N' and format[-1] == 'C' and len(format) > 2\n\n    def _is_xcn(self, format):\n        return format[-2] == 'C' and format[-1] == 'N' and len(format) > 2\n\n    def _is_cxn(self, format):\n        return format[0] == 'C' and format[-1] == 'N' and len(format) > 2\n\n    def needs_io_transpose(self, tensor):\n        if tensor.rank <= 2:\n            return False\n        if isinstance(self._io_transpose, bool):\n            return self._io_transpose\n        else:\n            return tensor.name in self._io_transpose\n\n    def is_nxc(self):\n        return self._is_nxc(self._data_format)\n\n    def data_format(self, rank):\n        X = 'W' if rank == 1 else 'HW' if rank == 2 else 'DHW' if rank == 3 else None\n        return self._data_format.replace('X', X) if X else self._data_format\n\n    def convert_padding(self, value):\n        return 'SAME' if value == [] else 'VALID' if all(item == (0, 0) for item in value) else 'EXPLICIT'\n\n    def convert_explicit_paddings(self, value):\n        if value == [] or all(item == (0, 0) for item in value):\n            return None\n        else:\n            paddings = [item for pair in value for item in pair]\n            return [0, 0] + paddings + [0, 0] if self.is_nxc() else [0, 0, 0, 0] + paddings\n\n    def convert_size(self, value):\n        # value holds spatial dims only; prepend batch/channel entries per data format\n        if isinstance(value, tuple):\n            return (1,) + value + (1,) if self.is_nxc() else (1, 1) + value\n        else:\n            return [1] + value + [1] if self.is_nxc() else [1, 1] + value\n\n    def transpose_input(self, tensor):\n        if self.is_nxc():\n            return self._pre_transpose(tensor, self.ncx_to_nxc_perm(tensor.rank)) \\\n                if not self.transposing(tensor) and tensor.rank > 2 else tensor\n        else:\n            assert not self.transposing(tensor)\n            return tensor\n\n    def transpose_output(self, tensor):\n        if self.is_nxc():\n            self._transposes[tensor] = self.ncx_to_nxc(tensor.shape)\n     
   return tensor\n\n    def transpose_filter(self, tensor, format='XCN'):\n        if self._is_xcn(format):\n            perm = self.ncx_to_xcn_perm(tensor.rank)\n        elif self._is_nxc(format):\n            perm = self.ncx_to_nxc_perm(tensor.rank)\n        elif self._is_cxn(format):\n            perm = self.ncx_to_cxn_perm(tensor.rank)\n        else:\n            assert False\n\n        return self._pre_transpose(tensor, perm)\n\n    def transpose_depthwise_filter(self, tensor, channels, format='XCN'):\n        if self._is_xcn(format):\n            perm = self.ncx_to_xcn_perm(tensor.rank)\n        elif self._is_nxc(format):\n            perm = self.ncx_to_nxc_perm(tensor.rank)\n        elif self._is_cxn(format):\n            perm = self.ncx_to_cxn_perm(tensor.rank)\n        else:\n            assert False\n\n        shape = tensor.shape[2:] + (channels, tensor.shape[0] // channels)\n        return self._reshape(self._pre_transpose(tensor, perm), shape)\n\n    def transpose_like(self, tensor, reference):\n        if self.transposing(reference):\n            self.transpose_output(tensor)\n        return tensor\n\n    def transpose_list_like(self, items, ref):\n        return self.ncx_to_nxc(items) if self.transposing(ref) else items\n\n    def transpose_axis_like(self, axis, ref, rank=None):\n        return self.axis_ncx_to_nxc(axis, rank or ref.rank) if self.transposing(ref) else axis\n\n    def undo_transpose(self, tensor):\n        perm = self.nxc_to_ncx_perm(tensor.rank)\n        if perm == list(range(tensor.rank)):\n            return tensor\n        return self._pre_transpose(tensor, perm) if self.transposing(tensor) else tensor\n\n    def squeeze_input(self, tensor, axes):\n        return self._pre_squeeze(tensor, axes=axes)\n\n    def squeeze_output(self, tensor, axes):\n        return self._post_squeeze(tensor, axes=axes)\n\n    def unsqueeze_input(self, tensor, axes):\n        return self._pre_unsqueeze(tensor, axes=axes)\n\n    def 
unsqueeze_output(self, tensor, axes):\n        return self._post_unsqueeze(tensor, axes=axes)\n\n    def squeeze_vector(self, tensor):\n        if self._is_constant(tensor) and len(self._tensor_map[tensor].consumers) == 1:\n            self._transform_constant(tensor, lambda data: np.squeeze(data, 0))\n            return tensor\n        else:\n            return self.squeeze_input(tensor, axes=[0])\n\n    def scale_output(self, output, scalar):\n        input = Tensor(output.graph, dtype=output.dtype, shape=self._working_shape(output), quant=copy.deepcopy(output.quant))\n        self._scale_operation(input, output, scalar)\n        return input\n\n    def bias_add(self, output, bias):\n        if bias.rank == 0 and np.all(bias.data == 0):\n            return output\n\n        input = Tensor(output.graph, dtype=output.dtype, shape=self._working_shape(output), quant=copy.deepcopy(output.quant))\n        self._bias_operation(input, output, bias)\n        return input\n\n    def split_sizes(self, ratios, size):\n        # integer division: sizes feed an int64 tensor for SplitV\n        p = size // sum(ratios)\n        return [p * r for r in ratios]\n\n    def convert_binarg(self, tensor, other):\n        self._ensure_constant_producer(tensor)\n        if tensor.rank == 0:\n            return tensor\n        needs_transpose = self.transposing(other) and not self.transposing(tensor)\n        if other.rank > tensor.rank:\n            if tensor.rank == 2 and tensor.shape[0] == 1 and needs_transpose:\n                return self.squeeze_vector(tensor)\n            tensor = self._pre_unsqueeze(tensor, axes=list(range(tensor.rank, other.rank)))\n        return self.transpose_input(tensor) if needs_transpose else tensor\n\n    def as_numpy(self, value, dtype=None):\n        return types.to_numpy(value, dtype)\n\n    def as_bits(self, items):\n        bits = 0\n        for idx, val in enumerate(items):\n            if val:\n                bits |= (1 << idx)\n        return bits\n\n    def out_of_range(self, x, limit):\n        return x >= 
limit or x <= -limit\n\n\n_Transforms = Converter.unpack_transforms({\n    'external':\n        Transform(\n            type='Placeholder',\n            using={'needs_transpose': '!needs_io_transpose(O[0])'},\n            outputs='!transpose_output(O[0]) if needs_transpose else O[0]',\n            attribs={\n                'shape': '!tuple(ncx_to_nxc(shape) if needs_transpose else shape)',\n                'dtype': '!dtype',\n            }\n        ),\n    'constant':\n        Transform(\n            type='Const',\n            outputs='!O[0]',\n            attribs={\n                'dtype': '!O[0].dtype',\n                'value': '!value if isinstance(value, np.ndarray) else as_numpy(value[0] if shape == [] else value)',\n            }\n        ),\n    'conv':\n        Transform(\n            type='!\"Conv{n}D\".format(n=I[0].rank - 2) if groups != 0 else \"DepthwiseConv2dNative\"',\n            cond={\n                '!I[0].rank == 4 or I[0].rank == 5': 'rank must be 4 or 5',\n            },\n            using={\n                'channels': '!I[0].shape[1]',\n            },\n            inputs=(\n                '!transpose_input(I[0])',\n                '!transpose_filter(I[1]) if groups != 0 else transpose_depthwise_filter(I[1], channels)',\n            ),\n            outputs=(\n                '!bias_add(transpose_output(O[0]), squeeze_vector(I[2]) if I[2].rank == 2 else I[2])',\n            ),\n            attribs={\n                'padding': '!convert_padding(padding)',\n                'explicit_paddings': '!convert_explicit_paddings(padding)',\n                'strides': '!convert_size(stride)',\n                'dilations': '!convert_size(dilation)',\n                'data_format': '!data_format(I[0].rank - 2)',\n                'T': '!I[0].dtype',\n            }\n        ),\n    'deconv':\n        Transform(\n            type='!\"Conv{n}DBackpropInput\".format(n=I[0].rank - 2) if groups != 0 else \"DepthwiseConv2dNativeBackpropInput\"',\n            
cond={\n                '!I[0].rank == 4 or I[0].rank == 5': 'rank must be 4 or 5',\n            },\n            using={\n                'channels': '!O[0].shape[1]',\n            },\n            inputs=(\n                '!as_tensor(ncx_to_nxc(output_shape) if is_nxc() else output_shape, np.int32)',\n                '!transpose_filter(I[1]) if groups != 0 else transpose_depthwise_filter(I[1], channels)',\n                '!transpose_input(I[0])',\n            ),\n            outputs=(\n                '!bias_add(transpose_output(O[0]), squeeze_vector(I[2]) if I[2].rank == 2 else I[2])',\n            ),\n            attribs={\n                'padding': '!convert_padding(padding)',\n                'explicit_paddings': '!convert_explicit_paddings(padding)',\n                'strides': '!convert_size(stride)',\n                'dilations': '!convert_size(dilation)',\n                'data_format': '!data_format(I[0].rank - 2)',\n                'T': '!I[0].dtype',\n            }\n        ),\n    ('max_pool', 'avg_pool', 'max_pool_with_index'):\n        Transform(\n            type=('MaxPool', 'AvgPool', 'MaxPoolWithArgmax'),\n            cond={\n                '!border == \"ignore\"': 'border must be \"ignore\"',\n            },\n            inputs=(\n                '!transpose_input(I[0])',\n            ),\n            outputs=(\n                '!transpose_output(O[0])',\n                '!transpose_output(O[1]) if len(O) > 1 else None',\n            ),\n            attribs={\n                'ksize': '!ncx_to_nxc(size) if is_nxc() else size',\n                'strides': '!ncx_to_nxc(stride) if is_nxc() else stride',\n                'padding': '!convert_padding(padding)',\n                'explicit_paddings': '!convert_explicit_paddings(padding)',\n                'data_format': '!data_format(I[0].rank - 2) if _type_ != \"max_pool_with_index\" else None',\n                'T': '!I[0].dtype',\n            }\n        ),\n    'box':\n        Transform(\n          
  type='AvgPool',\n            using={'volume': '!int(np.prod(size))'},\n            inputs=(\n                '!transpose_input(I[0])',\n            ),\n            outputs=(\n                '!scale_output(transpose_output(O[0]), volume) if not normalize else transpose_output(O[0])',\n            ),\n            attribs={\n                'ksize': '!ncx_to_nxc(size) if is_nxc() else size',\n                'strides': '!ncx_to_nxc(stride) if is_nxc() else stride',\n                'padding': '!convert_padding(padding)',\n                'explicit_paddings': '!convert_explicit_paddings(padding)',\n                'data_format': '!data_format(I[0].rank - 2)',\n                'T': '!I[0].dtype',\n            }\n        ),\n    'reshape':\n        Transform(\n            type='Reshape',\n            inputs=(\n                '!undo_transpose(I[0])',\n                '!as_tensor(fixed_batch(shape, I[0].shape[0]), np.int32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'T': '!dtype if not _lite_ else None',\n            }\n        ),\n    'transpose':\n        Transform(\n            type='Transpose',\n            inputs=(\n                '!I[0]',\n                '!as_tensor(transpose_axis_like(axes, I[0]), np.int32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'T': '!dtype if not _lite_ else None',\n            }\n        ),\n    'squeeze':\n        Transform(\n            type='Squeeze',\n            inputs='!undo_transpose(I[0])',\n            outputs='!O[0]',\n            attribs={\n                'squeeze_dims': '!axes',\n                'T': '!dtype if not _lite_ else None',\n            }\n        ),\n    'unsqueeze':\n        Transform(\n            type='!\"ExpandDims\" if len(axes) == 1 else \"Reshape\"',\n            using={\n                'new_shape': '!unsqueeze_shape(I[0].shape, axes)',\n            },\n            inputs=(\n                '!undo_transpose(I[0])',\n 
               '!as_tensor(axes if len(axes) == 1 else new_shape, np.int32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'T': '!dtype if not _lite_ else None',\n                'new_shape': '!new_shape if _lite_ else None',\n            }\n        ),\n    'stack':\n        Transform(\n            type='Pack',\n            inputs=['![undo_transpose(t) for t in I]'],\n            outputs='!O[0]',\n            attribs={\n                'axis': '!axis',\n                'N': '!len(I) if not _lite_ else None',\n                'values_count': '!len(I) if _lite_ else None',\n                'T': '!dtype if not _lite_ else None',\n            }\n        ),\n    'unstack':\n        Transform(\n            type='Unpack',\n            inputs='!undo_transpose(I[0])',\n            outputs=['!O[:]'],\n            attribs={\n                'axis': '!axis',\n                'num': '!len(O)',\n                'T': '!dtype if not _lite_ else None',\n            }\n        ),\n    ('min_reduce', 'max_reduce', 'mean_reduce', 'sum_reduce', 'any_reduce', 'all_reduce'):\n        Transform(\n            type=('Min', 'Max', 'Mean', 'Sum', 'Any', 'All'),\n            using={'dims': '!transpose_axis_like(axes, I[0])'},\n            inputs=(\n                '!I[0]',\n                '!as_tensor(dims, np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'keep_dims': True,\n                'T': '!I[0].dtype if I[0].dtype != bool and not _lite_ else None',\n            }\n        ),\n    'concat':\n        Transform(\n            type='Concat',\n            using={\n                'dim': '!transpose_axis_like(axis, I[0])'\n            },\n            inputs=['!as_tensor(dim, np.int32)', '!I[:]'],\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'N': '!len(I)',\n                'T': '!O[0].dtype if not _lite_ else None',\n            }\n     
   ),\n    'split':\n        Transform(\n            type='SplitV',\n            using={\n                'dim': '!transpose_axis_like(axis, I[0])'\n            },\n            inputs=(\n                '!I[0]',\n                '!as_tensor(split_sizes(ratios, I[0].shape[axis]), np.int64)',\n                '!as_tensor(dim, np.int32)',\n            ),\n            outputs=['![transpose_like(O[i], I[0]) for i in range(len(O))]'],\n            attribs={\n                'num_split': '!len(ratios) if not _lite_ else None',\n                'num_splits': '!len(ratios) if _lite_ else None',\n                'T': '!I[0].dtype if not _lite_ else None',\n            }\n        ),\n    ('add', 'sub', 'mul', 'div', 'pow', 'lt', 'gt', 'le', 'ge', 'eq', 'ne', 'min', 'max', 'and', 'or'):\n        Transform(\n            type=('Add', 'Sub', 'Mul', 'RealDiv', 'Pow', 'Less', 'Greater', 'LessEqual', 'GreaterEqual',\n                  'Equal', 'NotEqual', 'Minimum', 'Maximum', 'LogicalAnd', 'LogicalOr'),\n            inputs=(\n                '!convert_binarg(I[0], I[1])',\n                '!convert_binarg(I[1], I[0])',\n            ),\n            outputs='!transpose_output(O[0]) if transposing(I[0]) or transposing(I[1]) else O[0]',\n            attribs={\n                'T': '!I[0].dtype if I[0].dtype != bool and not _lite_ else None',\n            }\n        ),\n    ('copy', 'relu', 'relu6', 'elu', 'selu', 'gelu', 'silu', 'sigmoid', 'softplus', 'exp', 'log',\n     'sin', 'cos', 'tan', 'asin', 'acos', 'atan', 'sinh', 'cosh', 'tanh', 'asinh', 'acosh', 'atanh',\n     'neg', 'rcp', 'sign', 'abs', 'floor', 'ceil', 'round', 'sqr', 'sqrt', 'rsqrt', 'not'):\n        Transform(\n            type=('Identity', 'Relu', 'Relu6', 'Elu', 'Selu', 'Gelu', 'Silu', 'Sigmoid', 'Softplus', 'Exp', 'Log',\n                  'Sin', 'Cos', 'Tan', 'Asin', 'Acos', 'Atan', 'Sinh', 'Cosh', 'Tanh', 'Asinh', 'Acosh', 'Atanh',\n                  'Neg', 'Reciprocal', 'Sign', 'Abs', 'Floor', 'Ceil', 'Round', 
'Square', 'Sqrt', 'Rsqrt', 'LogicalNot'),\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'T': '!I[0].dtype if I[0].dtype != bool and not _lite_ else None',\n            }\n        ),\n    'leaky_relu':\n        Transform(\n            type='LeakyRelu',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'alpha': '!alpha',\n                'T': '!I[0].dtype if not _lite_ else None',\n            }\n        ),\n    'batch_normalization':\n        Transform(\n            type='FusedBatchNorm',\n            using={\n                'channels': '!O[0].shape[1]'\n            },\n            inputs=(\n                '!transpose_input(I[0])',\n                '!squeeze_vector(I[4])',\n                '!squeeze_vector(I[3])',\n                '!squeeze_vector(I[1])',\n                '!squeeze_vector(I[2])',\n            ),\n            outputs=(\n                '!transpose_output(O[0])',\n                '!new_tensor(shape=(channels,), dtype=O[0].dtype)',\n                '!new_tensor(shape=(channels,), dtype=O[0].dtype)',\n                '!new_tensor(shape=(channels,), dtype=O[0].dtype)',\n                '!new_tensor(shape=(channels,), dtype=O[0].dtype)',\n            ),\n            attribs={\n                'epsilon': '!epsilon',\n                'data_format': '!data_format(I[0].rank - 2)',\n                'T': '!I[0].dtype if not _lite_ else None',\n                'is_training': False,\n            }\n        ),\n    'softmax':\n        Transform(\n            type='Softmax',\n            cond={\n                '!axes == [1]': 'axes must equal channel dimension (1)',\n            },\n            inputs='!transpose_input(I[0])',\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n                'beta': '!1.0 if _lite_ else None',\n         
   }\n        ),\n    'matmul':\n        Transform(\n            type='MatMul',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!O[0]',\n            attribs={\n                'transpose_a': '!transposeA',\n                'transpose_b': '!transposeB',\n                'T': '!I[0].dtype if not _lite_ else None',\n            },\n        ),\n    'clamp':\n        Transform(\n            type='ClipByValue',\n            inputs=('!I[0]', '!I[1]', '!I[2]'),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={'T': '!I[0].dtype if not _lite_ else None'},\n        ),\n    'pad':\n        Transform(\n            type='!\"Pad\" if border == \"constant\" else \"MirrorPad\"',\n            cond={\n                '!border in [\"constant\", \"reflect\", \"reflect-even\"]':\n                    'border must be one of \"constant\", \"reflect\", \"reflect-even\"',\n            },\n            using={'paddings': '![list(item) for item in padding]'},\n            inputs=(\n                '!I[0]',\n                '!as_tensor(ncx_to_nxc(paddings, cond=transposing(I[0])), np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n                'mode': '!\"REFLECT\" if border == \"reflect\" else \"SYMMETRIC\" if border == \"reflect-even\" else None',\n            },\n        ),\n    'tile':\n        Transform(\n            type='Tile',\n            inputs=(\n                '!I[0]',\n                '!as_tensor(transpose_list_like(repeats, I[0]), np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n            },\n        ),\n    'slice':\n        Transform(\n            type='StridedSlice',\n            using={\n                'dims': '!ncx_to_nxc(list(range(I[0].rank)), cond=transposing(I[0]))',\n                'axis': 
'!ncx_to_nxc(axes, cond=transposing(I[0]))',\n                'begs': '!ncx_to_nxc(begin, cond=transposing(I[0]))',\n                'ends': '!ncx_to_nxc(end, cond=transposing(I[0]))',\n                'strs': '!ncx_to_nxc(stride, cond=transposing(I[0]))',\n            },\n            inputs=(\n                '!I[0]',\n                '!as_tensor([begs[axis.index(i)] if i in axis else 0 for i in dims], np.int32)',\n                '!as_tensor([ends[axis.index(i)] if i in axis else 0 for i in dims], np.int32)',\n                '!as_tensor([strs[axis.index(i)] if i in axis else 1 for i in dims], np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n                'Index': '!np.int32',\n                'begin_mask': '!as_bits([1 if i not in axis or out_of_range(begs[axis.index(i)], I[0].shape[i]) else 0'\n                              ' for i in dims])',\n                'end_mask': '!as_bits([1 if i not in axis or (ends[axis.index(i)] == 0 and strs[axis.index(i)] == 1)'\n                            ' or out_of_range(ends[axis.index(i)], I[0].shape[i]) else 0 for i in dims])',\n                'ellipsis_mask': 0,\n                'new_axis_mask': 0,\n                'shrink_axis_mask': 0,\n            },\n        ),\n    ('argmin_reduce', 'argmax_reduce'):\n        Transform(\n            type=('ArgMin', 'ArgMax'),\n            cond={\n                '!len(axes) == 1': 'axes must be of length 1',\n            },\n            using={'axis': '!transpose_axis_like(axes[0], ref=I[0])'},\n            inputs=(\n                '!I[0]',\n                '!as_tensor(axis, np.int32)',\n            ),\n            outputs='!transpose_like(unsqueeze_output(O[0], axes) if not _lite_ else O[0], I[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n                'output_type': '!O[0].dtype',\n            }\n        ),\n    
'select':\n        Transform(\n            type='Select',\n            inputs=(\n                '!I[0]',\n                '!convert_binarg(I[1], I[0])',\n                '!convert_binarg(I[2], I[0])',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'T': '!I[1].dtype if not _lite_ else None',\n            }\n        ),\n    'nearest_upsample':\n        Transform(\n            type='ResizeNearestNeighbor',\n            using={\n                'size': '!I[0].shape[2:]'\n            },\n            inputs=(\n                '!transpose_input(I[0])',\n                '!as_tensor([s * f for s, f in zip(size, factor)], np.int32)',\n            ),\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n                'align_corners': False,\n                'half_pixel_centers': False,\n            }\n        ),\n    'multilinear_upsample':\n        Transform(\n            type='ResizeBilinear',\n            using={\n                'size': '!I[0].shape[2:]'\n            },\n            inputs=(\n                '!transpose_input(I[0])',\n                '!as_tensor([s * f for s, f in zip(size, factor)], np.int32)',\n            ),\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'T': '!I[0].dtype if not _lite_ else None',\n                'align_corners': '!method == \"aligned\"',\n                'half_pixel_centers': '!method == \"symmetric\"',\n            }\n        ),\n    ('nearest_downsample', 'area_downsample'):\n        Transform(\n            type=('ResizeNearestNeighbor', 'ResizeArea'),\n            using={'size': '!I[0].shape[2:]'},\n            inputs=(\n                '!transpose_input(I[0])',\n                '!as_tensor([s // f for s, f in zip(size, factor)], np.int32)',\n            ),\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'T': 
'!I[0].dtype if not _lite_ else None',\n                'align_corners': False,\n                'half_pixel_centers': '!False if _type_ == \"nearest_downsample\" else None',\n            }\n        ),\n    'local_response_normalization':\n        Transform(\n            type='LRN',\n            cond={\n                '!all(s == 1 or i == 1 for i, s in enumerate(size))':\n                    'size must be singular for all non-channel dimensions',\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'depth_radius': '!size[1] // 2 if not _lite_ else None',\n                'radius': '!size[1] // 2 if _lite_ else None',\n                'alpha': '!alpha / size[1]',\n                'beta': '!beta',\n                'bias': '!bias',\n            }\n        ),\n    'add_n':\n        Transform(\n            type='AddN',\n            inputs=['!I[:]'],\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'T': '!O[0].dtype',\n                'N': '!len(I)',\n            },\n        ),\n    'cast':\n        Transform(\n            type='Cast',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'SrcT': '!I[0].dtype',\n                'DstT': '!O[0].dtype',\n            },\n        ),\n    'gather':\n        Transform(\n            type='GatherV2',\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!as_tensor(transpose_axis_like(axis, I[0]), np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'Tparams': '!I[0].dtype',\n                'Tindices': '!I[1].dtype',\n                'Taxis': np.int32,\n            },\n        ),\n})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/nnef_to_tflite.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\nfrom .converter import ConverterFromNNEF as _Converter, Transform, ConversionError\nfrom .nnef_to_tf import Converter as _TFConverter, _Transforms as _TFTransforms\nfrom ..model import Tensor, Operation\nfrom ..model.utils import generate_tensor_names_from_op_type\nfrom ..io.tf.lite import CustomOptionsKey\nimport numpy as np\nimport copy\n\n\ndef tflite_detection_postprocess_shape(input, scores, anchors, **kwargs):\n    return [], [], [], []\n\n\nclass Converter(_TFConverter):\n\n    @staticmethod\n    def defined_shapes():\n        return {\n            'relu6': lambda shape: shape,\n            'TFLite_Detection_PostProcess': tflite_detection_postprocess_shape,\n        }\n\n    @staticmethod\n    def decomposed_operations():\n        return _Converter.decomposed_operations()\n\n    def __init__(self, io_transpose=False, custom_transforms=None, custom_functions=None, mirror_unsupported=False):\n        _Converter.__init__(self, transforms=self.merge_transforms(_Transforms, custom_transforms),\n                            functions=custom_functions, mirror_unsupported=mirror_unsupported)\n        self._data_format = 'NXC'\n        self._io_transpose = io_transpose\n\n    def __call__(self, graph):\n        graph = _TFConverter.__call__(self, graph)\n        
self._generate_tensor_names(graph)\n        self._fix_custom_options(graph)\n        return graph\n\n    def _global_attribs(self):\n        return {'_lite_': True}\n\n    def _prepare(self, graph):\n        self._fix_quantized_dtypes(graph)\n        self._fix_quantization_attribs(graph)\n        self._transpose_externals(graph)\n\n    def _transpose_externals(self, graph):\n        for tensor in graph.tensors:\n            mapped = self._tensor_map[tensor]\n            if mapped.producer and mapped.producer.type == 'external' and self.needs_io_transpose(tensor):\n                self._transposes[tensor] = self.ncx_to_nxc(tensor.shape)\n\n    def _generate_tensor_names(self, graph):\n        generate_tensor_names_from_op_type(graph)\n\n        placeholders = 0\n        constants = 0\n        for tensor in graph.tensors:\n            if tensor.name is None:\n                if tensor.data is None:\n                    placeholders += 1\n                    tensor.name = 'PLACEHOLDER' + str(placeholders)\n                else:\n                    constants += 1\n                    tensor.name = 'CONSTANT' + str(constants)\n\n    def _fix_quantized_dtypes(self, graph):\n        for tensor in graph.tensors:\n            if tensor.quant and tensor.dtype == np.float32:\n                bits = tensor.quant['bits']\n                signed = tensor.quant['signed']\n                assert bits == 8 or bits == 32\n                tensor.dtype = (np.int8 if signed else np.uint8) if bits == 8 else (np.int32 if signed else np.uint32)\n\n    def _fix_quantization_attribs(self, graph):\n        for tensor in graph.tensors:\n            if tensor.quant:\n                opname = tensor.quant['op-name']\n                if opname != 'zero_point_linear_quantize':\n                    raise ConversionError(\"Quantization operation '{}' cannot be converted to TFLite\".format(opname))\n\n                del tensor.quant['op-name']\n                del tensor.quant['bits']\n                if 
'signed' in tensor.quant:\n                    del tensor.quant['signed']\n                if 'symmetric' in tensor.quant:\n                    del tensor.quant['symmetric']\n\n    def _fix_custom_options(self, graph):\n        for op in graph.operations:\n            if op.custom:\n                options = op.attribs.get(CustomOptionsKey)\n                if options is not None:\n                    op.attribs[CustomOptionsKey] = bytes.fromhex(options)\n\n    def _make_constant(self, graph, dtype, value, inline):\n        return Tensor(graph, dtype=dtype, shape=self._shape_of(value), data=value)\n\n    def _ensure_constant_producer(self, tensor):\n        pass\n\n    def _transform_constant(self, tensor, func):\n        data = func(tensor.data)\n        tensor.shape = data.shape\n        tensor.data = data\n\n    def _squeeze_operation(self, input, output, axes):\n        Operation(input.graph, type='SQUEEZE', inputs=input, outputs=output, attribs={'squeeze_dims': axes})\n\n    def _unsqueeze_operation(self, input, output, axes):\n        if len(axes) == 1:\n            Operation(input.graph, type='EXPAND_DIMS', inputs=(input, self.as_tensor(axes[0], np.int32)),\n                      outputs=output)\n        else:\n            Operation(input.graph, type='RESHAPE', inputs=(input, self.as_tensor(output.shape, np.int32)),\n                      outputs=output, attribs={'new_shape': output.shape})\n\n    def _transpose_operation(self, input, output, perm):\n        Operation(input.graph, type='TRANSPOSE', inputs=(input, self.as_tensor(perm, np.int32)),\n                  outputs=output)\n\n    def _reshape_operation(self, input, output, shape):\n        Operation(input.graph, type='RESHAPE', inputs=(input, self.as_tensor(shape, np.int32)), outputs=output,\n                  attribs={'new_shape': shape})\n\n    def _bias_operation(self, input, output, bias):\n        if not isinstance(bias, Tensor):\n            bias = self.as_tensor(bias, np.float32)\n\n        
Operation(input.graph, type='ADD', inputs=(input, bias), outputs=output)\n\n    def _scale_operation(self, input, output, scalar):\n        if not isinstance(scalar, Tensor):\n            scalar = self.as_tensor(scalar, np.float32)\n\n        Operation(input.graph, type='MUL', inputs=(input, scalar), outputs=output)\n\n    def _pad_operation(self, input, output, paddings):\n        if not isinstance(paddings, Tensor):\n            paddings = self.as_tensor(paddings, np.int64)\n\n        Operation(input.graph, type='PAD', inputs=(input, paddings), outputs=output, attribs={})\n\n    def is_same_padding(self, input_size, output_size, stride):\n        return all(o == i // s for i, o, s in zip(input_size, output_size, stride))\n\n    def is_valid_padding(self, padding):\n        return len(padding) != 0 and all(p == (0, 0) for p in padding)\n\n    def pad_input(self, input, paddings):\n        if all(item == (0, 0) for item in paddings):\n            return input\n\n        shape = tuple(p + x + q for x, (p, q) in zip(self._working_shape(input), paddings))\n        output = Tensor(input.graph, dtype=input.dtype, shape=shape, quant=copy.deepcopy(input.quant))\n        self._pad_operation(input, output, paddings)\n        return output\n\n\n_Transforms = Converter.unpack_transforms({\n    ('external', 'constant'):\n        Transform(type=None),\n    'conv':\n        Transform(\n            type='!\"CONV_2D\" if not depthwise else \"DEPTHWISE_CONV_2D\"',\n            cond={\n                '!I[0].rank == 4': 'rank must be 4',\n            },\n            using={\n                'depthwise': '!groups == 0',\n                'channels': '!I[0].shape[1]',\n                'valid_pad': '!is_valid_padding(padding)',\n                'same_pad': '!is_same_padding(I[0].shape[2:], O[0].shape[2:], stride)',\n                'pads': '![(0, 0)] + padding + [(0, 0)]',\n            },\n            inputs=(\n                '!transpose_input(I[0]) if same_pad or valid_pad else 
pad_input(transpose_input(I[0]), pads)',\n                '!transpose_filter(I[1], format=\"NXC\" if not depthwise else \"CXN\")',\n                '!squeeze_vector(I[2])',\n            ),\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'stride_h': '!stride[0]',\n                'stride_w': '!stride[1]',\n                'dilation_h_factor': '!dilation[0]',\n                'dilation_w_factor': '!dilation[1]',\n                'padding': '!\"VALID\" if valid_pad else \"SAME\"',\n                'depth_multiplier': '!O[0].shape[1] // channels if depthwise else None',\n            }\n        ),\n    'deconv':\n        Transform(\n            type='TRANSPOSE_CONV',\n            cond={\n                '!I[0].rank == 4': 'rank must be 4',\n                '!groups == 1': 'groups must be 1',\n            },\n            using={\n                'depthwise': '!groups == 0',\n                'channels': '!O[0].shape[1]',\n                'valid_pad': '!is_valid_padding(padding)',\n                'same_pad': '!is_same_padding(I[0].shape[2:], O[0].shape[2:], stride)',\n                'pads': '![(0, 0)] + padding + [(0, 0)]',\n            },\n            inputs=(\n                '!as_tensor(ncx_to_nxc(output_shape), np.int32)',\n                '!transpose_filter(I[1], format=\"CXN\" if not depthwise else \"NXC\")',\n                '!transpose_input(I[0]) if same_pad or valid_pad else pad_input(transpose_input(I[0]), pads)',\n            ),\n            outputs='!bias_add(transpose_output(O[0]), squeeze_vector(I[2]) if I[2].rank == 2 else I[2])',\n            attribs={\n                'stride_h': '!stride[0]',\n                'stride_w': '!stride[1]',\n                'padding': '!\"VALID\" if valid_pad else \"SAME\"',\n                'depth_multiplier': '!I[1].shape[0] // channels if depthwise else None',\n            }\n        ),\n    ('max_pool', 'avg_pool'):\n        Transform(\n            cond={\n                '!size[0] 
== 1 and size[1] == 1': 'size must be 1 in batch and channel dimensions',\n                '!stride[0] == 1 and stride[1] == 1': 'stride must be 1 in batch and channel dimensions',\n                '!border == \"ignore\"': 'border must be \"ignore\"',\n            },\n            type=('MAX_POOL_2D', 'AVERAGE_POOL_2D'),\n            using={\n                'valid_pad': '!is_valid_padding(padding)',\n                'same_pad': '!is_same_padding(I[0].shape[2:], O[0].shape[2:], stride[2:])',\n            },\n            inputs=(\n                '!transpose_input(I[0]) if same_pad or valid_pad else pad_input(transpose_input(I[0]), padding)',\n            ),\n            outputs=(\n                '!transpose_output(O[0])',\n            ),\n            attribs={\n                'filter_height': '!size[2]',\n                'filter_width': '!size[3]',\n                'stride_h': '!stride[2]',\n                'stride_w': '!stride[3]',\n                'padding': '!\"VALID\" if valid_pad else \"SAME\"',\n            }\n        ),\n    'reshape':\n        Transform(\n            type='RESHAPE',\n            using={\n                'new_shape': '!fixed_batch(shape, I[0].shape[0])',\n            },\n            inputs=(\n                '!undo_transpose(I[0])',\n                '!as_tensor(new_shape, np.int32)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'new_shape': '!new_shape',\n            }\n        ),\n    'concat':\n        Transform(\n            type='CONCATENATION',\n            inputs=['!I[:]'],\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axis': '!transpose_axis_like(axis, I[0])',\n            }\n        ),\n    'copy':\n        Transform(\n            type='RESHAPE',\n            using={\n                'shape': '!transpose_list_like(I[0].shape, I[0])',\n            },\n            inputs=(\n                '!I[0]',\n                '!as_tensor(shape, 
np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'new_shape': '!shape',\n            }\n        ),\n    'linear':\n        Transform(\n            type='FULLY_CONNECTED',\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!squeeze_vector(I[2]) if not is_zero(I[2]) else None',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'fused_activation_function': \"NONE\",\n                'weights_format': \"DEFAULT\",\n                'keep_num_dims': True,\n                'asymmetric_quantize_inputs': False,\n            }\n        ),\n    'matmul':\n        Transform(\n            type='BATCH_MATMUL',\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'adj_x': '!transposeA',\n                'adj_y': '!transposeB',\n                'asymmetric_quantize_inputs': False,\n            }\n        ),\n    'batch_normalization':\n        Transform(\n            type='MUL',\n            cond={\n                '!I[1].data is not None and I[2].data is not None and'\n                ' (len(I) == 3 or I[3].data is not None) and (len(I) == 4 or I[4].data is not None)':\n                    'all parameters must be constants',\n                '!not any(t.quant for t in I)': 'quantized inputs or parameters are not supported',\n            },\n            using={\n                'mean': '!np.squeeze(I[1].data, axis=0) if I[1].data is not None else None',\n                'std': '!np.squeeze(np.sqrt(I[2].data + epsilon), axis=0) if I[2].data is not None else None',\n                'offset': '!np.squeeze(I[3].data, axis=0) if I[3].data is not None else None if len(I) > 3 else 0',\n                'scale': '!np.squeeze(I[4].data, axis=0) if I[4].data is not None else None if len(I) > 4 else 1',\n            },\n            inputs=(\n 
               '!transpose_input(I[0])',\n                '!as_tensor(scale / std, np.float32)',\n            ),\n            outputs='!bias_add(transpose_like(O[0], I[0]), as_tensor(offset - scale * mean / std, np.float32))',\n        ),\n    'l2_normalization':\n        Transform(\n            type='L2_NORMALIZATION',\n            cond={\n                '!axes == list(range(I[0].rank))': 'axes must denote all dimensions',\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n        ),\n    'prelu':\n        Transform(\n            type='PRELU',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!transpose_like(O[0], I[0])',\n        ),\n    'pad':\n        Transform(\n            type='!\"PAD\" if border == \"constant\" else \"MIRROR_PAD\"',\n            cond={\n                '!border in [\"constant\", \"reflect\", \"reflect-even\"]':\n                    'border must be one of \"constant\", \"reflect\", \"reflect-even\"',\n            },\n            using={'paddings': '![list(item) for item in padding]'},\n            inputs=(\n                '!I[0]',\n                '!as_tensor(ncx_to_nxc(paddings, cond=transposing(I[0])), np.int32)',\n            ),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'mode': '!0 if border == \"reflect\" else 1 if border == \"reflect-even\" else None',\n            },\n        ),\n    'gather':\n        Transform(\n            type='GATHER',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axis': '!transpose_axis_like(axis, I[0])',\n            },\n        ),\n    'cast':\n        Transform(\n            type='CAST',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'in_data_type': '!I[0].dtype',\n                'out_data_type': '!O[0].dtype',\n            },\n        ),\n   
 # 'copy': _TFTransforms['copy'].with_type('IDENTITY'),  # only works in TF 2.3\n    'transpose': _TFTransforms['transpose'].with_type('TRANSPOSE'),\n    'split': _TFTransforms['split'].with_type('SPLIT_V'),\n    'squeeze': _TFTransforms['squeeze'].with_type('SQUEEZE'),\n    'unsqueeze': _TFTransforms['unsqueeze'].with_type('!\"EXPAND_DIMS\" if len(axes) == 1 else \"RESHAPE\"'),\n    'relu': _TFTransforms['relu'].with_type('RELU'),\n    'relu6': _TFTransforms['relu6'].with_type('RELU6'),\n    'elu': _TFTransforms['elu'].with_type('ELU'),\n    'leaky_relu': _TFTransforms['leaky_relu'].with_type('LEAKY_RELU'),\n    'sigmoid': _TFTransforms['sigmoid'].with_type('LOGISTIC'),\n    'sin': _TFTransforms['sin'].with_type('SIN'),\n    'cos': _TFTransforms['cos'].with_type('COS'),\n    'tan': _TFTransforms['tan'].with_type('TAN'),\n    'asin': _TFTransforms['asin'].with_type('ASIN'),\n    'acos': _TFTransforms['acos'].with_type('ACOS'),\n    'atan': _TFTransforms['atan'].with_type('ATAN'),\n    'sinh': _TFTransforms['sinh'].with_type('SINH'),\n    'cosh': _TFTransforms['cosh'].with_type('COSH'),\n    'tanh': _TFTransforms['tanh'].with_type('TANH'),\n    'asinh': _TFTransforms['asinh'].with_type('ASINH'),\n    'acosh': _TFTransforms['acosh'].with_type('ACOSH'),\n    'atanh': _TFTransforms['atanh'].with_type('ATANH'),\n    'exp': _TFTransforms['exp'].with_type('EXP'),\n    'log': _TFTransforms['log'].with_type('LOG'),\n    'abs': _TFTransforms['abs'].with_type('ABS'),\n    'neg': _TFTransforms['neg'].with_type('NEG'),\n    'not': _TFTransforms['not'].with_type('LOGICAL_NOT'),\n    'floor': _TFTransforms['floor'].with_type('FLOOR'),\n    'ceil': _TFTransforms['ceil'].with_type('CEIL'),\n    'round': _TFTransforms['round'].with_type('ROUND'),\n    'sqr': _TFTransforms['sqr'].with_type('SQUARE'),\n    'sqrt': _TFTransforms['sqrt'].with_type('SQRT'),\n    'rsqrt': _TFTransforms['rsqrt'].with_type('RSQRT'),\n    'add': _TFTransforms['add'].with_type('ADD'),\n    'sub': 
_TFTransforms['sub'].with_type('SUB'),\n    'mul': _TFTransforms['mul'].with_type('MUL'),\n    'div': _TFTransforms['div'].with_type('DIV'),\n    'pow': _TFTransforms['pow'].with_type('POW'),\n    'min': _TFTransforms['min'].with_type('MINIMUM'),\n    'max': _TFTransforms['max'].with_type('MAXIMUM'),\n    'and': _TFTransforms['and'].with_type('LOGICAL_AND'),\n    'or': _TFTransforms['or'].with_type('LOGICAL_OR'),\n    'lt': _TFTransforms['lt'].with_type('LESS'),\n    'le': _TFTransforms['le'].with_type('LESS_EQUAL'),\n    'gt': _TFTransforms['gt'].with_type('GREATER'),\n    'ge': _TFTransforms['ge'].with_type('GREATER_EQUAL'),\n    'eq': _TFTransforms['eq'].with_type('EQUAL'),\n    'ne': _TFTransforms['ne'].with_type('NOT_EQUAL'),\n    'select': _TFTransforms['select'].with_type('SELECT'),\n    'min_reduce': _TFTransforms['min_reduce'].with_type('REDUCE_MIN'),\n    'max_reduce': _TFTransforms['max_reduce'].with_type('REDUCE_MAX'),\n    'mean_reduce': _TFTransforms['mean_reduce'].with_type('MEAN'),\n    'sum_reduce': _TFTransforms['sum_reduce'].with_type('SUM'),\n    'any_reduce': _TFTransforms['any_reduce'].with_type('REDUCE_ANY'),\n    'all_reduce': _TFTransforms['all_reduce'].with_type('REDUCE_ALL'),\n    'argmin_reduce': _TFTransforms['argmin_reduce'].with_type('ARG_MIN'),\n    'argmax_reduce': _TFTransforms['argmax_reduce'].with_type('ARG_MAX'),\n    'stack': _TFTransforms['stack'].with_type('PACK'),\n    'unstack': _TFTransforms['unstack'].with_type('UNPACK'),\n    'tile': _TFTransforms['tile'].with_type('TILE'),\n    'slice': _TFTransforms['slice'].with_type('STRIDED_SLICE'),\n    'softmax': _TFTransforms['softmax'].with_type('SOFTMAX'),\n    'local_response_normalization': _TFTransforms['local_response_normalization'].with_type('LOCAL_RESPONSE_NORMALIZATION'),\n    'nearest_upsample': _TFTransforms['nearest_upsample'].with_type('RESIZE_NEAREST_NEIGHBOR'),\n    'nearest_downsample': _TFTransforms['nearest_downsample'].with_type('RESIZE_NEAREST_NEIGHBOR'),\n   
 'multilinear_upsample': _TFTransforms['multilinear_upsample'].with_type('RESIZE_BILINEAR'),\n    'add_n': _TFTransforms['add_n'].with_type('ADD_N'),\n})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/onnx_to_nnef.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\nfrom .converter import ConverterToNNEF as _Converter, Transform, ConversionError\nfrom ..model.utils import generate_tensor_names_from_op_type\nfrom ..model import Tensor\nfrom ..utils import types\nfrom collections import OrderedDict\nimport numpy as np\nimport copy\nfrom nnef.shapes import pool_shape, reduce_shape\n\n\n_LP_POOL_FRAGMENT = \"\"\"\nfragment lp_pool( \n    input: tensor<scalar>,\n    size: integer[],\n    border: string = 'constant',\n    padding: (integer, integer)[] = [],\n    stride: integer[] = [],\n    dilation: integer[] = [],\n    p: scalar = 2.0 ) \n-> ( output: tensor<scalar> )\n{\n    powered = pow(abs(input), p);\n    summed = box(powered, size = size, border = border, padding = padding, stride = stride, dilation = dilation);\n    output = pow(summed, 1.0 / p);\n}\n\"\"\"\n\n_LP_REDUCE_FRAGMENT = \"\"\"\nfragment lp_reduce( \n    input: tensor<scalar>,\n    axes: integer[],\n    p: scalar = 2.0 ) \n-> ( output: tensor<scalar> )\n{\n    powered = pow(abs(input), p);\n    summed = sum_reduce(powered, axes = axes);\n    output = pow(summed, 1.0 / p);\n}\n\"\"\"\n\n_MEAN_VARIANCE_NORMALIZATION_FRAGMENT = \"\"\"\nfragment mean_variance_normalization( \n    input: tensor<scalar>,\n    scale: tensor<scalar>,\n    offset: tensor<scalar>,\n    axes: integer[],\n    
epsilon: scalar = 1e-5 ) \n-> ( output: tensor<scalar> )\n{\n    mean, variance = moments(input, axes = axes);\n    output = scale * (input - mean) / sqrt(variance + epsilon) + offset;\n}\n\"\"\"\n\n_LSTM_STEP_FRAGMENT = \"\"\"\nfragment lstm_step(\n    x: tensor<scalar>,\n    h: tensor<scalar>,\n    c: tensor<scalar>,\n    W: tensor<scalar>,\n    R: tensor<scalar>,\n    B: tensor<scalar> )\n-> ( h_out: tensor<scalar>,\n    c_out: tensor<scalar> )\n{\n    [Wb, Rb] = split(B, axis = 1, ratios = [1, 1]);\n    z = linear(x, W, Wb) + linear(h, R, Rb);\n    [i, f, g, o] = split(z, axis = 1, ratios=[1, 1, 1, 1]);\n    c_out = sigmoid(f) * c + sigmoid(i) * tanh(g);\n    h_out = sigmoid(o) * tanh(c_out);\n}\n\"\"\"\n\n_LSTM_LOOP_FRAGMENT = \"\"\"\nfragment lstm_loop(\n    X: tensor<scalar>,\n    W: tensor<scalar>,\n    R: tensor<scalar>,\n    B: tensor<scalar>,\n    h0: tensor<scalar>,\n    c0: tensor<scalar>,\n    steps: integer,\n    index: integer = 0,\n    axis: integer = 0 )\n-> ( hn: tensor<scalar>, cn: tensor<scalar> )\n{\n    x0 = squeeze(slice(X, axes = [axis], begin = [index], end = [index + 1]), axes = [axis]);\n    h1, c1 = lstm_step(x0, h0, c0, W, R, B);\n    hn, cn = lstm_loop(X, W, R, B, h1, c1, index = index + 1, steps=steps) if index + 1 < steps else (h1, c1);\n}\n\"\"\"\n\n_ERF_FRAGMENT = \"\"\"\nfragment erf( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    t = 1.0 / (1.0 + 0.3275911 * abs(x));\n    z = 1.0 - (((((1.061405429 * t + -1.453152027) * t) + 1.421413741) * t + -0.284496736) * t + 0.254829592) * t * exp(-x * x);\n    y = sign(x) * z;\n}\n\"\"\"\n\n_MISH_FRAGMENT = \"\"\"\nfragment mish( x: tensor<scalar> ) -> ( y: tensor<scalar> )\n{\n    y = x * tanh(log(1.0 + exp(x)));\n}\n\"\"\"\n\n_DEPTH_TO_SPACE_FRAGMENT = \"\"\"\nfragment depth_to_space( x: tensor<scalar>, block_size: integer, blocks_first: logical ) -> ( y: tensor<scalar> )\n{\n    r = reshape(x, axis_start=1, axis_count=1, shape=[block_size, block_size, -1] \n                        
        if blocks_first else [-1, block_size, block_size]);\n    t = transpose(r, axes=[0, 3, 4, 1, 5, 2] if blocks_first else [0, 1, 4, 2, 5, 3]);\n    q = reshape(t, axis_start=4, axis_count=2, shape=[-1]);\n    y = reshape(q, axis_start=2, axis_count=2, shape=[-1]);\n}\n\"\"\"\n\n_SPACE_TO_DEPTH_FRAGMENT = \"\"\"\nfragment space_to_depth( x: tensor<scalar>, block_size: integer, blocks_first: logical ) -> ( y: tensor<scalar> )\n{\n    p = reshape(x, axis_start=3, axis_count=1, shape=[-1, block_size]);\n    r = reshape(p, axis_start=2, axis_count=1, shape=[-1, block_size]);\n    t = transpose(r, axes=[0, 3, 5, 1, 2, 4] if blocks_first else [0, 1, 3, 5, 2, 4]);\n    y = reshape(t, axis_start=1, axis_count=1, shape=[-1]);\n}\n\"\"\"\n\n_INT_MAX = 2 ** 31 - 1\n\n\nclass Converter(_Converter):\n\n    @staticmethod\n    def defined_operations():\n        return {\n            'lp_pool': _LP_POOL_FRAGMENT,\n            'lp_reduce': _LP_REDUCE_FRAGMENT,\n            'mean_variance_normalization': _MEAN_VARIANCE_NORMALIZATION_FRAGMENT,\n            'lstm_step': _LSTM_STEP_FRAGMENT,\n            'lstm_loop': _LSTM_LOOP_FRAGMENT,\n            'erf': _ERF_FRAGMENT,\n            'mish': _MISH_FRAGMENT,\n            'depth_to_space': _DEPTH_TO_SPACE_FRAGMENT,\n            'space_to_depth': _SPACE_TO_DEPTH_FRAGMENT,\n        }\n\n    @staticmethod\n    def defined_operation_dependencies():\n        return {\n            'lstm_loop': ['lstm_step'],\n        }\n\n    @staticmethod\n    def defined_shapes():\n        return {\n            'lp_pool': pool_shape,\n            'lp_reduce': reduce_shape,\n            'mean_variance_normalization': lambda input, scale, offset, **kwargs: input,\n            'lstm_step': lambda x, h, c, W, R, B: (h, c),\n            'lstm_loop': lambda X, W, R, B, h, c, **kwargs: (h, c),\n            'erf': lambda x: x,\n            'mish': lambda x: x,\n            'depth_to_space': lambda x, block_size, **kwargs: [x[0], x[1] // block_size ** 2, x[2] * 
block_size, x[3] * block_size],\n            'space_to_depth': lambda x, block_size, **kwargs: [x[0], x[1] * block_size ** 2, x[2] // block_size, x[3] // block_size],\n        }\n\n    def __init__(self, custom_transforms=None, custom_functions=None, mirror_unsupported=False, keep_io_names=False,\n                 infer_shapes=False, custom_shapes=None, io_transpose=False):\n        _Converter.__init__(self, transforms=self.merge_transforms(_Transforms, custom_transforms),\n                            functions=custom_functions,\n                            mirror_unsupported=mirror_unsupported,\n                            infer_shapes=infer_shapes,\n                            custom_shapes=dict(**self.defined_shapes(), **custom_shapes or {}))\n        self._keep_io_names = keep_io_names\n        self._io_transpose = io_transpose\n\n    def __call__(self, graph):\n        graph = _Converter.__call__(self, graph)\n        self.remove_unused_constants(graph)\n        self.inline_scalar_constants(graph)\n        self.convert_constants_to_variables(graph)\n        self._ensure_valid_ids(graph)\n        if self._io_transpose is not False:\n            self._transpose_inputs(graph)\n            self._transpose_outputs(graph)\n            graph.sort()\n        generate_tensor_names_from_op_type(graph, keep_io_names=self._keep_io_names)\n        return graph\n\n    def _prepare(self, graph):\n        self._insert_externals_and_constants(graph)\n\n    def _is_constant(self, tensor):\n        if tensor.producer:\n            return tensor.producer.type == 'Constant'\n        else:\n            return tensor.data is not None\n\n    def _read_constant(self, tensor, type=None):\n        if tensor.producer and tensor.producer.type == 'Constant':\n            value = tensor.producer.attribs['value']\n        elif not tensor.producer:\n            value = tensor.data\n        else:\n            raise ConversionError('trying to evaluate non-constant tensor')\n\n        return 
types.from_numpy(value, type=type) if isinstance(value, np.ndarray) else types.cast(value, type=type)\n\n    def _needs_io_transpose(self, tensor):\n        if tensor.rank <= 2:\n            return False\n        if isinstance(self._io_transpose, bool):\n            return self._io_transpose\n        else:\n            return tensor.name in self._io_transpose\n\n    def _transpose_inputs(self, graph):\n        inputs = [self._transpose_input(tensor) if self._needs_io_transpose(tensor) else tensor\n                  for tensor in graph.inputs]\n\n        if self._keep_io_names:\n            for i in range(len(inputs)):\n                if inputs[i] is not graph.inputs[i]:\n                    inputs[i].name = graph.inputs[i].name\n\n        graph.inputs = inputs\n\n    def _transpose_outputs(self, graph):\n        outputs = [self._transpose_output(tensor) if self._needs_io_transpose(tensor) else tensor\n                   for tensor in graph.outputs]\n\n        if self._keep_io_names:\n            for i in range(len(outputs)):\n                if outputs[i] is not graph.outputs[i]:\n                    outputs[i].name = graph.outputs[i].name\n\n        graph.outputs = outputs\n\n    def _transpose_input(self, tensor):\n        external = tensor.producer\n        external.outputs = self._post_transpose(tensor, self.ncx_to_nxc_perm(tensor.rank))\n        external.attribs['shape'] = list(self.nxc_to_ncx(tensor.shape))\n        return external.output\n\n    def _transpose_output(self, tensor):\n        return self._pre_transpose(tensor, self.nxc_to_ncx_perm(tensor.rank))\n\n    @staticmethod\n    def _interleave(items):\n        return [item[0] for item in items] + [item[1] for item in items]\n\n    @staticmethod\n    def _uninterleave(items):\n        count = len(items) // 2\n        return list(zip(items[:count], items[count:]))\n\n    def convert_padding(self, pads, auto_pad, output_padding, rank, ceil_stride=None):\n        if auto_pad == \"NOTSET\" or auto_pad == 
\"SAME_LOWER\":\n            padding = self._uninterleave(pads)\n            if output_padding is not None:\n                for i in range(len(padding)):\n                    padding[i] = (padding[i][0], padding[i][1] - output_padding[i])\n            padding = [(0, 0)] * (rank - len(padding)) + padding\n            return self.ceil_pads(padding, ceil_stride) if ceil_stride else padding\n        elif auto_pad == \"VALID\":\n            padding = [(0, 0,)] * rank\n            if output_padding is not None:\n                offs = rank - len(output_padding)\n                for i in range(len(output_padding)):\n                    padding[i + offs] = (padding[i + offs][0], padding[i + offs][1] - output_padding[i])\n            return self.ceil_pads(padding, ceil_stride) if ceil_stride else padding\n        elif auto_pad == \"SAME_UPPER\":\n            return []\n        else:\n            assert False\n\n    def convert_pads(self, pads):\n        return self._uninterleave(pads)\n\n    def squeeze_input(self, tensor, axes, keep_dims=False):\n        return self._pre_squeeze(tensor, axes=axes) if not keep_dims and len(axes) else tensor\n\n    def squeeze_output(self, tensor, axes, keep_dims=False):\n        return self._post_squeeze(tensor, axes=axes) if not keep_dims and len(axes) else tensor\n\n    def unsqueeze_input(self, tensor, axes, keep_dims=False):\n        return self._pre_unsqueeze(tensor, axes=axes) if not keep_dims and len(axes) else tensor\n\n    def unsqueeze_output(self, tensor, axes, keep_dims=False):\n        return self._post_unsqueeze(tensor, axes=axes) if not keep_dims and len(axes) else tensor\n\n    def unsqueeze_vector(self, tensor):\n        original = self._tensor_map[tensor]\n        if self._is_constant(original) and len(original.consumers) == 1:\n            self._transform_constant(tensor, lambda data: np.expand_dims(data, 0))\n            return tensor\n        else:\n            return self.unsqueeze_input(tensor, axes=[0])\n\n    def 
bias_add(self, output, bias):\n        if bias.rank == 0 and bias.data == 0:\n            return output\n\n        input = Tensor(output.graph, dtype=output.dtype, shape=output.shape, quant=copy.deepcopy(output.quant))\n        self._bias_operation(input, output, bias)\n        return input\n\n    def lower_pads(self, input_size, filter_size, output_size, stride, dilation):\n        rank = len(input_size)\n        total = [None] * rank\n        for i in range(rank):\n            dilated_size = (filter_size[i] - 1) * dilation[i] + 1\n            total[i] = max((input_size[i] // stride[i] - 1) * stride[i] + dilated_size - output_size[i], 0)\n        pads = [(t // 2, t - t // 2) for t in total]\n        return self._interleave(pads)\n\n    def ceil_pads(self, pads, stride):\n        return [(p, q + s - 1) for (p, q), s in zip(pads, stride)]\n\n    def broadcast(self, tensor, rank):\n        return self.unsqueeze_input(tensor, axes=list(range(rank - tensor.rank))) if tensor.rank > 0 else tensor\n\n    def ensure_list(self, arg):\n        return [arg] if not isinstance(arg, list) else arg\n\n    def ensure_scalar(self, arg):\n        return arg[0] if isinstance(arg, list) and len(arg) == 1 else arg\n\n    def limit_range(self, x):\n        return _INT_MAX if x > _INT_MAX else -_INT_MAX if x < -_INT_MAX else x\n\n    def is_unused(self, tensor):\n        if len(tensor.name) == 0:\n            return True\n        original = self._tensor_map[tensor]\n        return len(original.consumers) == 0\n\n\n_Transforms = Converter.unpack_transforms({\n    ('Conv', 'ConvTranspose'):\n        Transform(\n            type=('conv', 'deconv'),\n            defaults={\n                'strides': '![1] * (I[0].rank - 2)',\n                'dilations': '![1] * (I[0].rank - 2)',\n                'pads': '![0, 0] * (I[0].rank - 2)',\n                'auto_pad': \"NOTSET\",\n                'group': 1,\n                'output_shape': None,\n                'output_padding': None,\n          
  },\n            using={\n                '_pads': '!lower_pads(I[0].shape[2:], I[1].shape[2:], O[0].shape[2:], strides, dilations)'\n                         ' if auto_pad == \"SAME_LOWER\" else pads',\n            },\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!unsqueeze_vector(I[2]) if len(I) > 2 else None',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'stride': '!strides',\n                'dilation': '!dilations',\n                'padding': '!convert_padding(_pads, auto_pad, output_padding, I[0].rank - 2)',\n                'groups': '!group',\n                'output_shape': '!output_shape',\n            }\n        ),\n    ('MaxPool', 'AveragePool', 'LpPool'):\n        Transform(\n            type=('max_pool', 'avg_pool', 'lp_pool'),\n            defaults={\n                'strides': '![1] * (I[0].rank - 2)',\n                'dilations': '![1] * (I[0].rank - 2)',\n                'pads': '![0, 0] * (I[0].rank - 2)',\n                'auto_pad': \"NOTSET\",\n                'ceil_mode': 0,\n                'storage_order': 0,\n                'count_include_pad': 0,\n            },\n            cond={\n                '!storage_order == 0': 'storage_order must be 0',\n            },\n            using={\n                '_pads': '!lower_pads(I[0].shape[2:], kernel_shape, O[0].shape[2:], strides, dilations)'\n                         ' if auto_pad == \"SAME_LOWER\" else pads',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'size': '![1, 1] + kernel_shape',\n                'stride': '![1, 1] + strides',\n                'dilation': '![1, 1] + dilations',\n                'padding': '!convert_padding(_pads, auto_pad, None, I[0].rank, [1, 1] + strides if ceil_mode == 1 else None)',\n                'border': '!\"constant\" if count_include_pad else \"ignore\"',\n            }\n        ),\n    
('GlobalMaxPool', 'GlobalAveragePool', 'GlobalLpPool'):\n        Transform(\n            type=('max_reduce', 'mean_reduce', 'lp_reduce'),\n            defaults={\n                'p': 2,\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!list(range(2, I[0].rank))',\n                'p': '!float(p) if _type_ == \"GlobalLpPool\" else None',\n            }\n        ),\n    ('ReduceMin', 'ReduceMax', 'ReduceMean', 'ReduceSum', 'ReduceL1', 'ReduceL2'):\n        Transform(\n            type=('min_reduce', 'max_reduce', 'mean_reduce', 'sum_reduce', 'lp_reduce', 'lp_reduce'),\n            defaults={\n                'keepdims': 1,\n            },\n            inputs='!I[0]',\n            outputs='!squeeze_output(O[0], axes, keepdims)',\n            attribs={\n                'axes': '!ensure_positive(axes, I[0].rank)',\n                'p': '!1.0 if _type_ == \"ReduceL1\" else 2.0 if _type_ == \"ReduceL2\" else None',\n            }\n        ),\n    ('ArgMin', 'ArgMax'):\n        Transform(\n            type=('argmin_reduce', 'argmax_reduce'),\n            defaults={\n                'axis': 0,\n                'keepdims': 1,\n                'select_last_index': 0,\n            },\n            using={\n                'axes': '![ensure_positive(axis, I[0].rank)]',\n            },\n            cond={\n                '!select_last_index == 0': 'select_last_index must be 0',\n            },\n            inputs='!I[0]',\n            outputs='!squeeze_output(O[0], axes, keepdims)',\n            attribs={\n                'axes': '!axes',\n            }\n        ),\n    'BatchNormalization':\n        Transform(\n            type='batch_normalization',\n            defaults={\n                'epsilon': 1e-5,\n                'spatial': 1,\n            },\n            inputs=(\n                '!I[0]',\n                '!unsqueeze_vector(I[3])',\n                '!unsqueeze_vector(I[4])',\n                
'!unsqueeze_vector(I[2])',\n                '!unsqueeze_vector(I[1])',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'epsilon': '!epsilon',\n            }\n        ),\n    ('Relu', 'Sigmoid', 'Tanh', 'Softplus', 'Selu', 'Not', 'Identity', 'Elu', 'Erf', 'Mish', 'Abs', 'Sign',\n     'Sin', 'Cos', 'Tan', 'Asin', 'Acos', 'Atan', 'Sinh', 'Cosh', 'Tanh', 'Asinh', 'Acosh', 'Atanh',\n     'Exp', 'Log', 'Neg', 'Sqrt', 'Ceil', 'Floor', 'Round'):\n        Transform(\n            type=('relu', 'sigmoid', 'tanh', 'softplus', 'selu', 'not', 'copy', 'elu', 'erf', 'mish', 'abs', 'sign',\n                  'sin', 'cos', 'tan', 'asin', 'acos', 'atan', 'sinh', 'cosh', 'tanh', 'asinh', 'acosh', 'atanh',\n                  'exp', 'log', 'neg', 'sqrt', 'ceil', 'floor', 'round'),\n            inputs='!I[0]',\n            outputs='!O[0]',\n        ),\n    ('Add', 'Sub', 'Mul', 'Div', 'Pow', 'Min', 'Max', 'And', 'Or',\n     'Equal', 'Less', 'Greater', 'LessOrEqual', 'GreaterOrEqual'):\n        Transform(\n            type=('add', 'sub', 'mul', 'div', 'pow', 'min', 'max', 'and', 'or', 'eq', 'lt', 'gt', 'le', 'ge'),\n            inputs=(\n                '!broadcast(I[0], O[0].rank)',\n                '!broadcast(I[1], O[0].rank)',\n            ),\n            outputs='!O[0]',\n        ),\n    'LeakyRelu':\n        Transform(\n            type='leaky_relu',\n            defaults={\n                'alpha': 0.01,\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'alpha': '!alpha',\n            }\n        ),\n    'PRelu':\n        Transform(\n            type='prelu',\n            inputs=(\n                '!I[0]',\n                '!broadcast(I[1], I[0].rank)',\n            ),\n            outputs='!O[0]',\n        ),\n    'Transpose':\n        Transform(\n            type='transpose',\n            defaults={\n                'perm': '!list(reversed(range(I[0].rank)))',\n            },\n   
         inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!ensure_positive(perm, I[0].rank)',\n            }\n        ),\n    'Reshape':\n        Transform(\n            type='reshape',\n            defaults={\n                'shape': '!as_const(I[1])',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'shape': '!flexible_batch(shape, I[0].shape[0])',\n            }\n        ),\n    'Flatten':\n        Transform(\n            type='reshape',\n            defaults={\n                'axis': 1,\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'shape': '![0] * axis + [-1]',\n            }\n        ),\n    'Squeeze':\n        Transform(\n            type='squeeze',\n            defaults={\n                'axes': '!as_const(I[1])',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!ensure_positive(axes, I[0].rank)',\n            }\n        ),\n    'Unsqueeze':\n        Transform(\n            type='unsqueeze',\n            defaults={\n                'axes': '!as_const(I[1])',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!ensure_positive(axes, O[0].rank)',\n            }\n        ),\n    'MatMul':\n        Transform(\n            type='matmul',\n            inputs=(\n                '!broadcast(I[0], O[0].rank)',\n                '!broadcast(I[1], O[0].rank)',\n            ),\n            outputs=(\n                '!O[0]',\n            ),\n        ),\n    'Gemm':\n        Transform(\n            type='!\"linear\" if is_linear else \"matmul\"',\n            defaults={\n                'alpha': 1.0,\n                'beta': 1.0,\n                'transA': 0,\n                'transB': 0,\n            },\n            cond={\n                
'!alpha == 1.0': 'alpha must be 1',\n                '!beta == 1.0 or len(I) == 2': 'beta must be 1',\n            },\n            using={\n                'is_linear': '!len(I) > 2 and I[2].rank == 1 and transB',\n                'bias': '!broadcast(I[2], O[0].rank) if len(I) > 2 and not is_linear else None',\n            },\n            inputs=(\n                '!I[0]',\n                '!I[1]',\n                '!unsqueeze_vector(I[2]) if is_linear else None',\n            ),\n            outputs='!O[0] if is_linear or bias is None else bias_add(O[0], bias)',\n            attribs={\n                'transposeA': '!bool(transA) if not is_linear else None',\n                'transposeB': '!bool(transB) if not is_linear else None',\n            }\n        ),\n    'LRN':\n        Transform(\n            type='local_response_normalization',\n            defaults={\n                'alpha': 0.0001,\n                'beta': 0.75,\n                'bias': 1.0,\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'alpha': '!alpha',\n                'beta': '!beta',\n                'bias': '!bias',\n                'size': '![1, size] + [1] * (I[0].rank - 2)',\n            }\n        ),\n    'Concat':\n        Transform(\n            type='concat',\n            defaults={\n                'axis': 1,\n            },\n            inputs=['!I[:]'],\n            outputs='!O[0]',\n            attribs={\n                'axis': '!ensure_positive(axis, O[0].rank)',\n            }\n        ),\n    'Split':\n        Transform(\n            type='split',\n            defaults={\n                'axis': 0,\n                'split': '!as_const(I[1])',\n            },\n            inputs='!I[0]',\n            outputs=['!O[:]'],\n            attribs={\n                'axis': '!ensure_positive(axis, I[0].rank)',\n                'ratios': '!split',\n            }\n        ),\n    'Dropout':\n        Transform(\n            
type='copy',\n            inputs='!I[0]',\n            outputs='!O[0]',\n        ),\n    'Softmax':\n        Transform(\n            type='softmax',\n            defaults={\n                'axis': 1,\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '![ensure_positive(axis, I[0].rank)]',\n            }\n        ),\n    'Sum':\n        Transform(\n            type='add_n',\n            inputs=['!I[:]'],\n            outputs='!O[0]',\n        ),\n    'Where':\n        Transform(\n            type='select',\n            inputs=(\n                '!broadcast(I[0], O[0].rank)',\n                '!broadcast(I[1], O[0].rank)',\n                '!broadcast(I[2], O[0].rank)',\n            ),\n            outputs='!O[0]',\n        ),\n    'Clip':\n        Transform(\n            type='!\"max\" if I[2].name == \"\" else \"min\" if I[1].name == \"\" else \"clamp\"',\n            inputs=(\n                '!I[0]',\n                '!I[1] if I[1].name != \"\" else None',\n                '!I[2] if I[2].name != \"\" else None',\n            ),\n            outputs='!O[0]',\n        ),\n    'Pad':\n        Transform(\n            type='pad',\n            defaults={\n                'mode': \"constant\",\n                'value': 0.0,\n            },\n            using={\n                'constant_value': '!ensure_scalar(as_const(I[2])) if len(I) > 2 else value',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'padding': '!convert_pads(as_const(I[1]) if len(I) > 1 else pads)',\n                'value': '!constant_value',\n                'border': '!\"replicate\" if mode == \"edge\" else mode',\n            }\n        ),\n    'Tile':\n        Transform(\n            type='tile',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'repeats': '!as_const(I[1])',\n            }\n        ),\n    
'Expand':\n        Transform(\n            type='tile',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'repeats': '![O[0].shape[i] // I[0].shape[i] for i in range(I[0].rank)]',\n            }\n        ),\n    'Slice':\n        Transform(\n            type='slice',\n            using={\n                'axes': '!as_const(I[3]) if len(I) > 3 else list(range(I[0].rank))',\n                'starts': '![limit_range(x) for x in as_const(I[1])]',\n                'ends': '![limit_range(x) for x in as_const(I[2])]',\n                'steps': '!as_const(I[4]) if len(I) > 4 else None',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!ensure_positive(axes, I[0].rank)',\n                'begin': '!starts',\n                'end': '!ends',\n                'stride': '!steps',\n            }\n        ),\n    'LpNormalization':\n        Transform(\n            type='!\"l1_normalization\" if p == 1 else \"l2_normalization\"',\n            defaults={\n                'axis': -1,\n                'p': 2,\n            },\n            cond={\n                '!p == 1 or p == 2': 'p must be 1 or 2',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '![ensure_positive(axis, I[0].rank)]',\n            }\n        ),\n    'MeanVarianceNormalization':\n        Transform(\n            type='mean_variance_normalization',\n            defaults={\n                'axes': [0, 2, 3],\n            },\n            inputs=(\n                '!I[0]',\n                '!as_tensor(1.0, np.float32, inline=True)',\n                '!as_tensor(0.0, np.float32, inline=True)',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'axes': '!axes',\n                'epsilon': 0.0,\n            }\n        ),\n    'InstanceNormalization':\n        Transform(\n            
type='mean_variance_normalization',\n            defaults={\n                'epsilon': 1e-5,\n            },\n            inputs=(\n                '!I[0]',\n                '!unsqueeze_vector(I[1])',\n                '!unsqueeze_vector(I[2])',\n            ),\n            outputs='!O[0]',\n            attribs={\n                'axes': '!list(range(2, I[0].rank))',\n                'epsilon': '!epsilon',\n            }\n        ),\n    'Upsample':\n        Transform(\n            type='!\"nearest_upsample\" if mode == \"nearest\" else \"multilinear_upsample\"',\n            defaults={\n                'mode': \"nearest\",\n                'scales': '!as_const(I[1])',\n            },\n            cond={\n                '!scales[0] == 1 and scales[1] == 1': 'scales must be 1 in batch and channel dimensions',\n                '!all(int(s) == s for s in scales[2:])': 'scales must be integers in all dimensions',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'factor': '![int(s) for s in scales[2:]]',\n                'method': '!\"asymmetric\" if mode == \"linear\" else None',\n            }\n        ),\n    'Resize':\n        Transform(\n            type='!(\"nearest_downsample\" if downsample else \"nearest_upsample\") if mode == \"nearest\" else'\n                 ' \"multilinear_upsample\"',\n            defaults={\n                'mode': \"nearest\",\n                'coordinate_transformation_mode': \"half_pixel\",\n            },\n            using=OrderedDict([\n                ('scales', '!as_const(I[1 if len(I) == 2 else 2])'),\n                ('sizes', '!as_const(I[3]) if len(I) > 3 else'\n                          ' [int(I[0].shape[i] * scales[i]) for i in range(I[0].rank)]'),\n                ('upsample', '!is_integer_upsample(I[0].shape, sizes)'),\n                ('downsample', '!is_integer_downsample(I[0].shape, sizes)'),\n            ]),\n            cond={\n                '!mode 
== \"nearest\" or mode == \"linear\"':\n                    'mode must be one of \"nearest\", \"linear\"',\n                '!upsample or downsample if mode == \"nearest\" else True':\n                    \"nearest resize must be integer up-sample or down-sample\",\n                '!upsample if mode == \"linear\" else True':\n                    'linear resize must be integer up-sample',\n                '!sizes[0] == I[0].shape[0] and sizes[1] == I[0].shape[1]':\n                    'batch and channel dimensions must be preserved',\n                '!coordinate_transformation_mode == \"half_pixel\" or'\n                ' coordinate_transformation_mode == \"pytorch_half_pixel\" or'\n                ' coordinate_transformation_mode == \"asymmetric\" or'\n                ' coordinate_transformation_mode == \"align_corners\"':\n                    'coordinate_transformation_mode must be one of'\n                    ' \"half_pixel\", \"pytorch_half_pixel\", \"asymmetric\", \"align_corners\"',\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'factor': '!upsample_factor(I[0].shape[2:], sizes[2:]) if upsample else'\n                          ' downsample_factor(I[0].shape[2:], sizes[2:])',\n                'method': '!(\"aligned\" if coordinate_transformation_mode == \"align_corners\" else'\n                          ' \"symmetric\" if coordinate_transformation_mode == \"half_pixel\" or '\n                          ' coordinate_transformation_mode == \"pytorch_half_pixel\" else'\n                          ' \"asymmetric\") if mode == \"linear\" else None',\n            }\n        ),\n    'Constant':\n        Transform(\n            type='constant',\n            outputs='!O[0]',\n            attribs={\n                'value': '!ensure_list(from_numpy(value)) if len(value.shape) <= 1 and int(np.prod(value.shape)) <= 10 '\n                         'else value',\n                'shape': 
'!list(value.shape)',\n                'dtype': '!value.dtype',\n            }\n        ),\n    'Gather':\n        Transform(\n            type='gather',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!O[0]',\n            defaults={\n                'axis': 0,\n            },\n            attribs={\n                'axis': '!ensure_positive(axis, I[0].rank)',\n            },\n        ),\n    'Cast':\n        Transform(\n            using={\n                'same_type': '!nnef_dtype(O[0].dtype) == nnef_dtype(I[0].dtype)',\n            },\n            type='!\"copy\" if same_type else \"cast\"',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'dtype': '!O[0].dtype if not same_type else None',\n            },\n        ),\n    'LSTM':\n        Transform(\n            cond={\n                '!direction == \"forward\"': 'direction must be \"forward\"',\n                '!is_unused(O[0])': 'first output must not have consumer operations',\n                '!len(I[4].name) == 0': 'sequence_lens must not be defined',\n            },\n            defaults={\n                'layout': 0,\n            },\n            using={\n                'seq_axis': '!0 if layout == 0 else 1',\n                'dir_axis': '!0 if layout == 0 else 2',\n            },\n            type='lstm_loop',\n            inputs=(\n                '!I[0]',                                    # X\n                '!squeeze_input(I[1], axes=[0])',           # W\n                '!squeeze_input(I[2], axes=[0])',           # R\n                '!I[3]',                                    # B\n                '!squeeze_input(I[5], axes=[dir_axis])',    # h_0\n                '!squeeze_input(I[6], axes=[dir_axis])',    # c_0\n            ),\n            outputs=(\n                '!unsqueeze_output(O[1], axes=[dir_axis])',    # h_n\n                '!unsqueeze_output(O[2], axes=[dir_axis])',    # c_n\n            ),\n            attribs={\n      
          'steps': '!I[0].shape[seq_axis]',\n                'axis': '!seq_axis',\n            },\n        ),\n    'DepthToSpace':\n        Transform(\n            type=\"depth_to_space\",\n            defaults={\n                'mode': \"DCR\",\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'block_size': '!blocksize',\n                'blocks_first': '!mode == \"DCR\"',\n            },\n        ),\n    'SpaceToDepth':\n        Transform(\n            type=\"space_to_depth\",\n            defaults={\n                'mode': \"DCR\",\n            },\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'block_size': '!blocksize',\n                'blocks_first': '!mode == \"DCR\"',\n            },\n        ),\n})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/tf_to_nnef.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\nfrom .converter import ConverterToNNEF as _Converter, Transform, ConversionError\nfrom ..model.utils import generate_tensor_names_from_op_type\nfrom ..utils import types\nfrom collections import OrderedDict\nimport numpy as np\n\n\n_RELU6_FRAGMENT = \"\"\"\nfragment relu6( input: tensor<scalar> ) -> ( output: tensor<scalar> )\n{\n    output = clamp(input, 0.0, 6.0);\n}\n\"\"\"\n\n_INT_MAX = 2 ** 31 - 1\n\n\nclass Converter(_Converter):\n\n    _ConvOpTypes = ['Conv1D', 'Conv2D', 'Conv3D', 'Conv2DBackpropInput', 'Conv1DBackpropInput', 'Conv3DBackpropInput']\n    _DepthwiseConvOpTypes = ['DepthwiseConv2dNative', 'DepthwiseConv2dNativeBackpropInput']\n\n    @staticmethod\n    def defined_operations():\n        return {\n            'relu6': _RELU6_FRAGMENT,\n        }\n\n    def __init__(self, io_transpose=False, custom_transforms=None, custom_functions=None,\n                 mirror_unsupported=False, keep_io_names=False):\n        _Converter.__init__(self, transforms=self.merge_transforms(_Transforms, custom_transforms),\n                            functions=custom_functions, mirror_unsupported=mirror_unsupported)\n        self._io_transpose = io_transpose\n        self._keep_io_names = keep_io_names\n\n    def __call__(self, graph):\n        graph = _Converter.__call__(self, graph)\n     
   self.remove_unused_constants(graph)\n        self.inline_scalar_constants(graph)\n        self.convert_constants_to_variables(graph)\n        self._fix_output_transposes(graph)\n        self._ensure_valid_ids(graph)\n        generate_tensor_names_from_op_type(graph, keep_io_names=self._keep_io_names)\n        return graph\n\n    def _global_attribs(self):\n        return {'_lite_': False}\n\n    def _fix_output_transposes(self, graph):\n        outputs = [self.transpose_input(tensor) if self.needs_io_transpose(tensor) else\n                   self.undo_transpose(tensor) for tensor in graph.outputs]\n\n        if self._keep_io_names:\n            for i in range(len(outputs)):\n                if outputs[i] is not graph.outputs[i]:\n                    outputs[i].name = graph.outputs[i].name\n\n        graph.outputs = outputs\n\n    def _is_conv_filter(self, tensor):\n        tensor = self._tensor_map.get(tensor)\n        return tensor and len(tensor.consumers) > 0 and \\\n               all(op.type in Converter._ConvOpTypes and op.inputs[1] is tensor for op in tensor.consumers)\n\n    def _is_depthwise_conv_filter(self, tensor):\n        tensor = self._tensor_map.get(tensor)\n        return tensor and len(tensor.consumers) > 0 and \\\n               all(op.type in Converter._DepthwiseConvOpTypes and op.inputs[1] is tensor for op in tensor.consumers)\n\n    def _is_constant(self, tensor):\n        if tensor.producer:\n            return tensor.producer.type == 'Const'\n        else:\n            return tensor.data is not None\n\n    def _read_constant(self, tensor, type=None):\n        if tensor.producer is None:\n            return types.from_numpy(tensor.data, type=type)\n        elif tensor.producer.type == 'Const':\n            value = tensor.producer.attribs['value']\n            return types.from_numpy(value, type=type) if isinstance(value, np.ndarray) else types.cast(value, type=type)\n        else:\n            raise ConversionError('trying to evaluate 
non-constant tensor')\n\n    def needs_io_transpose(self, tensor):\n        if tensor.rank <= 2:\n            return False\n        if isinstance(self._io_transpose, bool):\n            return self._io_transpose\n        else:\n            return tensor.name in self._io_transpose\n\n    def is_nxc(self, format):\n        return format[0] == 'N' and format[-1] == 'C' and len(format) > 2\n\n    def is_cxn(self, format):\n        return format[0] == 'C' and format[-1] == 'N' and len(format) > 2\n\n    def is_xcn(self, format):\n        return format[-2] == 'C' and format[-1] == 'N' and len(format) > 2\n\n    def transpose_input(self, tensor, format='NXC'):\n        if self.is_nxc(format):\n            return self._pre_transpose(tensor, self.nxc_to_ncx_perm(tensor.rank)) \\\n                if not self.transposing(tensor) and tensor.rank > 2 else tensor\n        else:\n            assert not self.transposing(tensor)\n            return tensor\n\n    def transpose_output(self, tensor, format='NXC'):\n        if self.is_nxc(format):\n            self._transposes[tensor] = self.nxc_to_ncx(tensor.shape)\n        return tensor\n\n    def transpose_filter(self, tensor, format='XCN'):\n        if self.is_xcn(format):\n            perm = self.xcn_to_ncx_perm(tensor.rank)\n        elif self.is_nxc(format):\n            perm = self.nxc_to_ncx_perm(tensor.rank)\n        elif self.is_cxn(format):\n            perm = self.cxn_to_ncx_perm(tensor.rank)\n        else:\n            assert False\n\n        return self._pre_transpose(tensor, perm)\n\n    def transpose_depthwise_filter(self, tensor, format='XCN'):\n        if self.is_xcn(format):\n            perm = self.xcn_to_ncx_perm(tensor.rank)\n        elif self.is_nxc(format):\n            perm = self.nxc_to_ncx_perm(tensor.rank)\n        elif self.is_cxn(format):\n            perm = self.cxn_to_ncx_perm(tensor.rank)\n        else:\n            assert False\n\n        shape = tensor.shape[:-2] + (1, -1)\n        return 
self._pre_transpose(self._reshape(tensor, shape), perm)\n\n    def transpose_like(self, tensor, ref):\n        if ref is not None and self.transposing(ref):\n            self.transpose_output(tensor)\n        return tensor\n\n    def undo_transpose(self, tensor):\n        perm = self.ncx_to_nxc_perm(tensor.rank)\n        if perm == list(range(tensor.rank)):\n            return tensor\n        return self._pre_transpose(tensor, perm) if self.transposing(tensor) else tensor\n\n    def convert_size(self, value, format):\n        return value[1:-1] if self.is_nxc(format) else value[2:]\n\n    def convert_padding(self, padding, rank, explicit_paddings=None, format=None):\n        padding = padding.upper()\n        if padding == 'SAME':\n            return []\n        elif padding == 'VALID':\n            return [(0, 0)] * rank\n        elif padding == 'EXPLICIT':\n            assert explicit_paddings is not None and format is not None\n            explicit_paddings = list(zip(explicit_paddings[0::2], explicit_paddings[1::2]))\n            return explicit_paddings[1:-1] if self.is_nxc(format) else explicit_paddings[2:]\n        else:\n            assert False, \"unknown padding type '{}'\".format(padding)\n\n    def transpose_list_like(self, items, ref):\n        return self.nxc_to_ncx(items) if ref is not None and self.transposing(ref) else items\n\n    def transpose_axis_like(self, axis, ref, rank=None):\n        return self.axis_nxc_to_ncx(axis, rank or ref.rank) if ref is not None and self.transposing(ref) else \\\n            self.ensure_positive(axis, rank or ref.rank)\n\n    def squeeze_input(self, tensor, axes, keep_dims=False):\n        return self._pre_squeeze(tensor, axes=axes) if not keep_dims else tensor\n\n    def squeeze_output(self, tensor, axes, keep_dims=False):\n        return self._post_squeeze(tensor, axes=axes) if not keep_dims else tensor\n\n    def unsqueeze_input(self, tensor, axes, keep_dims=False):\n        return self._pre_unsqueeze(tensor, 
axes=axes) if not keep_dims else tensor\n\n    def unsqueeze_output(self, tensor, axes, keep_dims=False):\n        return self._post_unsqueeze(tensor, axes=axes) if not keep_dims else tensor\n\n    def unsqueeze_vector(self, tensor):\n        original = self._tensor_map[tensor]\n        if self._is_constant(original) and len(original.consumers) == 1:\n            self._transform_constant(tensor, lambda data: np.expand_dims(data, 0))\n            return tensor\n        else:\n            return self.unsqueeze_input(tensor, axes=[0])\n\n    def convert_binarg(self, tensor, other):\n        if tensor.rank == 0:\n            return tensor\n        needs_transpose = self.transposing(other) and not self.transposing(tensor)\n        if other.rank > tensor.rank:\n            if tensor.rank == 1 and needs_transpose:\n                return self.unsqueeze_vector(tensor)\n            tensor = self._pre_unsqueeze(tensor, axes=list(range(other.rank - tensor.rank)))\n        return self.transpose_input(tensor) if needs_transpose else tensor\n\n    def ensure_list(self, value):\n        return value if isinstance(value, list) else list(value) if isinstance(value, tuple) else [value]\n\n    def is_bit_set(self, mask, idx):\n        return mask & (1 << idx) != 0\n\n    def bit_count(self, mask):\n        count = 0\n        for i in range(mask.bit_length()):\n            if self.is_bit_set(mask, i):\n                count += 1\n        return count\n\n    def replace_item_with(self, items, index, count, value):\n        return items[:index] + [value] * count + items[index+1:]\n\n    def replace_bit_with(self, mask, index, count, value):\n        value_bits = (((1 << count) - 1) << index) if value else 0\n        low_bits = mask & ((1 << index) - 1)\n        high_bits = (mask & ~((1 << (index + 1)) - 1)) << (count - 1)\n        return low_bits | value_bits | high_bits\n\n    def beg_index(self, stride):\n        return _INT_MAX if stride < 0 else 0\n\n    def end_index(self, 
stride):\n        return _INT_MAX if stride > 0 else -_INT_MAX\n\n\n_Transforms = Converter.unpack_transforms({\n    'Placeholder':\n        Transform(\n            type='external',\n            using={'needs_transpose': '!needs_io_transpose(O[0])'},\n            outputs='!transpose_output(O[0]) if needs_transpose else O[0]',\n            attribs={\n                'shape': '!list(nxc_to_ncx(shape) if needs_transpose else shape)',\n                'dtype': '!dtype',\n            }\n        ),\n    'Const':\n        Transform(\n            type='constant',\n            outputs='!O[0]',\n            attribs={\n                'shape': '!list(value.shape)',\n                'dtype': '!dtype',\n                'value': '!value',\n            }\n        ),\n    ('Conv2D', 'Conv3D', 'DepthwiseConv2dNative'):\n        Transform(\n            type='conv',\n            using={\n                'depthwise': '!_type_ == \"DepthwiseConv2dNative\"',\n            },\n            defaults={\n                'explicit_paddings': [],\n            },\n            inputs=(\n                '!transpose_input(I[0], data_format)',\n                '!transpose_filter(I[1]) if not depthwise else transpose_depthwise_filter(I[1])',\n            ),\n            outputs=(\n                '!transpose_output(O[0], data_format)',\n            ),\n            attribs={\n                'stride': '!convert_size(strides, data_format)',\n                'dilation': '!convert_size(dilations, data_format)',\n                'padding': '!convert_padding(padding, I[0].rank - 2, explicit_paddings, data_format)',\n                'groups': '!1 if not depthwise else 0',\n            }\n        ),\n    ('Conv2DBackpropInput', 'Conv3DBackpropInput', 'DepthwiseConv2dNativeBackpropInput'):\n        Transform(\n            type='deconv',\n            using={\n                'depthwise': '!_type_ == \"DepthwiseConv2dNativeBackpropInput\"',\n            },\n            defaults={\n                
'explicit_paddings': [],\n            },\n            inputs=(\n                '!transpose_input(I[2], data_format)',\n                '!transpose_filter(I[1]) if not depthwise else transpose_depthwise_filter(I[1])',\n            ),\n            outputs=(\n                '!transpose_output(O[0], data_format)',\n            ),\n            attribs={\n                'stride': '!convert_size(strides, data_format)',\n                'dilation': '!convert_size(dilations, data_format)',\n                'padding': '!convert_padding(padding, I[2].rank - 2, explicit_paddings, data_format)',\n                'output_shape': '!nxc_to_ncx(as_const(I[0])) if is_nxc(data_format) else as_const(I[0])',\n                'groups': '!1 if not depthwise else 0',\n            }\n        ),\n    ('MaxPool', 'AvgPool', 'MaxPoolWithArgmax'):\n        Transform(\n            type=('max_pool', 'avg_pool', 'max_pool_with_index'),\n            defaults={\n                'explicit_paddings': [],\n                'data_format': 'NHWC',\n            },\n            inputs=(\n                '!transpose_input(I[0], data_format)',\n            ),\n            outputs=(\n                '!transpose_output(O[0], data_format)',\n                '!transpose_output(O[1], data_format) if len(O) > 1 else None',\n            ),\n            attribs={\n                'size': '!nxc_to_ncx(ksize) if is_nxc(data_format) else ksize',\n                'stride': '!nxc_to_ncx(strides) if is_nxc(data_format) else strides',\n                'padding': '!convert_padding(padding, I[0].rank, explicit_paddings, data_format)',\n                'border': '!\"ignore\"',\n            }\n        ),\n    'Concat':\n        Transform(\n            type='concat',\n            inputs=['!I[1:]'],\n            outputs='!transpose_like(O[0], I[1])',\n            attribs={\n                'axis': '!transpose_axis_like(as_const(I[0]), I[1], O[0].rank)',\n            }\n        ),\n    'ConcatV2':\n        Transform(\n         
   type='concat',\n            inputs=['!I[:-1]'],\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axis': '!transpose_axis_like(as_const(I[-1]), I[0], O[0].rank)',\n            }\n        ),\n    'Split':\n        Transform(\n            type='split',\n            inputs='!I[1]',\n            outputs=['![transpose_like(O[i], I[1]) for i in range(len(O))]'],\n            attribs={\n                'axis': '!transpose_axis_like(as_const(I[0]), I[1])',\n                'ratios': '![1] * (num_split if not _lite_ else num_splits)',\n            }\n        ),\n    'SplitV':\n        Transform(\n            type='split',\n            inputs='!I[0]',\n            outputs=['![transpose_like(O[i], I[0]) for i in range(len(O))]'],\n            attribs={\n                'axis': '!transpose_axis_like(as_const(I[2]), I[0])',\n                'ratios': '!as_const(I[1])',\n            }\n        ),\n    'Reshape':\n        Transform(\n            type='reshape',\n            inputs='!undo_transpose(I[0])',\n            outputs='!O[0]',\n            attribs={\n                'shape': '!flexible_batch(as_const(I[1]), I[0].shape[0])',\n                'dtype': '!I[0].dtype',\n            }\n        ),\n    'Transpose':\n        Transform(\n            type='transpose',\n            inputs='!I[0]',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!transpose_axis_like(as_const(I[1]), I[0])',\n                'dtype': '!I[0].dtype',\n            }\n        ),\n    'Squeeze':\n        Transform(\n            type='squeeze',\n            inputs='!undo_transpose(I[0])',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!ensure_list(ensure_positive(squeeze_dims, I[0].rank)) if len(squeeze_dims) != 0 else'\n                        ' [i for i, x in enumerate(I[0].shape) if x == 1]',\n                'dtype': '!I[0].dtype',\n            }\n        ),\n    'ExpandDims':\n        
Transform(\n            type='unsqueeze',\n            inputs='!undo_transpose(I[0])',\n            outputs='!O[0]',\n            attribs={\n                'axes': '!ensure_list(ensure_positive(as_const(I[1]), O[0].rank))',\n                'dtype': '!I[0].dtype',\n            }\n        ),\n    'Pack':\n        Transform(\n            type='stack',\n            inputs=['![undo_transpose(t) for t in I]'],\n            outputs='!O[0]',\n            attribs={\n                'axis': '!ensure_positive(axis, O[0].rank)',\n            }\n        ),\n    'Unpack':\n        Transform(\n            type='unstack',\n            inputs='!undo_transpose(I[0])',\n            outputs=['!O[:]'],\n            attribs={\n                'axis': '!ensure_positive(axis, I[0].rank)',\n            }\n        ),\n    ('Min', 'Max', 'Mean', 'Sum', 'Any', 'All'):\n        Transform(\n            type=('min_reduce', 'max_reduce', 'mean_reduce', 'sum_reduce', 'any_reduce', 'all_reduce'),\n            using={\n                'axes': '!ensure_list(transpose_axis_like(as_const(I[1]), I[0]))'\n            },\n            inputs='!I[0]',\n            outputs=(\n                '!transpose_like(O[0], I[0]) if keep_dims else squeeze_output(O[0], axes)',\n            ),\n            attribs={\n                'axes': '!axes',\n            }\n        ),\n    ('Add', 'AddV2', 'Sub', 'Mul', 'RealDiv', 'Pow', 'LogicalAnd', 'LogicalOr',\n     'Less', 'Greater', 'LessEqual', 'GreaterEqual', 'Equal', 'NotEqual', 'Minimum', 'Maximum'):\n        Transform(\n            type=('add', 'add', 'sub', 'mul', 'div', 'pow', 'and', 'or',\n                  'lt', 'gt', 'le', 'ge', 'eq', 'ne', 'min', 'max'),\n            inputs=(\n                '!convert_binarg(I[0], I[1])',\n                '!convert_binarg(I[1], I[0])',\n            ),\n            outputs='!transpose_output(O[0]) if transposing(I[0]) or transposing(I[1]) else O[0]',\n        ),\n    ('Identity', 'Relu', 'Relu6', 'Elu', 'Selu', 'Gelu', 'Silu', 
'Swish', 'Sigmoid', 'Softplus', 'Exp', 'Log',\n     'Sin', 'Cos', 'Tan', 'Asin', 'Acos', 'Atan', 'Sinh', 'Cosh', 'Tanh', 'Asinh', 'Acosh', 'Atanh',\n     'Neg', 'Reciprocal', 'Sign', 'Abs', 'Floor', 'Ceil', 'Round', 'Square', 'Sqrt', 'Rsqrt', 'LogicalNot'):\n        Transform(\n            type=('copy', 'relu', 'relu6', 'elu', 'selu', 'gelu', 'silu', 'silu', 'sigmoid', 'softplus', 'exp', 'log',\n                  'sin', 'cos', 'tan', 'asin', 'acos', 'atan', 'sinh', 'cosh', 'tanh', 'asinh', 'acosh', 'atanh',\n                  'neg', 'rcp', 'sign', 'abs', 'floor', 'ceil', 'round', 'sqr', 'sqrt', 'rsqrt', 'not'),\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n        ),\n    'LeakyRelu':\n        Transform(\n            type='leaky_relu',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'alpha': '!alpha',\n            }\n        ),\n    ('FusedBatchNorm', 'FusedBatchNormV3'):\n        Transform(\n            type='batch_normalization',\n            inputs=(\n                '!transpose_input(I[0], data_format)',\n                '!unsqueeze_vector(I[3])',\n                '!unsqueeze_vector(I[4])',\n                '!unsqueeze_vector(I[2])',\n                '!unsqueeze_vector(I[1])',\n            ),\n            outputs=(\n                '!transpose_output(O[0], data_format)',\n            ),\n            attribs={\n                'epsilon': '!epsilon',\n            }\n        ),\n    'BiasAdd':\n        Transform(\n            type='add',\n            inputs=(\n                '!transpose_input(I[0], data_format)',\n                '!unsqueeze_vector(I[1])',\n            ),\n            outputs='!transpose_output(O[0], data_format)',\n        ),\n    'Softmax':\n        Transform(\n            type='softmax',\n            cond={\n                '!beta == 1 if _lite_ else True': 'beta must be 1',\n            },\n            
inputs='!transpose_input(I[0])',\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'axes': [1],\n            }\n        ),\n    'MatMul':\n        Transform(\n            type='matmul',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!O[0]',\n            attribs={\n                'transposeA': '!transpose_a',\n                'transposeB': '!transpose_b',\n            },\n        ),\n    'ClipByValue':\n        Transform(\n            type='clamp',\n            inputs=('!I[0]', '!I[1]', '!I[2]'),\n            outputs='!transpose_like(O[0], I[0])',\n        ),\n    ('Pad', 'MirrorPad'):\n        Transform(\n            type='pad',\n            cond={\n                '!mode in [\"CONSTANT\", \"REFLECT\", \"SYMMETRIC\"]':\n                    'mode must be one of \"CONSTANT\", \"REFLECT\" or \"SYMMETRIC\"',\n            },\n            defaults={\n                'mode': 'CONSTANT',\n            },\n            using={\n                'paddings': '!transpose_list_like(as_const(I[1]), ref=I[0])',\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'padding': '![tuple(item) for item in paddings]',\n                'border': '!\"reflect\" if mode == \"REFLECT\" else \"reflect-even\" if mode == \"SYMMETRIC\" else \"constant\"',\n            }\n        ),\n    'Slice':\n        Transform(\n            type='slice',\n            using={\n                'beg': '!as_const(I[1])',\n                'end': '![0 if s == -1 else b + s for b, s in zip(as_const(I[1]), as_const(I[2]))]',\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axes': '!list(range(I[0].rank))',\n                'begin': '!transpose_list_like(beg, ref=I[0])',\n                'end': '!transpose_list_like(end, ref=I[0])',\n            }\n        ),\n    'StridedSlice':\n        
Transform(\n            type='slice',\n            using=OrderedDict([\n                ('ref', '!I[0] if new_axis_mask == 0 else None'),\n                ('rank', '!I[0].rank + bit_count(new_axis_mask)'),\n                ('stride', '!as_const(I[3])'),\n                ('ellipsis_index', '!int(math.log2(ellipsis_mask)) if ellipsis_mask != 0 else None'),\n                ('ellipsis_count', '!rank - (len(stride) - 1) if ellipsis_mask != 0 else None'),\n                ('beg', '!replace_item_with(as_const(I[1]), ellipsis_index, ellipsis_count, 0) '\n                        'if ellipsis_index is not None else as_const(I[1])'),\n                ('end', '!replace_item_with(as_const(I[2]), ellipsis_index, ellipsis_count, 0) '\n                        'if ellipsis_index is not None else as_const(I[2])'),\n                ('begin_mask', '!replace_bit_with(begin_mask, ellipsis_index, ellipsis_count, 1) '\n                               'if ellipsis_index is not None else begin_mask'),\n                ('end_mask', '!replace_bit_with(end_mask, ellipsis_index, ellipsis_count, 1) '\n                             'if ellipsis_index is not None else end_mask'),\n                ('new_axis_mask', '!replace_bit_with(new_axis_mask, ellipsis_index, ellipsis_count, 0) '\n                                  'if ellipsis_index is not None else new_axis_mask'),\n                ('shrink_axis_mask', '!replace_bit_with(shrink_axis_mask, ellipsis_index, ellipsis_count, 0) '\n                                     'if ellipsis_index is not None else shrink_axis_mask'),\n                ('masked_beg', '![beg_index(s) if is_bit_set(begin_mask,i) else b '\n                               'for i, (b, s) in enumerate(zip(beg,stride))]'),\n                ('masked_end', '![end_index(s) if is_bit_set(end_mask,i) else b + 1 '\n                               'if is_bit_set(shrink_axis_mask,i) else e '\n                               'for i, (b, e, s) in enumerate(zip(beg,end,stride))]'),\n                
('axes', '!transpose_axis_like([i for i in range(rank) '\n                         'if not (is_bit_set(begin_mask,i) and is_bit_set(end_mask,i)) '\n                         'and not is_bit_set(new_axis_mask,i)], ref, rank)'),\n                ('new_axes', '!transpose_axis_like([i for i in range(rank) '\n                             'if is_bit_set(new_axis_mask,i)], ref, rank)'),\n                ('del_axes', '!transpose_axis_like([i for i in range(rank) '\n                             'if is_bit_set(shrink_axis_mask,i)], ref, rank)'),\n            ]),\n            inputs='!unsqueeze_input(undo_transpose(I[0]), new_axes) if len(new_axes) else I[0]',\n            outputs='!transpose_like(squeeze_output(O[0], del_axes) if len(del_axes) else O[0], ref)',\n            attribs={\n                'axes': '![i for i in range(rank) if i in axes]',\n                'begin': '![b for i, b in enumerate(transpose_list_like(masked_beg, ref)) if i in axes]',\n                'end': '![e for i, e in enumerate(transpose_list_like(masked_end, ref)) if i in axes]',\n                'stride': '![s for i, s in enumerate(transpose_list_like(stride, ref)) if i in axes]',\n            }\n        ),\n    ('ArgMin', 'ArgMax'):\n        Transform(\n            type=('argmin_reduce', 'argmax_reduce'),\n            using={\n                'axis': '!transpose_axis_like(as_const(I[1]), ref=I[0])'\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], ref=I[0]) if _lite_ else squeeze_output(O[0], [axis])',\n            attribs={\n                'axes': '!ensure_list(axis)',\n            }\n        ),\n    'Select':\n        Transform(\n            type='select',\n            inputs=(\n                '!I[0]',\n                '!convert_binarg(I[1], I[0])',\n                '!convert_binarg(I[2], I[0])',\n            ),\n            outputs='!transpose_like(O[0], ref=I[0])',\n        ),\n    'Tile':\n        Transform(\n            type='tile',\n            
inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'repeats': '!transpose_list_like(as_const(I[1]), I[0])',\n            }\n        ),\n    'ResizeNearestNeighbor':\n        Transform(\n            type='!\"nearest_upsample\" if upsample else \"nearest_downsample\"',\n            using=OrderedDict([\n                ('old_size', '!I[0].shape[1:-1]'),\n                ('new_size', '!as_const(I[1])'),\n                ('upsample', '!is_integer_upsample(old_size, new_size)'),\n                ('downsample', '!is_integer_downsample(old_size, new_size)'),\n            ]),\n            cond={\n                '!upsample or downsample': 'nearest resize must be integer up-sample or down-sample',\n                '!not align_corners': 'align_corners is not supported',\n            },\n            inputs='!transpose_input(I[0])',\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'factor': '!upsample_factor(old_size, new_size) if upsample else downsample_factor(old_size, new_size)',\n            }\n        ),\n    'ResizeArea':\n        Transform(\n            type='area_downsample',\n            cond={\n                '!is_integer_downsample(I[0].shape[1:-1], O[0].shape[1:-1])': 'area resize must be integer down-sample',\n                '!not align_corners': 'align_corners is not supported',\n            },\n            using={\n                'size': '!I[0].shape[1:-1]'\n            },\n            inputs='!transpose_input(I[0])',\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'factor': '!downsample_factor(size, as_const(I[1]))',\n            }\n        ),\n    'ResizeBilinear':\n        Transform(\n            type='multilinear_upsample',\n            cond={\n                '!is_integer_upsample(I[0].shape[1:-1], O[0].shape[1:-1])': 'bilinear resize must be integer up-sample',\n            },\n            using={\n                
'size': '!I[0].shape[1:-1]'\n            },\n            inputs='!transpose_input(I[0])',\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'factor': '!upsample_factor(size, as_const(I[1]))',\n                'method': '!\"aligned\" if align_corners else \"symmetric\" if half_pixel_centers else \"asymmetric\"',\n            }\n        ),\n    'LRN':\n        Transform(\n            type='local_response_normalization',\n            using={\n                'size': '!(radius if _lite_ else depth_radius) * 2 + 1'\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'size': '![1, size] + [1] * (I[0].rank - 2)',\n                'alpha': '!alpha * size',\n                'beta': '!beta',\n                'bias': '!bias',\n            }\n        ),\n    'Cast':\n        Transform(\n            using={\n                'same_type': '!nnef_dtype(O[0].dtype) == nnef_dtype(I[0].dtype)',\n            },\n            type='!\"copy\" if same_type else \"cast\"',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'dtype': '!O[0].dtype if not same_type else None',\n            },\n        ),\n    ('Gather', 'GatherV2'):\n        Transform(\n            type='gather',\n            using={\n                'axis': '!transpose_axis_like(as_const(I[2]), ref=I[0])'\n            },\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axis': '!axis',\n            },\n        ),\n    'AddN':\n        Transform(\n            type='add_n',\n            inputs=['!I[:]'],\n            outputs='!transpose_like(O[0], I[0])'\n        ),\n})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/conversion/tflite_to_nnef.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\nfrom .converter import Converter as _Converter, Transform, ConversionError\nfrom .tf_to_nnef import Converter as _TFConverter, _Transforms as _TFTransforms, _RELU6_FRAGMENT\nfrom ..model import Tensor, Operation\nfrom ..utils import types\nfrom ..io.tf.lite import CustomOptionsKey\nimport numpy as np\nimport copy\n\n\n_DETECTION_POSTPROCESS_FRAGMENT = \"\"\"\nfragment TFLite_Detection_PostProcess( \n    boxes: tensor<scalar>, \n    scores: tensor<scalar>, \n    anchors: tensor<scalar>, \n    detections_per_class: integer,\n    max_classes_per_detection: integer,\n    max_detections: integer, \n    nms_iou_threshold: scalar, \n    nms_score_threshold: scalar, \n    num_classes: integer, \n    use_regular_nms: logical, \n    h_scale: scalar,\n    w_scale: scalar, \n    x_scale: scalar, \n    y_scale: scalar ) \n-> ( \n    detection_boxes: tensor<scalar>, \n    detection_classes: tensor<scalar>, \n    detection_scores: tensor<scalar>, \n    num_detections: tensor<scalar> );\n\"\"\"\n\n\nclass Converter(_TFConverter):\n\n    _ConvOpTypes = ['CONV_1D', 'CONV_2D', 'CONV_3D', 'TRANSPOSE_CONV', 'DEPTHWISE_CONV_2D']\n\n    _ActivationOpTypes = {\n        'ELU': 'elu',\n        'RELU': 'relu',\n        'RELU6': 'relu6',\n        'LOGISTIC': 'sigmoid',\n        'TANH': 'tanh',\n    }\n\n    
@staticmethod\n    def defined_operations():\n        return {\n            'relu6': _RELU6_FRAGMENT,\n            'TFLite_Detection_PostProcess': _DETECTION_POSTPROCESS_FRAGMENT,\n        }\n\n    def __init__(self, io_transpose=False, custom_transforms=None, custom_functions=None,\n                 mirror_unsupported=False, keep_io_names=False):\n        _Converter.__init__(self, transforms=self.merge_transforms(_Transforms, custom_transforms),\n                            functions=custom_functions, mirror_unsupported=mirror_unsupported)\n        self._io_transpose = io_transpose\n        self._keep_io_names = keep_io_names\n\n    def __call__(self, graph):\n        graph = _TFConverter.__call__(self, graph)\n        self._fix_custom_options(graph)\n        return graph\n\n    def _global_attribs(self):\n        return {'_lite_': True}\n\n    def _prepare(self, graph):\n        self._fix_quantization_attribs(graph)\n        self._fix_quantized_dtypes(graph)\n        self._insert_externals_and_constants(graph)\n        self._transpose_externals(graph)\n\n    def _is_constant(self, tensor):\n        return tensor.producer is None and tensor.data is not None\n\n    def _read_constant(self, tensor, type=None):\n        if tensor.producer is None:\n            return types.from_numpy(tensor.data, type=type)\n        else:\n            raise ConversionError('trying to evaluate non-constant tensor')\n\n    def _transpose_externals(self, graph):\n        for op in graph.operations:\n            if op.type == 'external':\n                if self.needs_io_transpose(op.output):\n                    shape = self.nxc_to_ncx(op.output.shape)\n                    op.attribs['shape'] = list(shape)\n                    self._transposes[op.output] = shape\n\n    @staticmethod\n    def _is_zero(value):\n        return np.all(value == 0) if isinstance(value, np.ndarray) else value == 0\n\n    def _fix_quantized_dtypes(self, graph):\n        for tensor in graph.tensors:\n            
if tensor.quant:\n                scale = tensor.quant.get('scale')\n                if scale is not None and not self._is_zero(scale):\n                    tensor.dtype = np.float32\n                else:\n                    tensor.quant = None\n\n    def _fix_quantization_attribs(self, graph):\n        dtype_bits = {\n            np.int8: 8,\n            np.uint8: 8,\n            np.int16: 16,\n            np.uint16: 16,\n            np.int32: 32,\n            np.uint32: 32,\n            np.int64: 64,\n            np.uint64: 64,\n        }\n\n        for tensor in graph.tensors:\n            if tensor.quant:\n                scale = tensor.quant.get('scale')\n                zero_point = tensor.quant.get('zero_point')\n                if scale is not None and not self._is_zero(scale):\n                    if 'min' in tensor.quant:\n                        del tensor.quant['min']\n                    if 'max' in tensor.quant:\n                        del tensor.quant['max']\n                    assert tensor.dtype == np.uint8 or tensor.dtype == np.int8 or \\\n                           tensor.dtype == np.uint16 or tensor.dtype == np.int16 or \\\n                           tensor.dtype == np.uint32 or tensor.dtype == np.int32, \\\n                        \"unknown quantized dtype '{}'\".format(tensor.dtype)\n                    tensor.quant['op-name'] = 'zero_point_linear_quantize'\n                    tensor.quant['bits'] = 32 if self._is_conv_bias(tensor) else dtype_bits[tensor.dtype]\n                    tensor.quant['signed'] = tensor.dtype == np.int8 or tensor.dtype == np.int16 or tensor.dtype == np.int32\n                    tensor.quant['symmetric'] = self._is_conv_filter(tensor)\n\n                    if tensor.data is None:\n                        if isinstance(zero_point, np.ndarray) and len(zero_point.shape) == 1:\n                            tensor.quant['zero_point'] = np.expand_dims(zero_point, axis=0)\n                        if isinstance(scale, 
np.ndarray) and len(scale.shape) == 1:\n                            tensor.quant['scale'] = np.expand_dims(scale, axis=0)\n\n    def _fix_custom_options(self, graph):\n        for op in graph.operations:\n            if op.custom:\n                options = op.attribs.get(CustomOptionsKey)\n                if options is not None:\n                    op.attribs[CustomOptionsKey] = options.hex()\n\n    def _is_conv_filter(self, tensor):\n        tensor = self._tensor_map.get(tensor)\n        return tensor and len(tensor.consumers) > 0 and \\\n               all(op.type in Converter._ConvOpTypes and op.inputs[1] is tensor for op in tensor.consumers)\n\n    def _is_conv_bias(self, tensor):\n        tensor = self._tensor_map.get(tensor)\n        return tensor and len(tensor.consumers) > 0 and \\\n               all(op.type in Converter._ConvOpTypes and op.inputs[2] is tensor for op in tensor.consumers)\n\n    def activation(self, output, func):\n        if func is None or func == 'NONE':\n            return output\n\n        if func not in self._ActivationOpTypes:\n            raise ConversionError(\"Unsupported fused activation function '{}'\".format(func))\n\n        input = Tensor(output.graph, dtype=output.dtype, shape=self._working_shape(output), quant=copy.deepcopy(output.quant))\n        Operation(output.graph, type=self._ActivationOpTypes[func], inputs=input, outputs=output)\n        return input\n\n    def flat_list(self, array):\n        return [item for items in array for item in items] if len(array) and isinstance(array[0], (list, tuple)) else array\n\n    def flatten(self, input):\n        shape = (input.shape[0], int(np.prod(input.shape[1:])))\n        output = Tensor(input.graph, dtype=input.dtype, shape=shape, quant=copy.deepcopy(input.quant))\n        self._reshape_operation(input, output, shape)\n        return output\n\n    def same_shape(self, input, output):\n        return self._tensor_map[input].shape == 
self._tensor_map[output].shape\n\n\n_Transforms = Converter.unpack_transforms({\n    ('CONV_1D', 'CONV_2D', 'CONV_3D', 'DEPTHWISE_CONV_2D'):\n        Transform(\n            type='conv',\n            using={\n                'depthwise': '!_type_ == \"DEPTHWISE_CONV_2D\"',\n            },\n            inputs=(\n                '!transpose_input(I[0])',\n                '!transpose_filter(I[1], format=\"NXC\" if not depthwise else \"CXN\")',\n                '!unsqueeze_vector(I[2])',\n            ),\n            outputs='!activation(transpose_output(O[0]), fused_activation_function)',\n            attribs={\n                'stride': '![stride_h, stride_w]',\n                'dilation': '![dilation_h_factor, dilation_w_factor]',\n                'padding': '!convert_padding(padding, I[0].rank - 2)',\n                'groups': '!1 if not depthwise else 0',\n            }\n        ),\n    'TRANSPOSE_CONV':\n        Transform(\n            type='deconv',\n            using={\n                'depthwise': False,\n            },\n            inputs=(\n                '!transpose_input(I[2])',\n                '!transpose_filter(I[1], format=\"CXN\" if not depthwise else \"NXC\")',\n            ),\n            outputs='!transpose_output(O[0])',\n            attribs={\n                'stride': '![stride_h, stride_w]',\n                'padding': '!convert_padding(padding, I[0].rank - 2)',\n                'output_shape': '!nxc_to_ncx(as_const(I[0]))',\n                'groups': '!1 if not depthwise else 0',\n            }\n        ),\n    ('MAX_POOL_2D', 'AVERAGE_POOL_2D'):\n        Transform(\n            type=('max_pool', 'avg_pool'),\n            inputs=(\n                '!transpose_input(I[0])',\n            ),\n            outputs=(\n                '!transpose_output(O[0])',\n            ),\n            attribs={\n                'size': '![1, 1, filter_height, filter_width]',\n                'stride': '![1, 1, stride_h, stride_w]',\n                'padding': 
'!convert_padding(padding, I[0].rank)',\n                'border': '!\"ignore\"',\n            }\n        ),\n    'RESHAPE':\n        Transform(\n            type='reshape',\n            inputs='!undo_transpose(I[0])',\n            outputs='!O[0]',\n            attribs={\n                'shape': '!flexible_batch(flat_list(as_const(I[1])) if len(I) > 1 else new_shape, I[0].shape[0])',\n                'dtype': '!I[0].dtype',\n            }\n        ),\n    'CONCATENATION':\n        Transform(\n            type='concat',\n            inputs=['!I[:]'],\n            outputs='!activation(transpose_like(O[0], I[0]), fused_activation_function)',\n            attribs={\n                'axis': '!transpose_axis_like(axis, I[0], O[0].rank)',\n            }\n        ),\n    'FULLY_CONNECTED':\n        Transform(\n            type='linear',\n            cond={\n                '!weights_format == \"DEFAULT\"': 'weights_format must be \"DEFAULT\"',\n            },\n            inputs=(\n                '!I[0] if keep_num_dims else flatten(I[0])',\n                '!I[1]',\n                '!unsqueeze_vector(I[2]) if len(I) > 2 else None',\n            ),\n            outputs='!activation(O[0], fused_activation_function)',\n        ),\n    'BATCH_MATMUL':\n        Transform(\n            type='matmul',\n            cond={\n                '!asymmetric_quantize_inputs == False': 'asymmetric_quantize_inputs must be False',\n            },\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!O[0]',\n            attribs={\n                'transposeA': '!adj_x',\n                'transposeB': '!adj_y',\n            },\n        ),\n    'L2_NORMALIZATION':\n        Transform(\n            type='l2_normalization',\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axes': '!list(range(I[0].rank))',\n            }\n        ),\n    'PRELU':\n        Transform(\n            type='prelu',\n            
inputs=('!I[0]', '!I[1]'),\n            outputs='!transpose_like(O[0], I[0])',\n        ),\n    ('PAD', 'MIRROR_PAD'):\n        Transform(\n            type='pad',\n            cond={\n                '!mode is None or mode < 2': 'mode must be 0 or 1',\n            },\n            defaults={\n                'mode': None,\n            },\n            using={\n                'paddings': '!transpose_list_like(as_const(I[1]), ref=I[0])',\n            },\n            inputs='!I[0]',\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'padding': '![tuple(item) for item in paddings]',\n                'border': '!\"reflect\" if mode == 0 else \"reflect-even\" if mode == 1 else \"constant\"',\n            }\n        ),\n    'GATHER':\n        Transform(\n            type='gather',\n            inputs=('!I[0]', '!I[1]'),\n            outputs='!transpose_like(O[0], I[0])',\n            attribs={\n                'axis': '!transpose_axis_like(axis, I[0])',\n            },\n        ),\n    'IDENTITY': _TFTransforms['Identity'],\n    'QUANTIZE': _TFTransforms['Identity'],\n    'TRANSPOSE': _TFTransforms['Transpose'],\n    'SPLIT': _TFTransforms['Split'],\n    'SPLIT_V': _TFTransforms['SplitV'],\n    'PACK': _TFTransforms['Pack'],\n    'UNPACK': _TFTransforms['Unpack'],\n    'TILE': _TFTransforms['Tile'],\n    'SQUEEZE': _TFTransforms['Squeeze'],\n    'EXPAND_DIMS': _TFTransforms['ExpandDims'],\n    'SLICE': _TFTransforms['Slice'],\n    'STRIDED_SLICE': _TFTransforms['StridedSlice'],\n    'RELU': _TFTransforms['Relu'],\n    'RELU6': _TFTransforms['Relu6'],\n    'ELU': _TFTransforms['Elu'],\n    'LEAKY_RELU': _TFTransforms['LeakyRelu'],\n    'LOGISTIC': _TFTransforms['Sigmoid'],\n    'SIN': _TFTransforms['Sin'],\n    'COS': _TFTransforms['Cos'],\n    'TAN': _TFTransforms['Tan'],\n    'ASIN': _TFTransforms['Asin'],\n    'ACOS': _TFTransforms['Acos'],\n    'ATAN': _TFTransforms['Atan'],\n    'SINH': _TFTransforms['Sinh'],\n    'COSH': 
_TFTransforms['Cosh'],\n    'TANH': _TFTransforms['Tanh'],\n    'ASINH': _TFTransforms['Asinh'],\n    'ACOSH': _TFTransforms['Acosh'],\n    'ATANH': _TFTransforms['Atanh'],\n    'EXP': _TFTransforms['Exp'],\n    'LOG': _TFTransforms['Log'],\n    'ABS': _TFTransforms['Abs'],\n    'NEG': _TFTransforms['Neg'],\n    'LOGICAL_NOT': _TFTransforms['LogicalNot'],\n    'FLOOR': _TFTransforms['Floor'],\n    'CEIL': _TFTransforms['Ceil'],\n    'ROUND': _TFTransforms['Round'],\n    'SQUARE': _TFTransforms['Square'],\n    'SQRT': _TFTransforms['Sqrt'],\n    'RSQRT': _TFTransforms['Rsqrt'],\n    'ADD': _TFTransforms['Add'],\n    'SUB': _TFTransforms['Sub'],\n    'MUL': _TFTransforms['Mul'],\n    'DIV': _TFTransforms['RealDiv'],\n    'POW': _TFTransforms['Pow'],\n    'MINIMUM': _TFTransforms['Minimum'],\n    'MAXIMUM': _TFTransforms['Maximum'],\n    'LOGICAL_AND': _TFTransforms['LogicalAnd'],\n    'LOGICAL_OR': _TFTransforms['LogicalOr'],\n    'LESS': _TFTransforms['Less'],\n    'LESS_EQUAL': _TFTransforms['LessEqual'],\n    'GREATER': _TFTransforms['Greater'],\n    'GREATER_EQUAL': _TFTransforms['GreaterEqual'],\n    'EQUAL': _TFTransforms['Equal'],\n    'NOT_EQUAL': _TFTransforms['NotEqual'],\n    'SELECT': _TFTransforms['Select'],\n    'REDUCE_MIN': _TFTransforms['Min'],\n    'REDUCE_MAX': _TFTransforms['Max'],\n    'MEAN': _TFTransforms['Mean'],\n    'SUM': _TFTransforms['Sum'],\n    'REDUCE_ANY': _TFTransforms['Any'],\n    'REDUCE_ALL': _TFTransforms['All'],\n    'ARG_MIN': _TFTransforms['ArgMin'],\n    'ARG_MAX': _TFTransforms['ArgMax'],\n    'SOFTMAX': _TFTransforms['Softmax'],\n    'LOCAL_RESPONSE_NORMALIZATION': _TFTransforms['LRN'],\n    'RESIZE_NEAREST_NEIGHBOR': _TFTransforms['ResizeNearestNeighbor'],\n    'RESIZE_BILINEAR': _TFTransforms['ResizeBilinear'],\n    'ADD_N': _TFTransforms['AddN'],\n    'CAST': _TFTransforms['Cast'],\n})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/convert.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .conversion import *\nfrom .model import utils\nimport numpy as np\nimport importlib\nimport argparse\nimport json\nimport six\n\n\ndef get_reader(input_format, decomposed, fold_constants, custom_shapes):\n    if input_format == 'tf':\n        from .io.tf.graphdef import Reader\n        return Reader(fold_constants=fold_constants)\n    elif input_format == 'tflite':\n        from .io.tf.lite import Reader\n        return Reader()\n    elif input_format == 'nnef':\n        from .io.nnef import Reader\n        return Reader(decomposed=decomposed, custom_shapes=custom_shapes)\n    elif input_format == 'onnx':\n        from .io.onnx import Reader\n        return Reader(simplify=fold_constants)\n    elif input_format == 'caffe2':\n        from .io.caffe2 import Reader\n        return Reader()\n    elif input_format == 'caffe':\n        from .io.caffe2 import Reader\n        return Reader(legacy=True)\n    else:\n        return None\n\n\ndef get_writer(output_format, fragments, fragment_dependencies, generate_fragments, annotate_shapes, compression):\n    if output_format == 'tf':\n        from .io.tf.graphdef import Writer\n        return Writer()\n    elif output_format == 'tflite':\n        from .io.tf.lite import Writer\n        return Writer()\n    elif output_format == 'nnef':\n        from .io.nnef import Writer\n        return 
Writer(fragments=fragments, fragment_dependencies=fragment_dependencies,\n                      generate_custom_fragments=generate_fragments,\n                      annotate_shapes=annotate_shapes, compression=compression)\n    elif output_format == 'onnx':\n        from .io.onnx import Writer\n        return Writer()\n    elif output_format == 'caffe2':\n        from .io.caffe2 import Writer\n        return Writer()\n    else:\n        return None\n\n\ndef get_converter(input_format, output_format, io_transpose, custom_transforms, custom_functions, custom_shapes,\n                  mirror_unsupported, keep_io_names):\n    if input_format == 'tf' and output_format == 'nnef':\n        from .conversion.tf_to_nnef import Converter\n        return Converter(io_transpose=io_transpose,\n                         custom_transforms=custom_transforms,\n                         custom_functions=custom_functions,\n                         mirror_unsupported=mirror_unsupported,\n                         keep_io_names=keep_io_names)\n    elif input_format == 'nnef' and output_format == 'tf':\n        from .conversion.nnef_to_tf import Converter\n        return Converter(io_transpose=io_transpose,\n                         custom_transforms=custom_transforms,\n                         custom_functions=custom_functions,\n                         mirror_unsupported=mirror_unsupported)\n    elif input_format == 'tflite' and output_format == 'nnef':\n        from .conversion.tflite_to_nnef import Converter\n        return Converter(io_transpose=io_transpose,\n                         custom_transforms=custom_transforms,\n                         custom_functions=custom_functions,\n                         mirror_unsupported=mirror_unsupported,\n                         keep_io_names=keep_io_names)\n    elif input_format == 'nnef' and output_format == 'tflite':\n        from .conversion.nnef_to_tflite import Converter\n        return Converter(io_transpose=io_transpose,\n              
           custom_transforms=custom_transforms,\n                         custom_functions=custom_functions,\n                         mirror_unsupported=mirror_unsupported)\n    elif (input_format == 'onnx' or input_format == 'caffe2' or input_format == 'caffe') and output_format == 'nnef':\n        from .conversion.onnx_to_nnef import Converter\n        return Converter(custom_transforms=custom_transforms,\n                         custom_functions=custom_functions,\n                         custom_shapes=custom_shapes,\n                         infer_shapes=bool(custom_shapes),\n                         mirror_unsupported=mirror_unsupported,\n                         keep_io_names=keep_io_names,\n                         io_transpose=io_transpose)\n    elif input_format == 'nnef' and (output_format == 'onnx' or output_format == 'caffe2'):\n        from .conversion.nnef_to_onnx import Converter\n        return Converter(custom_transforms=custom_transforms,\n                         custom_functions=custom_functions,\n                         mirror_unsupported=mirror_unsupported)\n    else:\n        return None\n\n\ndef get_optimizer(format, custom_optimizers=None, dequantize=False):\n    if format == 'nnef':\n        from .optimization.nnef_optimizer import Optimizer\n        return Optimizer(custom_optimizers=custom_optimizers, dequantize=dequantize)\n    elif format == 'tf':\n        from .optimization.tf_optimizer import Optimizer\n        return Optimizer(custom_optimizers=custom_optimizers)\n    elif format == 'tflite':\n        from .optimization.tflite_optimizer import Optimizer\n        return Optimizer(custom_optimizers=custom_optimizers)\n    elif format == 'onnx':\n        from .optimization.onnx_optimizer import Optimizer\n        return Optimizer(custom_optimizers=custom_optimizers)\n    else:\n        return None\n\n\ndef get_custom_converters(module_names):\n    CUSTOM_TRANSFORMS = \"CUSTOM_TRANSFORMS\"\n\n    transforms = {}\n    functions = {}\n 
   for module_name in module_names:\n        module = importlib.import_module(module_name)\n        if hasattr(module, CUSTOM_TRANSFORMS):\n            transforms.update(getattr(module, CUSTOM_TRANSFORMS))\n\n        functions.update(Converter.find_public_functions(module))\n\n    return transforms, functions\n\n\ndef get_custom_shapes(module_names):\n    CUSTOM_SHAPES = \"CUSTOM_SHAPES\"\n\n    shapes = {}\n    for module_name in module_names:\n        module = importlib.import_module(module_name)\n        if hasattr(module, CUSTOM_SHAPES):\n            shapes.update(getattr(module, CUSTOM_SHAPES))\n\n    return shapes\n\n\ndef get_custom_fragments(module_names):\n    CUSTOM_FRAGMENTS = \"CUSTOM_FRAGMENTS\"\n\n    fragments = {}\n    for module_name in module_names:\n        module = importlib.import_module(module_name)\n        if hasattr(module, CUSTOM_FRAGMENTS):\n            fragments.update(getattr(module, CUSTOM_FRAGMENTS))\n\n    return fragments\n\n\ndef get_custom_optimizers(module_names):\n    CUSTOM_OPTIMIZERS = \"CUSTOM_OPTIMIZERS\"\n\n    optimizers = {}\n    for module_name in module_names:\n        module = importlib.import_module(module_name)\n        if hasattr(module, CUSTOM_OPTIMIZERS):\n            optimizers.update(getattr(module, CUSTOM_OPTIMIZERS))\n\n    return optimizers\n\n\ndef needs_conversion(input_format, output_format):\n    if input_format == 'caffe2' and output_format == 'onnx':\n        return False\n    elif input_format == 'onnx' and output_format == 'caffe2':\n        return False\n    elif input_format == 'caffe' and (output_format == 'onnx' or output_format == 'caffe2'):\n        return False\n    else:\n        return input_format != output_format\n\n\ndef check_nan_or_inf(graph, which):\n    valid = True\n    for tensor in graph.tensors:\n        if tensor.data is not None:\n            if np.any(np.isnan(tensor.data)):\n                print(\"{} graph contains nan in tensor '{}'\".format(which, tensor.name))\n             
   valid = False\n            if np.any(np.isinf(tensor.data)):\n                print(\"{} graph contains inf in tensor '{}'\".format(which, tensor.name))\n                valid = False\n\n    for op in graph.operations:\n        for key, value in six.iteritems(op.attribs):\n            if isinstance(value, np.ndarray) and np.issubdtype(value.dtype.type, np.floating):\n                if np.any(np.isnan(value)):\n                    print(\"{} graph contains nan in attribute '{}' of operator '{}'\".format(which, key, op.type) +\n                          (\" named '{}'\".format(op.name) if op.name is not None else \"\"))\n                    valid = False\n                if np.any(np.isinf(value)):\n                    print(\"{} graph contains inf in attribute '{}' of operator '{}'\".format(which, key, op.type) +\n                          (\" named '{}'\".format(op.name) if op.name is not None else \"\"))\n                    valid = False\n\n    return valid\n\n\ndef main(args):\n    io_transpose = False if args.io_transpose is None else True if len(args.io_transpose) == 0 else args.io_transpose\n\n    custom_transforms, custom_functions = get_custom_converters(args.custom_converters) \\\n        if args.custom_converters is not None else (None, None)\n\n    custom_shapes = get_custom_shapes(args.custom_shapes) or {} if args.custom_shapes is not None else {}\n\n    converter = None\n    if needs_conversion(args.input_format, args.output_format):\n        converter = get_converter(args.input_format, args.output_format, io_transpose,\n                                  custom_transforms, custom_functions, custom_shapes,\n                                  args.mirror_unsupported, args.keep_io_names)\n        if converter is None:\n            print(\"Unsupported conversion: {} to {}\".format(args.input_format, args.output_format))\n            return -1\n\n    decomposed = converter.decomposed_operations() if converter else []\n    fragments = 
converter.defined_operations() if converter else {}\n    dependencies = converter.defined_operation_dependencies() if converter else {}\n\n    if args.decompose is not None:\n        decomposed += args.decompose\n\n    if converter is not None:\n        custom_shapes.update(converter.defined_shapes())\n\n    if args.custom_fragments is not None:\n        fragments.update(get_custom_fragments(args.custom_fragments))\n\n    reader = get_reader(args.input_format, decomposed=decomposed, fold_constants=args.fold_constants,\n                        custom_shapes=custom_shapes)\n    if reader is None:\n        print(\"Unsupported input-format: {}\".format(args.input_format))\n        return -1\n\n    writer = get_writer(args.output_format,\n                        fragments=fragments, fragment_dependencies=dependencies,\n                        generate_fragments=args.generate_custom_fragments,\n                        annotate_shapes=args.annotate_shapes, compression=args.compress)\n    if writer is None:\n        print(\"Unsupported output-format: {}\".format(args.output_format))\n        return -1\n\n    default_output_model = args.input_model + '.' 
+ (args.output_format if args.output_format != 'tf' else 'pb')\n\n    reader_kwargs = {}\n    if args.input_shapes is not None:\n        input_shapes = eval(args.input_shapes)\n        if not isinstance(input_shapes, dict) or not all(isinstance(name, str) and isinstance(shape, tuple)\n                                                        for name, shape in six.iteritems(input_shapes)):\n            print(\"'--input-shapes' must be a dict of strings to tuples\")\n            return -1\n\n        reader_kwargs['input_shapes'] = input_shapes\n\n    try:\n        graph = reader(args.input_model, **reader_kwargs)\n\n        if not check_nan_or_inf(graph, 'Input'):\n            return -1\n\n        if args.input_names is not None or args.output_names is not None:\n            not_found_names = []\n\n            if args.input_names is not None:\n                input_names = set(args.input_names)\n                inputs = [tensor for tensor in graph.tensors if tensor.name in input_names]\n\n                if len(inputs) != len(input_names):\n                    found_names = [tensor.name for tensor in inputs]\n                    not_found_names.extend([input_name for input_name in input_names if input_name not in found_names])\n                else:\n                    graph.inputs = inputs\n\n            if args.output_names is not None:\n                output_names = set(args.output_names)\n                outputs = [tensor for tensor in graph.tensors if tensor.name in output_names]\n\n                if len(outputs) != len(output_names):\n                    found_names = [tensor.name for tensor in outputs]\n                    not_found_names.extend([output_name for output_name in output_names if output_name not in found_names])\n                else:\n                    graph.outputs = outputs\n\n            if len(not_found_names) > 0:\n                print(\"Could not find tensor(s) in graph: {}\".format(not_found_names))\n                return -1\n\n         
   utils.remove_unreachable(graph)\n\n        optimizer = get_optimizer(args.input_format)\n        if optimizer:\n            graph = optimizer(graph, only_required=True)\n\n            if not check_nan_or_inf(graph, 'Optimized input'):\n                return -1\n\n        if args.static_only:\n            if not utils.remove_dynamic(graph):\n                print(\"Conversion is called with --static-only but model contains dynamic inputs, \"\n                      \"which would result in an empty model\")\n                return -1\n            utils.remove_unreachable(graph)\n\n        if converter:\n            graph.sort()\n            graph = converter(graph)\n\n            if not check_nan_or_inf(graph, 'Converted'):\n                return -1\n\n        tensor_mapping = converter.tensor_mapping() if args.tensor_mapping is not None and converter else None\n\n        if args.optimize:\n            custom_optimizers = get_custom_optimizers(args.custom_optimizers) if args.custom_optimizers is not None else None\n            optimizer = get_optimizer(args.output_format, custom_optimizers=custom_optimizers, dequantize=args.dequantize)\n            if optimizer:\n                tensor_lookup = {tensor.name: tensor for tensor in graph.tensors if tensor.name is not None} \\\n                    if args.tensor_mapping is not None else None\n\n                graph = optimizer(graph)\n\n                if not check_nan_or_inf(graph, 'Optimized output'):\n                    return -1\n\n                if args.tensor_mapping is not None:\n                    if converter:\n                        tensor_mapping = {src: tensor_lookup[dst].name for src, dst in six.iteritems(tensor_mapping)\n                                          if tensor_lookup[dst].graph is graph}\n                    else:\n                        tensor_mapping = {name: tensor.name for name, tensor in six.iteritems(tensor_lookup)\n                                          if tensor.graph is 
graph}\n\n        writer(graph, args.output_model or default_output_model)\n        print(\"Written '{}'\".format(args.output_model or default_output_model))\n\n        if args.tensor_mapping is not None:\n            with open(args.tensor_mapping, 'w') as file:\n                json.dump(tensor_mapping, file, indent=4)\n\n            print(\"Written '{}'\".format(args.tensor_mapping))\n\n        return 0\n    except IOError as e:\n        print(e)\n        return -1\n    except ConversionError as e:\n        print(e)\n        if e.details:\n            for detail in e.details:\n                print(detail)\n        return -1\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--input-model', type=str, required=True,\n                        help='The input model')\n    parser.add_argument('--output-model', type=str, default=None,\n                        help='The output model')\n    parser.add_argument('--input-format', type=str, required=True,\n                        choices=['tf', 'tflite', 'onnx', 'nnef', 'caffe2', 'caffe'],\n                        help='The format of the input model')\n    parser.add_argument('--output-format', type=str, required=True,\n                        choices=['tf', 'tflite', 'onnx', 'nnef', 'caffe2'],\n                        help='The format of the output model')\n    parser.add_argument('--input-shapes', type=str, default=None,\n                        help='The (dict of) shape(s) to use for input(s).')\n    parser.add_argument('--io-transpose', type=str, nargs='*', default=None,\n                        help='The inputs/outputs to transpose')\n    parser.add_argument('--fold-constants', action='store_true',\n                        help='Enable folding of constant ops')\n    parser.add_argument('--optimize', action='store_true',\n                        help='Turn on optimization of resulting NNEF model')\n    parser.add_argument('--dequantize', action='store_true',\n               
         help='Dequantize the weights of a quantized network and omit quantization parameters')\n    parser.add_argument('--custom-converters', type=str, nargs='+',\n                        help='Module(s) containing custom converter code')\n    parser.add_argument('--custom-shapes', type=str, nargs='+',\n                        help='Module(s) containing custom shape inference code (when converting to/from NNEF)')\n    parser.add_argument('--custom-fragments', type=str, nargs='+',\n                        help='Module(s) containing custom fragment code (when converting to NNEF)')\n    parser.add_argument('--custom-optimizers', type=str, nargs='+',\n                        help='Module(s) containing custom optimizer code (when converting to NNEF)')\n    parser.add_argument('--mirror-unsupported', action='store_true',\n                        help='Enable mirror-conversion of unsupported operations')\n    parser.add_argument('--generate-custom-fragments', action='store_true',\n                        help='Enable automatic generation of fragments for custom operations')\n    parser.add_argument('--keep-io-names', action='store_true',\n                        help='Keep the names of model inputs/outputs if possible')\n    parser.add_argument('--decompose', type=str, nargs='*', default=None,\n                        help='Names of operators to be decomposed by NNEF parser')\n    parser.add_argument('--input-names', type=str, nargs='+',\n                        help='Names of input tensors where the graph is cut before conversion')\n    parser.add_argument('--output-names', type=str, nargs='+',\n                        help='Names of output tensors where the graph is cut before conversion')\n    parser.add_argument('--static-only', action='store_true',\n                        help='Only convert the static part of the graph, for which tensor shapes are known')\n    parser.add_argument('--tensor-mapping', type=str, nargs='?', default=None, const='tensor_mapping.json',\n       
                 help='Export mapping of tensor names from input to output model')\n    parser.add_argument('--annotate-shapes', action='store_true',\n                        help='Add tensor shapes as comments to NNEF output model')\n    parser.add_argument('--compress', type=int, nargs='?', default=None, const=1,\n                        help='Compress output NNEF folder at the given compression level')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execute.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .utils import stdio\nfrom .interpreter import Statistics\nfrom collections import namedtuple\nimport importlib\nimport argparse\nimport numpy as np\nimport json\nimport nnef\nimport six\nimport sys\nimport os\n\n\n_onnx_dtype_to_numpy = {\n    \"tensor(float)\": np.float32,\n    \"tensor(double)\": np.float64,\n    \"tensor(int8)\": np.int8,\n    \"tensor(int16)\": np.int16,\n    \"tensor(int32)\": np.int32,\n    \"tensor(int64)\": np.int64,\n    \"tensor(uint8)\": np.uint8,\n    \"tensor(uint16)\": np.uint16,\n    \"tensor(uint32)\": np.uint32,\n    \"tensor(uint64)\": np.uint64,\n    \"tensor(bool)\": np.bool_,\n}\n\n_nnef_dtype_to_numpy = {\n    'scalar': np.float32,\n    'integer': np.int32,\n    'logical': np.bool_,\n}\n\n_numpy_dtype_remap = {\n    np.short: np.int64,\n    np.longlong: np.int64,\n    np.ushort: np.uint64,\n    np.uint: np.uint64,\n    np.ulonglong: np.uint64,\n    np.double: np.float64,\n    np.longdouble: np.float64,\n}\n\n\ndef _is_lambda(v):\n    LAMBDA = lambda: 0\n    return isinstance(v, type(LAMBDA)) and v.__name__ == LAMBDA.__name__\n\n\ndef uniform(min=0, max=1):\n    return lambda shape: np.random.uniform(min, max, shape)\n\n\ndef normal(mean=0, std=1):\n    return lambda shape: np.random.normal(mean, std, shape)\n\n\ndef needs_transpose(io_transpose, name):\n    return io_transpose is not None and 
(len(io_transpose) == 0 or name in io_transpose)\n\n\ndef transpose_channels_last_to_first(x):\n    rank = len(x.shape)\n    return np.transpose(x, axes=[0, rank - 1] + list(range(1, rank - 1)))\n\n\ndef transpose_channels_first_to_last(x):\n    rank = len(x.shape)\n    return np.transpose(x, axes=[0] + list(range(2, rank)) + [1])\n\n\ndef read_input(file, name, shape, dtype, transpose):\n    data = nnef.read_tensor(file)\n\n    any_batch = shape[0] == 0\n    offset = int(any_batch)\n    if tuple(data.shape[offset:]) != tuple(shape[offset:]):\n        raise ValueError(\"Mismatch between declared and read shape for input '{}'; {} vs {}\"\n                         .format(name, data.shape, shape))\n    if data.dtype != dtype:\n        raise ValueError(\"Mismatch between declared and read dtype for input '{}'; {} vs {}\"\n                         .format(name, data.dtype, dtype))\n\n    return transpose_channels_first_to_last(data) if transpose else data\n\n\ndef compute_statistics(array):\n    if array.size == 0:\n        return Statistics(num=0, min=0.0, max=0.0, sum=0.0, ssum=0.0)\n\n    return Statistics(\n        num=array.size,\n        min=float(np.min(array)),\n        max=float(np.max(array)),\n        sum=float(np.sum(array)),\n        ssum=float(np.sum(array * array)),\n    )\n\n\nclass RandomInputSource:\n\n    def __init__(self, distribution):\n        self._distribution = distribution\n\n    def __call__(self, name, shape, dtype):\n        return self._distribution(shape).astype(dtype)\n\n\nclass StreamInputSource:\n\n    def __init__(self, stream, io_transpose):\n        self._stream = stream\n        self._io_transpose = io_transpose\n\n    def __call__(self, name, shape, dtype):\n        return read_input(self._stream, name, shape, dtype,\n                          transpose=needs_transpose(self._io_transpose, name))\n\n\nclass FileInputSource:\n\n    def __init__(self, folder, io_transpose):\n        self._folder = folder\n        self._io_transpose 
= io_transpose\n\n    def __call__(self, name, shape, dtype):\n        with open(os.path.join(self._folder, name + '.dat'), 'rb') as file:\n            return read_input(file, name, shape, dtype,\n                              transpose=needs_transpose(self._io_transpose, name))\n\n\nTensorInfo = namedtuple('TensorInfo', ['name', 'shape', 'dtype'])\n\n\nclass Executor:\n\n    def input_info(self):\n        raise NotImplementedError()\n\n    def output_info(self):\n        raise NotImplementedError()\n\n    def tensor_info(self):\n        raise NotImplementedError()\n\n    def __call__(self, inputs, output_names=None, collect_statistics=False):\n        raise NotImplementedError()\n\n\nclass TFExecutor(Executor):\n\n    def __init__(self, model_path):\n        try:\n            import tensorflow.compat.v1 as tf\n        except ImportError:\n            import tensorflow as tf\n        from .io.tf.graphdef.protobuf import GraphDef\n\n        self.Session = tf.Session\n\n        graph_def = GraphDef()\n        with open(model_path, 'rb') as file:\n            graph_def.ParseFromString(file.read())\n\n        self.graph = tf.Graph()\n        with self.graph.as_default():\n            tf.import_graph_def(graph_def, name='')\n\n        ops = self.graph.get_operations()\n        consumed = {tensor for op in ops for tensor in op.inputs}\n\n        self.inputs = [op.outputs[0] for op in ops if op.type == 'Placeholder']\n        self.outputs = [tensor for op in ops if len(op.inputs) for tensor in op.outputs\n                        if tensor not in consumed and tensor.name.endswith(':0')]\n\n    def input_info(self):\n        return [TensorInfo(tensor.name, tuple(tensor.shape.as_list()), tensor.dtype.as_numpy_dtype)\n                for tensor in self.inputs]\n\n    def output_info(self):\n        return [TensorInfo(tensor.name, tuple(tensor.shape.as_list()), tensor.dtype.as_numpy_dtype)\n                for tensor in self.outputs]\n\n    def tensor_info(self):\n        tensors = 
[tensor for op in self.graph.get_operations() for tensor in op.outputs]\n        return [TensorInfo(tensor.name, tuple(tensor.shape.as_list()), tensor.dtype.as_numpy_dtype)\n                for tensor in tensors]\n\n    def __call__(self, inputs, output_names=None, collect_statistics=False):\n        ops = self.graph.get_operations()\n\n        if output_names is not None:\n            tensor_names = {tensor.name for op in ops for tensor in op.outputs}\n            invalid = {name for name in output_names if name not in tensor_names}\n            if len(invalid):\n                raise ValueError('Invalid tensor name(s): {}'.format(invalid))\n\n            outputs = {tensor.name: tensor for op in ops for tensor in op.outputs if tensor.name in output_names}\n        else:\n            outputs = {tensor.name: tensor for tensor in self.outputs}\n\n        if collect_statistics:\n            tensors = {tensor.name: tensor for op in ops for tensor in op.outputs if tensor.name.endswith(':0')}\n            with self.Session(graph=self.graph) as sess:\n                values = sess.run(tensors, feed_dict=inputs)\n\n                outputs = {name: values[name] for name in outputs}\n\n                stats = {}\n                for name, array in six.iteritems(values):\n                    stats[name] = compute_statistics(array)\n\n                return outputs, stats\n        else:\n            with self.Session(graph=self.graph) as sess:\n                outputs = sess.run(outputs, feed_dict=inputs)\n\n            return outputs, None\n\n\nclass TFLiteExecutor(Executor):\n\n    def __init__(self, model_path):\n        try:\n            import tensorflow.compat.v1 as tf\n        except ImportError:\n            import tensorflow as tf\n\n        self.interpreter = tf.lite.Interpreter(model_path=model_path)\n        self.interpreter.allocate_tensors()\n\n    def input_info(self):\n        return [TensorInfo(tensor['name'], tensor['shape'], tensor['dtype'])\n                
for tensor in self.interpreter.get_input_details()]\n\n    def output_info(self):\n        return [TensorInfo(tensor['name'], tensor['shape'], tensor['dtype'])\n                for tensor in self.interpreter.get_output_details()]\n\n    def tensor_info(self):\n        return [TensorInfo(tensor['name'], tensor['shape'], tensor['dtype'])\n                for tensor in self.interpreter.get_tensor_details()]\n\n    def __call__(self, inputs, output_names=None, collect_statistics=False):\n        for tensor in self.interpreter.get_input_details():\n            self.interpreter.set_tensor(tensor['index'], inputs[tensor['name']])\n\n        self.interpreter.invoke()\n\n        if output_names is not None:\n            tensor_names = {tensor['name'] for tensor in self.interpreter.get_tensor_details()}\n            invalid = {name for name in output_names if name not in tensor_names}\n            if len(invalid):\n                raise ValueError('Invalid tensor name(s): {}'.format(invalid))\n\n            outputs = {tensor['name']: self.interpreter.get_tensor(tensor['index'])\n                       for tensor in self.interpreter.get_tensor_details()\n                       if tensor['name'] in output_names}\n        else:\n            outputs = {tensor['name']: self.interpreter.get_tensor(tensor['index'])\n                       for tensor in self.interpreter.get_output_details()}\n\n        stats = {tensor['name']: compute_statistics(self.interpreter.get_tensor(tensor['index']))\n                 for tensor in self.interpreter.get_tensor_details()} if collect_statistics else None\n\n        return outputs, stats\n\n\nclass ONNXExecutor(Executor):\n\n    def __init__(self, model_path, require_intermediates=False):\n        import onnxruntime\n\n        options = onnxruntime.SessionOptions()\n        options.inter_op_num_threads = 1\n        options.intra_op_num_threads = 1\n        options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL\n\n   
     self.session = onnxruntime.InferenceSession(model_path, sess_options=options,\n                                                    providers=['CPUExecutionProvider'])\n        self.inputs = [TensorInfo(tensor.name, tensor.shape, _onnx_dtype_to_numpy[tensor.type])\n                       for tensor in self.session.get_inputs()]\n        self.outputs = [TensorInfo(tensor.name, tensor.shape, _onnx_dtype_to_numpy[tensor.type])\n                        for tensor in self.session.get_outputs()]\n\n        if require_intermediates:\n            import onnx\n            from onnx.shape_inference import infer_shapes\n\n            model = onnx.load_model(model_path)\n            model = infer_shapes(model)\n\n            for info in model.graph.value_info:\n                output_info = model.graph.output.add()\n                output_info.ParseFromString(info.SerializeToString())\n\n            self.session = onnxruntime.InferenceSession(model.SerializeToString(), sess_options=options,\n                                                        providers=['CPUExecutionProvider'])\n\n    def input_info(self):\n        return self.inputs\n\n    def output_info(self):\n        return self.outputs\n\n    def tensor_info(self):\n        return None\n\n    def __call__(self, inputs, output_names=None, collect_statistics=False):\n        if output_names is not None:\n            inputs_as_outputs = {name: inputs[name] for name in output_names if name in inputs}\n            output_names = [name for name in output_names if name not in inputs]\n        else:\n            output_names = [output.name for output in self.outputs]\n            inputs_as_outputs = {}\n\n        if collect_statistics:\n            original_outputs = [output.name for output in self.outputs]\n            fetch_names = [tensor.name for tensor in self.session.get_outputs()\n                           if tensor.name not in original_outputs] + original_outputs\n            values = 
self.session.run(fetch_names, inputs)\n            outputs = {name: value for name, value in zip(fetch_names, values) if name in set(output_names)}\n\n            stats = {}\n            for name, value in zip(fetch_names, values):\n                stats[name] = compute_statistics(value)\n        else:\n            values = self.session.run(output_names, inputs)\n            outputs = {name: value for name, value in zip(output_names, values)}\n            stats = None\n\n        outputs.update(inputs_as_outputs)\n\n        return outputs, stats\n\n\nclass NNEFExecutor(Executor):\n\n    def __init__(self, model_path, custom_operators, decomposed):\n        from .interpreter.pytorch import Interpreter\n        self.interpreter = Interpreter(model_path, custom_operators=custom_operators, decomposed=decomposed)\n\n    def input_info(self):\n        return [TensorInfo(tensor.name, tensor.shape, _nnef_dtype_to_numpy[tensor.dtype])\n                for tensor in self.interpreter.input_details()]\n\n    def output_info(self):\n        return [TensorInfo(tensor.name, tensor.shape, _nnef_dtype_to_numpy[tensor.dtype])\n                for tensor in self.interpreter.output_details()]\n\n    def tensor_info(self):\n        return [TensorInfo(tensor.name, tensor.shape, _nnef_dtype_to_numpy[tensor.dtype])\n                for tensor in self.interpreter.tensor_details()]\n\n    def __call__(self, inputs, output_names=None, collect_statistics=False):\n        inputs = [inputs[tensor.name] for tensor in self.interpreter.input_details()]\n        if collect_statistics:\n            return self.interpreter(inputs, output_names, collect_statistics)\n        else:\n            return self.interpreter(inputs, output_names, collect_statistics), None\n\n\ndef get_executor(format, model_path, require_intermediates, custom_operators, decomposed):\n    if format == 'tf':\n        return TFExecutor(model_path)\n    elif format == 'tflite':\n        return TFLiteExecutor(model_path)\n    elif 
format == 'onnx':\n        return ONNXExecutor(model_path, require_intermediates)\n    elif format == 'nnef':\n        return NNEFExecutor(model_path, custom_operators, decomposed)\n    else:\n        return None\n\n\ndef write_nnef_tensor(filename, value):\n    with open(filename, 'wb') as file:\n        dtype = _numpy_dtype_remap.get(value.dtype.type)\n        if dtype is not None:\n            value = value.astype(dtype)\n\n        nnef.write_tensor(file, value)\n\n\ndef write_statistics(filename, statistics):\n    statistics = {name: {'min': stats.min, 'max': stats.max, 'mean': stats.mean(), 'std': stats.std()}\n                  for name, stats in six.iteritems(statistics)}\n\n    with open(filename, 'w') as file:\n        json.dump(statistics, file, indent=4)\n\n\ndef get_custom_operators(module_names):\n    CUSTOM_OPERATORS = \"CUSTOM_OPERATORS\"\n\n    operators = {}\n    for module_name in module_names:\n        module = importlib.import_module(module_name)\n        if hasattr(module, CUSTOM_OPERATORS):\n            operators.update(getattr(module, CUSTOM_OPERATORS))\n\n    return operators\n\n\ndef batched_info(tensor_info, batch_size):\n    for info in tensor_info:\n        if info.shape[0] != batch_size and info.shape[0] != 1 and not isinstance(info.shape[0], str):\n            raise ValueError('invalid input shape {} for batch size {}'.format(info.shape, batch_size))\n\n    return [TensorInfo(name=info.name, shape=(batch_size, *info.shape[1:]), dtype=info.dtype)\n            for info in tensor_info]\n\n\ndef accumulate_statistics(global_stats, local_stats):\n    if global_stats is None:\n        return local_stats\n    for name, stats in six.iteritems(local_stats):\n        global_stats[name] += stats\n    return global_stats\n\n\ndef main(args):\n    if args.input_path is not None:\n        source = FileInputSource(args.input_path, args.io_transpose)\n    elif args.random is not None:\n        if args.batch_size == 0:\n            print('batch-size 
must not be 0 when inputs are randomly generated', file=sys.stderr)\n            return -1\n\n        try:\n            distribution = eval(args.random)\n            if not _is_lambda(distribution):\n                distribution = distribution()\n            source = RandomInputSource(distribution)\n        except Exception as e:\n            print(\"Could not evaluate distribution: \" + str(e), file=sys.stderr)\n            return -1\n    else:\n        if not stdio.is_stdin_piped():\n            print('Input must be piped', file=sys.stderr)\n            return -1\n\n        stdio.set_stdin_to_binary()\n        source = StreamInputSource(sys.stdin, args.io_transpose)\n\n    output_names = eval(args.output_names) if args.output_names is not None and args.output_names != \"*\" else args.output_names\n    custom_operators = get_custom_operators(args.custom_operators) if args.custom_operators is not None else None\n\n    if args.random is not None and args.seed is not None:\n        np.random.seed(args.seed)\n\n    collect_statistics = args.statistics is not None\n\n    try:\n        executor = get_executor(args.format, args.model, collect_statistics, custom_operators, args.decompose)\n\n        if isinstance(output_names, dict):\n            fetch_names = output_names.keys()\n        elif output_names == \"*\":\n            tensors = executor.tensor_info()\n            fetch_names = [info.name for info in tensors] if tensors is not None else None\n        else:\n            fetch_names = output_names\n\n        input_info = executor.input_info()\n        if args.batch_size is not None:\n            input_info = batched_info(input_info, args.batch_size)\n\n        output_info = executor.output_info()\n\n        inputs = {info.name: source(info.name, info.shape, info.dtype) for info in input_info}\n\n        batch_size = args.batch_size\n        if batch_size == 0:\n            batch_size = next(iter(six.itervalues(inputs))).shape[0]\n            if not 
all(input.shape[0] == batch_size for input in six.itervalues(inputs)):\n                print('All inputs must have the same batch-size', file=sys.stderr)\n                return -1\n\n        if batch_size is not None and batch_size != 1:\n            slices = {name: [] for name in fetch_names} if fetch_names is not None else \\\n                     {info.name: [] for info in output_info}\n            stats = None\n\n            for k in range(batch_size):\n                slice_inputs = {name: np.expand_dims(data[k], axis=0) for name, data in six.iteritems(inputs)}\n                slice_outputs, slice_stats = executor(slice_inputs, fetch_names, collect_statistics)\n\n                for name, data in six.iteritems(slice_outputs):\n                    slices[name].append(data)\n\n                if collect_statistics:\n                    stats = accumulate_statistics(stats, slice_stats)\n\n            outputs = {name: np.concatenate(items, axis=0) for name, items in six.iteritems(slices)}\n        else:\n            outputs, stats = executor(inputs, fetch_names, collect_statistics)\n\n    except ValueError as e:\n        print(e, file=sys.stderr)\n        return -1\n\n    for name, value in six.iteritems(outputs):\n        if needs_transpose(args.io_transpose, name):\n            outputs[name] = transpose_channels_last_to_first(value)\n\n    if isinstance(output_names, dict):\n        outputs = {output_names[name]: value for name, value in six.iteritems(outputs)}\n\n    if args.tensor_mapping is not None:\n        with open(args.tensor_mapping) as file:\n            tensor_mapping = json.load(file)\n\n        if stats is not None:\n            stats = {tensor_mapping.get(key, key): value for key, value in six.iteritems(stats)}\n\n    if stats is not None:\n        write_statistics(args.statistics, stats)\n        print('Written {}'.format(args.statistics))\n\n    if args.output_path is not None:\n        for name, value in six.iteritems(outputs):\n            
filename = os.path.join(args.output_path, name + \".dat\")\n            write_nnef_tensor(filename, value)\n            print('Written {}'.format(filename))\n    else:\n        if not stdio.is_stdout_piped():\n            if collect_statistics:\n                return 0\n\n            print('Output must be piped', file=sys.stderr)\n            return -1\n\n        stdio.set_stdout_to_binary()\n\n        for name, value in six.iteritems(outputs):\n            nnef.write_tensor(sys.stdout, value)\n\n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('model', type=str,\n                        help='The model to execute')\n    parser.add_argument('--format', type=str, required=True, choices=['tf', 'tflite', 'onnx', 'nnef'],\n                        help='The format of the model')\n    parser.add_argument('--random', type=str, default=None,\n                        help='Random distribution for input generation')\n    parser.add_argument('--seed', type=int, default=None,\n                        help='Random seed for input generation')\n    parser.add_argument('--input-path', type=str, default=None,\n                        help='Folder to read inputs from')\n    parser.add_argument('--output-path', type=str, default=None,\n                        help='Folder to save outputs into')\n    parser.add_argument('--output-names', type=str, default=None,\n                        help='The set (dict) of tensor names (to file names) considered as outputs to be saved. 
'\n                             'Use * to save all tensors')\n    parser.add_argument('--io-transpose', type=str, nargs='*', default=None,\n                        help='The inputs/outputs to transpose from channels last to channels first dimension order')\n    parser.add_argument('--decompose', type=str, nargs='*', default=None,\n                        help='Names of operators to be decomposed by the NNEF parser')\n    parser.add_argument('--statistics', type=str, nargs='?', default=None, const='stats.json',\n                        help='Calculate activation statistics and save to output path in json format')\n    parser.add_argument('--custom-operators', type=str, nargs='+', default=None,\n                        help='Module(s) containing custom operator code')\n    parser.add_argument('--batch-size', type=int, default=None,\n                        help='Specify batch-size for single-batch models')\n    parser.add_argument('--tensor-mapping', type=str, default=None,\n                        help='Use mapping of tensor names for statistics')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/__init__.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/__init__.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/__init__.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/relax/__init__.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nNNEF frontend for converting graphs into Relax IRModels.\n\"\"\"\n\nimport tvm\nfrom packaging import version\n\nver = version.parse(tvm.__version__)\nif ver.minor < 20:\n    raise ImportError(f\"TVM version 0.20 or higher is required, but found {tvm.__version__}\")\n\n\nfrom .nnef_frontend import from_nnef\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/relax/nnef_frontend.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"NNEF: Neural Network Exchange Format frontend for TVM relay\"\"\"\nimport os\nimport typing\nimport nnef\nimport numpy as np\n\nimport tvm\nfrom tvm import relax\nfrom tvm.ir import IRModule\nfrom tvm.relax import expr as tvm_expr\n\nfrom .nnef_ops import _get_converter_map\n\n\ndef get_type(elem_type: str):\n    \"\"\"\n    Gives numpy style type for nnef primitive types, uses x32 versions.\n\n    :param elem_type: string, (scalar, integer, logical, string)\n    :return: returns numpy dtype equivalent (float32, int32, bool, string)\n    \"\"\"\n    if elem_type == \"scalar\":\n        return \"float32\"\n    if elem_type == \"integer\":\n        return \"int32\"\n    if elem_type == \"logical\":\n        return \"bool\"\n    if elem_type == \"string\":\n        return \"string\"\n    raise TypeError(f'Type \"{elem_type}\" is not implemented')\n\n\n# Converter class\nclass NNEFConverter:\n    \"\"\"\n    Helper class for class level attributes, for conversion of NNEF model.\n    Public method to use is from_nnef.\n\n    Parameters\n    ----------\n\n    keep_params_in_input : bool, optional\n        If this parameter is true, the nnef variables will be converted to\n        constants, and be embedded into the relay model, allowing optimizations\n        at compile time.\n        If False the params will have to be added as inputs,\n        
the model can't load them automatically\n\n    \"\"\"\n\n    def __init__(self, keep_params_in_input=False):\n        self._nodes = {}\n        self._consts = {}\n        self._inputs = {}\n        self._num_inputs = 0\n        self._params = {}\n        self._num_params = 0\n        self._keep_params_in_input = keep_params_in_input\n        self._bb = relax.BlockBuilder()\n\n    def from_nnef(self, graph: nnef.Graph) -> tvm.IRModule:\n        \"\"\"\n        Convert an NNEF model into an equivalent TVM Relax IRModule.\n\n        Parameters\n        ----------\n        graph : nnef.Graph\n            An NNEF Graph object that was imported with nnef.load_graph.\n            Shapes should be inferred by nnef.infer_shapes on graph beforehand.\n\n        Returns\n        -------\n        mod : tvm.IRModule\n            The Relax module for compilation; when keep_params_in_input is True,\n            the parameter values are attached as the \"params\" attribute of the\n            main function\n\n        \"\"\"\n        with self._bb.function(\"main\"):\n            with self._bb.dataflow():\n                self._parse_inputs(graph)\n                self._construct_nodes(graph)\n\n                outputs = [self._nodes[n] for n in graph.outputs]\n                outputs = outputs[0] if len(outputs) == 1 else tvm_expr.Tuple(outputs)\n\n                output_var = self._bb.emit_output(outputs)\n\n            func_attrs = {\"num_input\": self._num_inputs}\n\n            input_list = [value for value in self._inputs.values() if isinstance(value, relax.Var)]\n\n            if self._keep_params_in_input and self._params:\n                param_var_list, param_value_list = map(list, zip(*self._params.values()))\n                # extend (not append) so the function signature gets a flat list of vars\n                input_list.extend(param_var_list)\n                func_attrs[\"params\"] = param_value_list\n\n            self._bb.emit_func_output(output_var, input_list)\n\n        relax_mod = self._bb.get()\n        relax_mod[\"main\"] = relax_mod[\"main\"].with_attrs(func_attrs)\n        return 
relax_mod\n\n    def _parse_inputs(self, graph):\n        \"\"\"Save inputs into class from inputs attrib of graph\"\"\"\n        for inp in graph.inputs:\n            self._num_inputs += 1\n            tensor = graph.tensors[inp]\n            self._nodes[inp] = self._new_var(inp, shape=tensor.shape, dtype=get_type(tensor.dtype))\n            self._inputs[inp] = self._nodes[inp]\n\n    def _construct_nodes(self, graph):\n        \"\"\"Construct TVM relay calls from every operation of the nnef graph\"\"\"\n        for op in graph.operations:\n            if op.name == \"external\":\n                # externals are handled as input, not needed,\n                # but nnef treats them as operations as well\n                continue\n\n            if op.name == \"variable\":\n                self._set_variable(graph.tensors[op.outputs[\"output\"]])\n\n            elif op.name == \"constant\":\n                self._set_const(op)\n\n            else:\n                # every other operator can be grouped more easily,\n                # as it does not need self for conversion\n                self._set_operator(op)\n\n    def _set_operator(self, node):\n        self._set_literal_inputs(node)\n        inputs = []\n        for ink, inv in node.inputs.items():\n            if isinstance(inv, list):\n                for i, linv in enumerate(inv):\n                    if linv in self._nodes.keys():\n                        inputs.append(self._nodes[linv])\n                    else:  # handle literal inputs\n                        name = f\"{node.name}_{ink}_{i}\"\n                        assert name in self._nodes, f\"{name} has not been properly handled\"\n                        inputs.append(self._nodes[name])\n\n            else:\n                if inv in self._nodes.keys():\n                    inputs.append(self._nodes[inv])\n                else:  # handle literal inputs\n                    name = f\"{node.name}_{ink}\"\n                    assert name in 
self._nodes, f\"{name} has not been properly handled\"\n                    inputs.append(self._nodes[name])\n\n        converted = self._get_relay_op_call(node.name, inputs, node.attribs)\n        converted = self._bb.normalize(converted)\n\n        if not isinstance(converted.struct_info, relax.TupleStructInfo):\n            outputs_num = 1\n        else:\n            outputs_num = len(converted.struct_info.fields)\n\n        if outputs_num == 1:\n            # check if the singular ret val is a list of only one element\n            ret_val = list(node.outputs.values())[0]\n            if isinstance(ret_val, list):\n                self._nodes[ret_val[0]] = converted\n            else:\n                self._nodes[ret_val] = converted\n        else:\n            for i, out in zip(range(outputs_num), node.outputs[\"values\"]):\n                self._nodes[out] = converted[i]\n\n    def _set_const(self, node):\n        \"\"\"Create a tvm.relay.Constant from a nnef constant tensor\"\"\"\n        name = node.outputs[\"output\"]\n        data = node.attribs[\"value\"]\n        shape = node.attribs[\"shape\"]\n        if len(data) == 1:\n            data = np.full(shape, data, dtype=get_type(node.dtype))\n        else:\n            data = np.array(data, dtype=get_type(node.dtype))\n        self._consts[name] = tvm_expr.const(data)\n        self._nodes[name] = self._consts[name]\n\n    def _set_variable(self, tensor):\n        \"\"\"Create a tvm.relay.Var (or Constant) from a nnef variable tensor\"\"\"\n        tens_data = tensor.data\n        if not self._keep_params_in_input:\n            self._consts[tensor.name] = tvm_expr.const(tens_data)\n            self._nodes[tensor.name] = self._consts[tensor.name]\n        else:\n            var = self._new_var(tensor.name, shape=tensor.shape, dtype=get_type(tensor.dtype))\n            self._nodes[tensor.name] = var\n            self._params[tensor.name] = (var, tvm.nd.array(tens_data))\n\n    def _set_literal_inputs(self, 
node):\n        \"\"\"Checks if node has literal inputs and saves them as relax constants,\n        named as {node.name}_{input field name}, with an _{index} suffix for list elements\"\"\"\n        for field_name, value in node.inputs.items():\n            if isinstance(value, list):\n                for i, v in enumerate(value):\n                    if v not in self._nodes.keys():\n                        # name must match the lookup in _set_operator: {node.name}_{field}_{index}\n                        self._nodes[f\"{node.name}_{field_name}_{i}\"] = tvm_expr.const(v)\n\n            else:\n                if value not in self._nodes.keys():\n                    self._nodes[f\"{node.name}_{field_name}\"] = tvm_expr.const(value)\n\n    def _get_relay_op_call(self, name, inputs, attrs):\n        \"\"\"Returns the tvm.Call equivalent to the nnef operator\"\"\"\n        conv_map = _get_converter_map()\n        if name in conv_map:\n            call = conv_map[name](self._bb, *inputs, **attrs)\n        else:\n            # This error is reached if NNEF is expanded with additional ops\n            raise NotImplementedError(\n                f\"Operator {name} is not implemented, as {name} has been added after 1.0.5.\"\n            )\n        return call\n\n    def _infer_type(self, val):\n        if isinstance(val, bool):\n            return \"bool\", True\n        if isinstance(val, float):\n            return \"float32\", True\n        if isinstance(val, int):\n            return \"int32\", True\n        if isinstance(val, str):\n            # the string vals can be names of nodes in some of the cases\n            if isinstance(val, nnef.Identifier):\n                if val in self._nodes.keys():\n                    node = self._nodes[val]\n                    if isinstance(node, tvm_expr.Var):\n                        # relax Vars carry their dtype in struct_info, not type_annotation\n                        return node.struct_info.dtype, False\n                    if isinstance(node, tvm_expr.Constant):\n                        return node.data.dtype, False\n                    if isinstance(node, tvm_expr.Call):\n                        return node.checked_type.dtype, False\n                raise 
Exception(\n                    f\"{val} has not been loaded into the model \"\n                    \"but it should have been, as a var or call.\"\n                )\n            return \"string\", True\n\n        raise TypeError(f'Value \"{val}\" is not a recognized type')\n\n    def _new_var(self, name, shape, dtype=\"float32\"):\n        return relax.Var(\n            name_hint=name,\n            struct_info=relax.TensorStructInfo(shape=shape, dtype=dtype),\n        )\n\n\ndef from_nnef(\n    model: typing.Union[str, os.PathLike, nnef.Graph],\n    keep_params_in_input: bool = False,\n) -> IRModule:\n    \"\"\"\n    Convert an NNEF model into an equivalent TVM Relax IRModule.\n\n    Parameters\n    ----------\n    model : os.PathLike or str or nnef.Graph\n        Path to an NNEF model directory, containing the graph.nnef (and weight files)\n\n    keep_params_in_input : bool, optional\n        If this parameter is false, the nnef variables will be converted to\n        constants and embedded into the relax model, allowing optimizations\n        at compile time.\n        If True, the params will have to be added as inputs;\n        the model can't load them automatically\n\n    Returns\n    -------\n    mod : tvm.IRModule\n        The Relax module for compilation; when keep_params_in_input is True,\n        the parameter values are attached as the \"params\" attribute of the\n        main function\n    \"\"\"\n    converter = NNEFConverter(keep_params_in_input)\n\n    if not isinstance(model, nnef.Graph):\n        model = nnef.load_graph(model)\n\n    # fills in the nnef graph's shape information\n    nnef.infer_shapes(model)\n\n    return converter.from_nnef(graph=model)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/relax/nnef_ops.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"NNEF frontend converter helper funcs and ops\"\"\"\nimport math\n\nimport itertools\nfrom functools import reduce\n\nimport numpy as np\n\nimport tvm\nfrom tvm import relax\nfrom tvm.relax import expr as tvm_expr\nfrom tvm.relax import op as tvm_op\nfrom tvm import topi\n\n\n# Base methods\n\n\ndef dimension_picker(prefix, kernel_shape, suffix=\"\"):\n    \"\"\"\n    Returns the correct name for the n-dimensional operator, based on the \"kernel_shape\" attribute.\n    E.g. call: dimension_picker(op_name)(attr)\n\n    :param prefix: the name of the operator (e.g. 
conv)\n    :param kernel_shape: shape of the tensor to fit the operation\n    :param suffix: optional suffix for ops\n    :return: \"prefix`n`d\" where n is the correct dimension for the kernel\n    \"\"\"\n\n    rank = len(kernel_shape[2:])\n    if rank == 1:\n        return prefix + \"1d\" + suffix\n    if rank == 2:\n        return prefix + \"2d\" + suffix\n    if rank == 3:\n        return prefix + \"3d\" + suffix\n    op_name = prefix + \"1d/2d/3d\"\n    msg = f\"Only 1D, 2D, and 3D kernels are supported for operator {op_name}.\"\n    raise tvm.error.OpAttributeInvalid(msg)\n\n\ndef _size_conv(size, rank):\n    # a window given only over the spatial (DH)W dims is only possible when it\n    # is checked at the call site, as the alternative solutions require\n    if rank == 3:\n        if len(size) == 1:\n            return size\n        if len(size) == 3:\n            assert (\n                size[0] == 1 and size[1] == 1\n            ), \"Incorrect window dimensions, first two dimensions must be 1\"\n            return size[2:]\n    if rank == 4:\n        if len(size) == 2:\n            return size\n        if len(size) == 4:\n            assert (\n                size[0] == 1 and size[1] == 1\n            ), \"Incorrect window dimensions, first two dimensions must be 1\"\n            return size[2:]\n    if rank == 5:\n        if len(size) == 3:\n            return size\n        if len(size) == 5:\n            assert (\n                size[0] == 1 and size[1] == 1\n            ), \"Incorrect window dimensions, first two dimensions must be 1\"\n            return size[2:]\n\n    raise ValueError(f\"Unexpected window size, got {len(size)}\")\n\n\ndef _stride_conv(stride, rank):\n    if rank == 3:\n        # {conv style} :: [s] -> [s]\n        if len(stride) == 1:\n            return stride\n        # {pool style} :: [N, C, s] -> asrt N,C == 1; [s]\n        if len(stride) == 3:\n            assert (\n                stride[0] == 1 and stride[1] == 1\n            ), \"Not supported stride 
dimensions, first two dimensions must be 1\"\n            return stride[2:]\n    if rank == 4:\n        # {conv style} :: [sh, sw] -> [sh, sw]\n        if len(stride) == 2:\n            return stride\n        # {pool style} :: [N, C, sh, sw] -> asrt N,C == 1; [sh, sw]\n        if len(stride) == 4:\n            assert (\n                stride[0] == 1 and stride[1] == 1\n            ), \"Not supported stride dimensions, first two dimensions must be 1\"\n            return stride[2:]\n    if rank == 5:\n        # {conv style} :: [sd, sh, sw] -> [sd, sh, sw]\n        if len(stride) == 3:\n            return stride\n        # {pool style} :: [N, C, sd, sh, sw] -> asrt N,C == 1; [sd, sh, sw]\n        if len(stride) == 5:\n            assert (\n                stride[0] == 1 and stride[1] == 1\n            ), \"Not supported stride dimensions, first two dimensions must be 1\"\n            return stride[2:]\n    raise ValueError(f\"Unexpected stride in {rank - 2}D, got {len(stride)}: {stride}\")\n\n\ndef _padding_conv(padding, rank, keepdims=False):\n    if isinstance(padding[0], (tuple, list)):\n        # 1D\n        if rank == 3:\n            # {conv style} :: [(l,r)] -> (l,r)\n            if len(padding) == 1:\n                return padding[0]\n            if len(padding) == 3:\n                # {pool style} :: [(batch),(channel),(l,r)] -> asrt N,C == 0, (l,r)\n                if not keepdims:\n                    assert padding[0] == (0, 0) and padding[1] == (0, 0), (\n                        \"Incorrect padding. 
\" \"Padding on C,I dimensions not supported\"\n                    )\n                    return padding[2]\n                # {sliding window style} :: [(batch),(channel),(l,r)] -> [(batch),(channel),(l,r)]\n                else:\n                    return padding\n\n        # 2D\n\n        if rank == 4:\n            # {conv style} :: [(u,d),(l,r)] -> (u, l, d, r)\n            if len(padding) == 2:\n                # change UDLR to ULDR padding, LC is faster here\n                return [x[i] for i in [0, 1] for x in padding]\n\n            if len(padding) == 4:\n                # {pool style} :: [(batch size),(channel),(u,d),(l,r)] ->\n                #                  -> asrt N,C == 0, (u, l, d, r)\n                if not keepdims:\n                    assert padding[0] == (0, 0) and padding[1] == (0, 0), (\n                        \"Incorrect padding. \" \"Padding on C,I dimensions not supported\"\n                    )\n                    # itertools is faster than LC (slicing)\n                    return list(itertools.chain.from_iterable(zip(padding[2], padding[3])))\n                # {sliding window style} :: [(batch),(channel),(u,d),(l,r)] ->\n                #                            -> [(batch),(channel),(u,d),(l,r)]\n                else:\n                    return padding\n\n        # 3D\n\n        if rank == 5:\n            # {conv style} :: [(f,b),(u,d),(l,r)] -> (f, u, l, b, d, r)\n            if len(padding) == 3:\n                # LC is faster\n                return [x[i] for i in [0, 1] for x in padding]\n\n            if len(padding) == 5:\n                # {pool style} :: [(batch size),(channel),(f,b)(u,p),(l,r)] ->\n                #                  -> asrt N,C == 0, (f, u, l, b, d, r)\n                if not keepdims:\n                    assert padding[0] == (0, 0) and padding[1] == (0, 0), (\n                        \"Incorrect padding. 
\" \"Padding on C,I dimensions not supported\"\n                    )\n                    # itertools faster barely\n                    return list(\n                        itertools.chain.from_iterable(zip(padding[2], padding[3], padding[4]))\n                    )\n                # {s-w style} :: [(batch),(channel),(f,b),(u,d),(l,r)] ->\n                #                 -> [(batch),(channel),(f,b),(u,d),(l,r)]\n                else:\n                    return padding\n\n        raise ValueError(\n            f\"Incorrect padding style for {rank - 2}D operand. Only length of {rank - 2}, {rank} \"\n            f\"supported, got {len(padding)}: {padding}\"\n        )\n\n    raise ValueError(\"nnef should not have singular padding\")\n\n\ndef _calculate_nnef_padding(active_shape, strides, kernel_shape, dilation):\n    \"\"\"Ordering of nnef autopad and tvm autopad are sometimes different,\n    this method calculates nnef like padding from dimensions\n\n    Parameters\n    ----------\n        active_shape\n            the data dimensions\n        strides\n            the strides over the active dimensions\n        kernel_shape\n            the shape of the window, must have the same rank as active shape\n        dilation\n            the dilations over the active dimensions\n    \"\"\"\n    output = [(ui + (s - 1)) // s for ui, s in zip(active_shape, strides)]\n    dilated = [(f - 1) * d + 1 for f, d in zip(kernel_shape, dilation)]\n    total = [\n        max(0, (di - 1) * s + df - ui)\n        for di, s, df, ui in zip(output, strides, dilated, active_shape)\n    ]\n    padding = [(pad // 2, (pad + 1) // 2) for pad in total]\n    return padding\n\n\ndef _calculate_nnef_padding_deconv(data_sh, strides, kernel_active_sh, dilation, output_shape):\n    out_sh = output_shape[2:] if output_shape else [ui * s for ui, s in zip(data_sh, strides)]\n    dilated = [(f - 1) * d + 1 for f, d in zip(kernel_active_sh[2:], dilation)]\n    total = [\n        max(0, (di - 1) * s + 
df - ui) for di, s, df, ui in zip(data_sh, strides, dilated, out_sh)\n    ]\n    return total, out_sh\n\n\ndef __unexpected_attrs(op, kwargs):\n    raise NotImplementedError(\n        f\"{op} received unexpected attribute(s), possibly due to mismatched versions. \"\n        \"Unexpected attribute(s): \" + \", \".join(f\"{k} := {v}\" for k, v in kwargs.items())\n    )\n\n\n# Conversion map, operator functions\n\n\ndef _get_converter_map():\n    return {  # Unary\n        \"copy\": copy_converter,  # arithmetic\n        \"neg\": neg_converter,\n        \"rcp\": rcp_converter,\n        \"exp\": exp_converter,\n        \"log\": log_converter,\n        \"sin\": sin_converter,\n        \"cos\": cos_converter,\n        \"tan\": tan_converter,\n        \"sinh\": sinh_converter,\n        \"cosh\": cosh_converter,\n        \"tanh\": tanh_converter,\n        \"asin\": asin_converter,\n        \"acos\": acos_converter,\n        \"atan\": atan_converter,\n        \"asinh\": asinh_converter,\n        \"acosh\": acosh_converter,\n        \"atanh\": atanh_converter,\n        \"abs\": abs_converter,\n        \"sign\": sign_converter,\n        \"not\": not_converter,  # logical\n        \"floor\": floor_converter,  # rounding\n        \"ceil\": ceil_converter,\n        \"round\": round_converter,\n        # Binary\n        \"add\": add_converter,  # arithmetic\n        \"sub\": sub_converter,\n        \"mul\": mul_converter,\n        \"div\": div_converter,\n        \"pow\": pow_converter,\n        \"lt\": lt_converter,  # comparison\n        \"gt\": gt_converter,\n        \"le\": le_converter,\n        \"ge\": ge_converter,\n        \"eq\": eq_converter,\n        \"ne\": ne_converter,\n        \"and\": and_converter,  # logical\n        \"or\": or_converter,\n        # select\n        \"select\": select_converter,\n        # simplifier\n        \"sqr\": sqr_converter,\n        \"sqrt\": sqrt_converter,\n        \"rsqr\": rsqr_converter,\n        \"rsqrt\": rsqrt_converter,\n        
\"log2\": log2_converter,\n        \"min\": min_converter,\n        \"max\": max_converter,\n        \"clamp\": clamp_converter,\n        # sliding-window\n        \"conv\": conv_converter,\n        \"deconv\": deconv_converter,\n        \"box\": box_converter,\n        \"debox\": debox_converter,\n        \"argmax_pool\": ndop,\n        \"sample\": ndop,\n        \"desample\": ndop,\n        \"nearest_downsample\": nearest_downsample_converter,\n        \"area_downsample\": area_downsample_converter,\n        \"nearest_upsample\": nearest_upsample_converter,\n        \"multilinear_upsample\": multilinear_upsample_converter,\n        # reduce\n        \"sum_reduce\": sum_reduce_converter,\n        \"max_reduce\": max_reduce_converter,\n        \"min_reduce\": min_reduce_converter,\n        \"argmax_reduce\": argmax_reduce_converter,\n        \"argmin_reduce\": argmin_reduce_converter,\n        \"all_reduce\": all_reduce_converter,\n        \"any_reduce\": any_reduce_converter,\n        \"mean_reduce\": mean_reduce_converter,\n        # tensor shape\n        \"reshape\": reshape_converter,\n        \"squeeze\": squeeze_converter,\n        \"unsqueeze\": unsqueeze_converter,\n        \"transpose\": transpose_converter,\n        \"split\": split_converter,\n        \"concat\": concat_converter,\n        \"stack\": stack_converter,\n        \"unstack\": unstack_converter,\n        \"slice\": slice_converter,\n        \"pad\": pad_converter,\n        \"tile\": tile_converter,\n        # region-of-interest - not needed - not supported\n        \"avg_roi_pool\": ndop,\n        \"max_roi_pool\": ndop,\n        \"roi_resample\": ndop,\n        \"avg_roi_align\": ndop,\n        \"max_roi_align\": ndop,\n        # matrix multiplication\n        \"matmul\": matmul_converter,\n        # variables\n        \"update\": ndop,  # --- not used\n        # Compound\n        \"sigmoid\": sigmoid_converter,  # activation\n        \"relu\": relu_converter,\n        \"prelu\": 
prelu_converter,\n        \"leaky_relu\": leaky_relu_converter,\n        \"elu\": elu_converter,\n        \"selu\": selu_converter,\n        \"gelu\": gelu_converter,\n        \"silu\": silu_converter,\n        \"softmax\": softmax_converter,\n        \"softplus\": softplus_converter,\n        \"linear\": linear_converter,  # linear\n        \"separable_conv\": separable_conv_converter,\n        \"separable_deconv\": separable_deconv_converter,\n        \"max_pool_with_index\": ndop,  # pooling\n        \"max_pool\": max_pool_converter,\n        \"avg_pool\": avg_pool_converter,\n        \"rms_pool\": rms_pool_converter,\n        \"local_response_normalization\": local_response_normalization_converter,  # normalization\n        \"local_mean_normalization\": local_mean_normalization_converter,\n        \"local_variance_normalization\": local_variance_normalization_converter,\n        \"local_contrast_normalization\": local_contrast_normalization_converter,\n        \"l1_normalization\": l1_normalization_converter,\n        \"l2_normalization\": l2_normalization_converter,\n        \"batch_normalization\": batch_normalization_converter,\n        \"min_max_linear_quantize\": ndop,  # quantization\n        \"zero_point_linear_quantize\": ndop,\n        \"linear_quantize\": ndop,\n        \"logarithmic_quantize\": ndop,\n        # MISC\n        \"copy_n\": ndop,\n        \"add_n\": ndop,\n        \"moments\": ndop,\n    }\n\n\n# pylint: disable=unused-argument\n\n# not implemented ops\ndef ndop(*args, **kwargs):\n    raise NotImplementedError(\n        \"An unsupported operator was called, please check for compatibility\"\n    )\n\n\n#   # Unary ops\n\n\ndef copy_converter(bbuilder, data, **kwargs):\n    \"\"\"Copy converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"copy\", kwargs)\n\n    return bbuilder.emit_te(topi.identity, data)\n\n\ndef neg_converter(bbuilder, data, **kwargs):\n    \"\"\"Neg converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"neg\", kwargs)\n\n    return relax.op.unary.negative(data)\n\n\ndef rcp_converter(bbuilder, data, **kwargs):\n    \"\"\"Rcp converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rcp\", kwargs)\n\n    if isinstance(data, relax.Call):\n        d_type = data.checked_type.dtype\n    else:\n        d_type = data.struct_info.dtype\n\n    return div_converter(bbuilder, tvm_expr.const(1, dtype=d_type), data)\n\n\ndef exp_converter(bbuilder, data, **kwargs):\n    \"\"\"Exp converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"exp\", kwargs)\n\n    return relax.op.unary.exp(data)\n\n\ndef log_converter(bbuilder, data, **kwargs):\n    \"\"\"Log converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"log\", kwargs)\n\n    return relax.op.unary.log(data)\n\n\ndef sin_converter(bbuilder, data, **kwargs):\n    \"\"\"Sin converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sin\", kwargs)\n\n    return relax.op.unary.sin(data)\n\n\ndef cos_converter(bbuilder, data, **kwargs):\n    \"\"\"Cos converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"cos\", kwargs)\n\n    return relax.op.unary.cos(data)\n\n\ndef tan_converter(bbuilder, data, **kwargs):\n    \"\"\"Tan converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"tan\", kwargs)\n\n    return relax.op.unary.tan(data)\n\n\ndef sinh_converter(bbuilder, data, **kwargs):\n    \"\"\"Sinh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sinh\", kwargs)\n\n    return relax.op.unary.sinh(data)\n\n\ndef cosh_converter(bbuilder, data, **kwargs):\n    \"\"\"Cosh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"cosh\", kwargs)\n\n    return relax.op.unary.cosh(data)\n\n\ndef tanh_converter(bbuilder, data, **kwargs):\n    \"\"\"Tanh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"tanh\", kwargs)\n\n    return relax.op.unary.tanh(data)\n\n\ndef asin_converter(bbuilder, data, **kwargs):\n    \"\"\"Asin converter\"\"\"\n    if kwargs:\n       
 __unexpected_attrs(\"asin\", kwargs)\n\n    return relax.op.unary.asin(data)\n\n\ndef acos_converter(bbuilder, data, **kwargs):\n    \"\"\"Acos converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"acos\", kwargs)\n\n    return relax.op.unary.acos(data)\n\n\ndef atan_converter(bbuilder, data, **kwargs):\n    \"\"\"Atan converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"atan\", kwargs)\n\n    return relax.op.unary.atan(data)\n\n\ndef asinh_converter(bbuilder, data, **kwargs):\n    \"\"\"Asinh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"asinh\", kwargs)\n\n    return relax.op.unary.asinh(data)\n\n\ndef acosh_converter(bbuilder, data, **kwargs):\n    \"\"\"Acosh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"acosh\", kwargs)\n\n    return relax.op.unary.acosh(data)\n\n\ndef atanh_converter(bbuilder, data, **kwargs):\n    \"\"\"Atanh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"atanh\", kwargs)\n\n    return relax.op.unary.atanh(data)\n\n\ndef abs_converter(bbuilder, data, **kwargs):\n    \"\"\"Abs converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"abs\", kwargs)\n\n    return relax.op.unary.abs(data)\n\n\ndef sign_converter(bbuilder, data, **kwargs):\n    \"\"\"Sign converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sign\", kwargs)\n\n    return relax.op.unary.sign(data)\n\n\ndef not_converter(bbuilder, data, **kwargs):\n    \"\"\"Not converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"not\", kwargs)\n\n    return relax.op.unary.logical_not(data)\n\n\ndef floor_converter(bbuilder, data, **kwargs):\n    \"\"\"Floor converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"floor\", kwargs)\n\n    return relax.op.unary.floor(data)\n\n\ndef ceil_converter(bbuilder, data, **kwargs):\n    \"\"\"Ceil converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"ceil\", kwargs)\n\n    return relax.op.unary.ceil(data)\n\n\ndef round_converter(bbuilder, data, **kwargs):\n   
 \"\"\"Round converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"round\", kwargs)\n\n    return relax.op.unary.round(data)\n\n\n#   # Binary ops\n\n\ndef add_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Add converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"add\", kwargs)\n\n    return relax.op.binary.add(lhs, rhs)\n\n\ndef sub_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Sub converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sub\", kwargs)\n\n    return relax.op.binary.subtract(lhs, rhs)\n\n\ndef mul_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Mul converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"mul\", kwargs)\n\n    lhs = bbuilder.normalize(lhs)\n    rhs = bbuilder.normalize(rhs)\n\n    l_ndim = len(lhs.struct_info.shape)\n    r_ndim = len(rhs.struct_info.shape)\n\n    if l_ndim > r_ndim > 0:\n        rhs = relax.op.expand_dims(rhs, [d + 2 for d in range(l_ndim - r_ndim)])\n    if r_ndim > l_ndim > 0:\n        lhs = relax.op.expand_dims(lhs, [d + 2 for d in range(r_ndim - l_ndim)])\n\n    return relax.op.binary.multiply(lhs, rhs)\n\n\ndef div_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Div converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"div\", kwargs)\n\n    return relax.op.binary.divide(lhs, rhs)\n\n\ndef pow_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Pow converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"pow\", kwargs)\n\n    return relax.op.binary.power(lhs, rhs)\n\n\ndef lt_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Lt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"lt\", kwargs)\n\n    return relax.op.binary.less(lhs, rhs)\n\n\ndef gt_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Gt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"gt\", kwargs)\n\n    return relax.op.binary.greater(lhs, rhs)\n\n\ndef le_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Le converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"le\", kwargs)\n\n    return relax.op.binary.less_equal(lhs, rhs)\n\n\ndef ge_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Ge converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"ge\", kwargs)\n\n    return relax.op.binary.greater_equal(lhs, rhs)\n\n\ndef eq_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Eq converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"eq\", kwargs)\n\n    return relax.op.binary.equal(lhs, rhs)\n\n\ndef ne_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Ne converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"ne\", kwargs)\n\n    return relax.op.binary.not_equal(lhs, rhs)\n\n\ndef and_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"And converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"and\", kwargs)\n\n    return relax.op.binary.logical_and(lhs, rhs)\n\n\ndef or_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Or converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"or\", kwargs)\n\n    return relax.op.binary.logical_or(lhs, rhs)\n\n\n#   # Select op\n\n\ndef select_converter(bbuilder, condition, t_val, f_val, **kwargs):\n    \"\"\"Select converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"select\", kwargs)\n\n    return relax.op.where(condition, t_val, f_val)\n\n\n#   # Simplifier ops\n\n\ndef sqr_converter(bbuilder, data, **kwargs):\n    \"\"\"sqr converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sqr\", kwargs)\n\n    d_type = data.struct_info.dtype\n\n    return pow_converter(bbuilder, data, tvm_expr.const(2.0, dtype=d_type))\n\n\ndef sqrt_converter(bbuilder, data, **kwargs):\n    \"\"\"sqrt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sqrt\", kwargs)\n\n    return relax.op.unary.sqrt(data)\n\n\ndef rsqr_converter(bbuilder, data, **kwargs):\n    \"\"\"rsqr converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rsqr\", kwargs)\n\n    if isinstance(data, relax.Call):\n        d_type = 
data.checked_type.dtype\n    else:\n        d_type = data.struct_info.dtype\n\n    return pow_converter(bbuilder, data, tvm_expr.const(-2.0, dtype=d_type))\n\n\ndef rsqrt_converter(bbuilder, data, **kwargs):\n    \"\"\"rsqrt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rsqrt\", kwargs)\n\n    return relax.op.unary.rsqrt(data)\n\n\ndef log2_converter(bbuilder, data, **kwargs):\n    \"\"\"log2 converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"log2\", kwargs)\n\n    # no equivalent in Relax, using TOPI\n    return bbuilder.emit_te(topi.log2, data)\n\n\ndef min_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Min converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"min\", kwargs)\n\n    return relax.op.binary.minimum(lhs, rhs)\n\n\ndef max_converter(bbuilder, lhs, rhs, **kwargs):\n    \"\"\"Max converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"max\", kwargs)\n\n    return relax.op.binary.maximum(lhs, rhs)\n\n\ndef clamp_converter(bbuilder, x, a, b, **kwargs):\n    \"\"\"Clamp converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"clamp\", kwargs)\n\n    # only works if b and a are Constant floats, not tensors\n    if isinstance(a, tvm_expr.Constant) and isinstance(b, tvm_expr.Constant):\n        return relax.op.clip(\n            x, tvm_expr.PrimValue(a.data.numpy().item()), tvm_expr.PrimValue(b.data.numpy().item())\n        )\n\n    return max_converter(bbuilder, min_converter(bbuilder, x, b), a)\n\n\n#   # Sliding-window ops\n\n\ndef conv_converter(\n    bbuilder, data, kernel, bias, border, stride, padding, dilation, groups, **kwargs\n):\n    \"\"\"Convolution converter;\n    skips bias if it is 0.0 (no bias)\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"conv\", kwargs)\n\n    if border != \"constant\":\n        print(f\"Currently {border} border is not supported, falling back to `constant` border\")\n\n    kernel_shape = [v.value for v in kernel.struct_info.shape.values]\n    dshape = [v.value for v in 
data.struct_info.shape.values]\n\n    if hasattr(data.struct_info, \"ndim\"):\n        ndim = data.struct_info.ndim\n    else:\n        ndim = len(data.struct_info.shape)\n\n    strides = _stride_conv(stride, ndim) if stride else (1,) * (ndim - 2)\n\n    dilation = _stride_conv(dilation, ndim) if dilation else (1,) * (ndim - 2)\n\n    if not padding:\n        padding = _calculate_nnef_padding(dshape[2:], strides, kernel_shape[2:], dilation)\n\n    pad = _padding_conv(padding, ndim)\n\n    channels = kernel_shape[0]\n\n    if groups == 0:\n        groups = channels\n\n    if ndim == 3:\n        op = relax.op.nn.conv1d\n    elif ndim == 4:\n        op = relax.op.nn.conv2d\n    elif ndim == 5:\n        op = relax.op.nn.conv3d\n    else:\n        raise NotImplementedError(\"Ndim > 5 not supported for convolution.\")\n\n    conv_out = op(\n        data=data,\n        weight=kernel,\n        strides=strides,\n        padding=pad,\n        dilation=dilation,\n        groups=groups,\n    )\n\n    res = None\n    if isinstance(bias, tvm_expr.Constant):\n        # nnef has bias of 0 if it is not needed\n        if (bias.data.numpy() == 0).all():\n            res = conv_out\n\n    if res is None:\n        bias = relax.op.reshape(\n            bias,\n            [1, -1]\n            + [\n                1,\n            ]\n            * (ndim - 2),\n        )\n        res = relax.op.add(conv_out, bias)\n\n    return res\n\n\ndef deconv_converter(\n    bbuilder, data, kernel, bias, border, stride, padding, dilation, output_shape, groups, **kwargs\n):\n    \"\"\"Deconvolution converter using convXd_transpose;\n    skips bias if it is 0.0 (no bias)\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"deconv\", kwargs)\n\n    if border != \"constant\":\n        print(f\"Currently {border} border is not supported, falling back to `constant` border\")\n\n    kernel_shape = [v.value for v in kernel.struct_info.shape.values]\n\n    rank = len(kernel_shape)\n\n    strides = _stride_conv(stride, rank) if 
stride else (1,) * (rank - 2)\n\n    dilation = _stride_conv(dilation, rank) if dilation else (1,) * (rank - 2)\n\n    total, out_sh = _calculate_nnef_padding_deconv(\n        [v.value for v in data.struct_info.shape.values],\n        strides,\n        kernel_shape,\n        dilation,\n        output_shape,\n    )\n\n    if padding:\n        pad = _padding_conv(padding, rank)\n    else:\n        pad = _padding_conv([(pad // 2, (pad + 1) // 2) for pad in total], rank)\n\n    if groups == 0:\n        groups = kernel_shape[0]\n\n    # limit output padding to modulo stride because of tvm checks\n    out_pad = (\n        [(x - (y - t)) % s for x, y, t, s in zip(output_shape[2:], out_sh, total, stride)]\n        if output_shape\n        else (0, 0)\n    )\n\n    if rank == 3:\n        op = relax.op.nn.conv1d_transpose\n    elif rank == 4:\n        op = relax.op.nn.conv2d_transpose\n    else:\n        raise NotImplementedError(\"Ndim > 4 not supported for deconvolution. 3D WIP.\")\n\n    deconv_out = op(\n        data=data,\n        weight=kernel,\n        strides=strides,\n        padding=pad,\n        dilation=dilation,\n        groups=groups,\n        output_padding=out_pad,\n    )\n\n    res = None\n    if isinstance(bias, tvm_expr.Constant):\n        if (bias.data.numpy() == 0).all():\n            res = deconv_out\n\n    if not res:\n        bias = relax.op.reshape(\n            bias,\n            [1, -1]\n            + [\n                1,\n            ]\n            * (rank - 2),\n        )\n        res = relax.op.add(deconv_out, bias)\n\n    return res\n\n\ndef box_converter(bbuilder, data, size, border, padding, stride, dilation, normalize, **kwargs):\n    \"\"\"Box operator converter,\n    summation over sliding window, equal to conv with constant filter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"box\", kwargs)\n\n    dshape = [v.value for v in data.struct_info.shape.values]\n\n    d_type = data.struct_info.dtype\n\n    if size[:2] == [1, 1]:\n        
size[0] = dshape[1]\n        if normalize:\n            kernel = relax.op.full(size, relax.const(1 / math.prod(size[2:]), d_type), d_type)\n        else:\n            kernel = relax.op.ones(size, d_type)\n\n        kernel = bbuilder.normalize(kernel)\n\n        out = conv_converter(\n            bbuilder,\n            data,\n            kernel,\n            tvm_expr.const(0, dtype=d_type),\n            border,\n            stride,\n            padding,\n            dilation,\n            dshape[1],\n        )\n    else:\n        # if boxing on channel or batch dims avg pool can solve with permute\n        # we need permute indexes with inactive shape + active shape format, so active at the back\n\n        def _apply_permutation(items, perm):\n            return [items[ind] for ind in perm]\n\n        inactive = [i for i, s in enumerate(size) if s == 1]\n        active = [i for i, s in enumerate(size) if s != 1]\n        permuted_ins = inactive + active\n        inverse = [0] * len(permuted_ins)\n        for i, p in enumerate(permuted_ins):\n            inverse[p] = i\n\n        data = relax.op.permute_dims(data, permuted_ins)\n        size = _apply_permutation(size, permuted_ins)\n\n        data = bbuilder.normalize(data)\n\n        out = avg_pool_converter(\n            bbuilder, data, size[2:], border, padding, stride[2:], dilation[2:]\n        )\n\n        out = relax.op.permute_dims(out, inverse)\n\n        if not normalize:\n            out = bbuilder.normalize(out)\n            out = mul_converter(\n                bbuilder, out, tvm_expr.const(math.prod(size), dtype=out.struct_info.dtype)\n            )\n\n    return out\n\n\ndef debox_converter(\n    bbuilder, data, size, border, padding, stride, dilation, normalize, output_shape, **kwargs\n):\n    \"\"\"Debox operator converter,\n    inverse of box, equal to deconv with constant filter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"debox\", kwargs)\n\n    dshape = [v.value for v in 
data.struct_info.shape.values]\n\n    if isinstance(data, relax.Call):\n        d_type = data.checked_type.dtype\n    else:\n        d_type = data.struct_info.dtype\n\n    size[0] = dshape[1]\n    if normalize:\n        # relax.op.full expects the shape first, then the fill value (as in box_converter)\n        kernel = relax.op.full(size, relax.const(1 / math.prod(size[2:]), d_type), d_type)\n    else:\n        kernel = relax.op.ones(size, d_type)\n\n    kernel = bbuilder.normalize(kernel)\n\n    out = deconv_converter(\n        bbuilder,\n        data,\n        kernel,\n        tvm_expr.const(0, dtype=d_type),\n        border,\n        stride,\n        padding,\n        dilation,\n        output_shape,\n        groups=dshape[1],\n    )\n    return out\n\n\ndef nearest_downsample_converter(bbuilder, data, factor, **kwargs):\n    \"\"\"Nearest neighbour downsample converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"nearest_downsample\", kwargs)\n\n    dims = 2 + len(factor)\n\n    return box_converter(\n        bbuilder,\n        data,\n        size=[1] * dims,\n        border=\"constant\",\n        padding=[(0, 0)] * dims,\n        stride=[1, 1] + factor,\n        dilation=(1,) * (dims - 2),\n        normalize=False,\n    )\n\n\ndef area_downsample_converter(bbuilder, data, factor, **kwargs):\n    \"\"\"Area downsample converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"area_downsample\", kwargs)\n\n    dims = 2 + len(factor)\n\n    return box_converter(\n        bbuilder,\n        data,\n        size=[1, 1] + factor,\n        border=\"constant\",\n        padding=[(0, 0)] * dims,\n        stride=[1, 1] + factor,\n        dilation=(1,) * (dims - 2),\n        normalize=True,\n    )\n\n\ndef nearest_upsample_converter(bbuilder, data, factor, **kwargs):\n    \"\"\"Nearest neighbour upsample converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"nearest_upsample\", kwargs)\n\n    dshape = [v.value for v in data.struct_info.shape.values]\n    new_size = [d * f for d, f in zip(dshape[2:], factor)]\n\n    ndims = len(dshape)\n\n    if 
ndims == 3:\n        op = topi.image.resize1d\n    if ndims == 4:\n        op = topi.image.resize2d\n    if ndims == 5:\n        op = topi.image.resize3d\n\n    return bbuilder.emit_te(\n        op,\n        data,\n        [\n            0,\n        ]\n        * ndims,  # dummy value so typecheck goes through, roi is not used\n        new_size,\n        method=\"nearest_neighbor\",\n        rounding_method=\"round\",\n    )\n\n\ndef multilinear_upsample_converter(bbuilder, data, factor, method, border, **kwargs):\n    \"\"\"Multilinear upsampling converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"linear_upsample\", kwargs)\n\n    # for aligned and symmetric replicate resize can be used\n    dshape = [v.value for v in data.struct_info.shape.values]\n    ndims = len(dshape)\n\n    if ndims == 3:\n        op = topi.image.resize1d\n    if ndims == 4:\n        op = topi.image.resize2d\n    if ndims == 5:\n        op = topi.image.resize3d\n\n    new_size = [d * f for d, f in zip(dshape[2:], factor)]\n    if method == \"aligned\":\n        # conversion from nn.upsampling to image.resizexd, re: discuss:11650\n        return bbuilder.emit_te(\n            op,\n            data,\n            [\n                0,\n            ]\n            * ndims,  # dummy value so typecheck goes through, roi is not used\n            new_size,\n            method=\"linear\",\n            coordinate_transformation_mode=\"align_corners\",\n        )\n    if method == \"symmetric\" and border == \"replicate\":\n        return bbuilder.emit_te(\n            op,\n            data,\n            [\n                0,\n            ]\n            * ndims,  # dummy value so typecheck goes through, roi is not used\n            new_size,\n            method=\"linear\",\n            coordinate_transformation_mode=\"half_pixel\",\n        )\n\n    # other combinations need to be calculated with convolution\n    def _upsample_weights_1d(fact, symm):\n        if symm:\n            _weights = [1 
- (i + 0.5) / fact for i in range(fact)]\n            _weights = list(reversed(_weights)) + _weights\n        else:\n            _weights = [1 - abs(i) / float(fact) for i in range(-fact + 1, fact)]\n        return np.array(_weights)\n\n    def _upsample_weights_nd(fact, symm):\n        _weights = [_upsample_weights_1d(f, symm) for f in fact]\n        return reduce(np.multiply, np.ix_(*_weights))\n\n    n, c = dshape[:2]\n\n    symmetric = method == \"symmetric\"\n    weights = _upsample_weights_nd(factor, symmetric)\n    weights = np.reshape(weights, newshape=(1, 1) + weights.shape)\n    kernel = tile_converter(bbuilder, tvm_expr.const(weights), (c, 1) + (1,) * len(factor))\n    kernel = bbuilder.normalize(kernel)\n\n    output_shape = [n, c] + [f * s for f, s in zip(factor, dshape[2:])]\n\n    if symmetric:\n        return deconv_converter(\n            bbuilder,\n            data,\n            kernel,\n            tvm_expr.const(0.0),\n            border=\"constant\",\n            stride=factor,\n            padding=[(f - 1, f - 1) for f in factor],\n            dilation=[],\n            groups=c,\n            output_shape=output_shape,\n        )\n    else:\n        replicate = border == \"replicate\"\n        if replicate:\n            data = pad_converter(\n                bbuilder,\n                data,\n                [(0, 0), (0, 0)] + [(1, 0)] * len(factor),\n                border,\n                tvm_expr.const(0.0),\n            )\n            data = bbuilder.normalize(data)\n            padding = factor\n        else:\n            padding = [f // 2 for f in factor]\n\n        return deconv_converter(\n            bbuilder,\n            data,\n            kernel,\n            tvm_expr.const(0.0),\n            border=\"constant\",\n            stride=factor,\n            padding=[(p, p - 1) for p in padding],\n            dilation=[],\n            groups=c,\n            output_shape=output_shape,\n        )\n\n\n#   # Reduce ops\n\n\ndef 
sum_reduce_converter(bbuilder, data, axes, normalize, keepdims=True, **kwargs):\n    \"\"\"Sum reduce converter\"\"\"\n\n    if kwargs:\n        __unexpected_attrs(\"sum_reduce\", kwargs)\n\n    out = relax.op.sum(data, axes, keepdims=keepdims)\n    if normalize:\n        return l2_normalization_converter(bbuilder, out, [x - 2 for x in axes], 0, 0.0)\n    return out\n\n\ndef max_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"Max reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"max_reduce\", kwargs)\n\n    return relax.op.max(data, axes, keepdims=keepdims)\n\n\ndef min_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"Min reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"min_reduce\", kwargs)\n\n    return relax.op.min(data, axes, keepdims=keepdims)\n\n\ndef argmax_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"Argmax reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"argmax_reduce\", kwargs)\n\n    # relax.op.argmax only supports a single axis, using TOPI\n    return bbuilder.emit_te(topi.argmax, data, axes, keepdims=keepdims)\n\n\ndef argmin_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"Argmin reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"argmin_reduce\", kwargs)\n\n    # relax.op.argmin only supports a single axis, using TOPI\n    return bbuilder.emit_te(topi.argmin, data, axes, keepdims=keepdims)\n\n\ndef all_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"All reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"all_reduce\", kwargs)\n\n    # no equivalent in Relax, using TOPI\n    return bbuilder.emit_te(topi.all, data, axes, keepdims)\n\n\ndef any_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"Any reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"any_reduce\", kwargs)\n\n    # no 
equivalent in Relax, using TOPI\n    return bbuilder.emit_te(topi.any, data, axes, keepdims)\n\n\ndef mean_reduce_converter(bbuilder, data, axes, keepdims=True, **kwargs):\n    \"\"\"Mean reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"mean_reduce\", kwargs)\n\n    return relax.op.mean(data, axes, keepdims=keepdims)\n\n\n#   # Tensor shape ops\n\n\ndef reshape_converter(bbuilder, data, shape, axis_start, axis_count, **kwargs):\n    \"\"\"Reshape converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"reshape\", kwargs)\n\n    dshape = [v.value for v in data.struct_info.shape.values]\n    if axis_count == -1:\n        newshape = dshape[:axis_start] + shape\n    else:\n        newshape = dshape\n        newshape[axis_start : axis_start + axis_count] = shape\n\n    return relax.op.reshape(data, newshape)\n\n\ndef squeeze_converter(bbuilder, data, axes, **kwargs):\n    \"\"\"Squeeze converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"squeeze\", kwargs)\n\n    return relax.op.squeeze(data, axes)\n\n\ndef unsqueeze_converter(bbuilder, data, axes, **kwargs):\n    \"\"\"Unsqueeze converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"unsqueeze\", kwargs)\n\n    axes = sorted(axes)\n    for axis in axes:\n        if axis < 0 and isinstance(data, tvm_expr.Var):\n            # relax Vars carry struct_info instead of relay's type_annotation\n            axis = data.struct_info.ndim + len(axes) + axis\n\n        data = relax.op.expand_dims(data, axis=axis)\n    return data\n\n\ndef transpose_converter(bbuilder, data, axes, **kwargs):\n    \"\"\"Transpose converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"transpose\", kwargs)\n\n    return relax.op.permute_dims(data, axes)\n\n\ndef split_converter(bbuilder, data, axis, ratios, **kwargs):\n    \"\"\"Split converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"split\", kwargs)\n\n    axis_len = [v.value for v in data.struct_info.shape.values][axis]\n    rat_mul = axis_len / sum(ratios)\n    ratio_list = [(r * rat_mul) for r in 
ratios]\n\n    s = 0\n    indices = []\n    for rat in ratio_list[:-1]:\n        s += rat\n        # Strictly needs int\n        indices.append(int(s))\n\n    return relax.op.split(data, indices, axis)\n\n\ndef concat_converter(bbuilder, *data, axis, **kwargs):\n    \"\"\"Concat converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"concat\", kwargs)\n\n    return relax.op.concat(data, axis)\n\n\ndef stack_converter(bbuilder, *data, axis, **kwargs):\n    \"\"\"Stack converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"stack\", kwargs)\n\n    data = [relax.op.expand_dims(d, axis) for d in data]\n\n    return relax.op.concat(data, axis)\n\n\ndef unstack_converter(bbuilder, data, axis, **kwargs):\n    \"\"\"Unstack converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"unstack\", kwargs)\n\n    split = split_converter(\n        bbuilder, data, axis, [1] * [v.value for v in data.struct_info.shape.values][axis]\n    )\n    split = bbuilder.normalize(split)\n    res = []\n\n    for i in range(len(split.struct_info.fields)):\n        res.append(squeeze_converter(bbuilder, split[i], axis))\n    return relax.Tuple(res)\n\n\ndef slice_converter(bbuilder, data, axes, begin, end, stride, **kwargs):\n    \"\"\"Slice converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"slice\", kwargs)\n\n    if not stride:\n        stride = [1] * len(axes)\n\n    return relax.op.strided_slice(data, begin=begin, end=end, strides=stride, axes=axes)\n\n\ndef pad_converter(bbuilder, data, padding, border, value, **kwargs):\n    \"\"\"Pad converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"pad\", kwargs)\n\n    if border not in [\"constant\", \"replicate\", \"reflect\"]:\n        print(f\"{border} border type is not supported in padding. 
Assuming constant\")\n        border = \"constant\"\n    if border == \"replicate\":\n        border = \"edge\"\n\n    # padding can only be tuple<int> even though docs say tuple<tuple<int>>\n    pad = sum(padding, ())\n    pad_before, pad_after = zip(*padding)\n\n    # reflect can only work with TOPI mirror_pad\n    if border == \"reflect\":\n        return bbuilder.emit_te(tvm.topi.nn.mirror_pad, data, pad_before, pad_after, \"REFLECT\")\n    if border == \"edge\":\n        raise tvm.error.OpNotImplemented(\n            \"Replicate - Edge mode is currently not supported in TVM relax\"\n        )\n\n    # constant works with normal relax.nn.pad\n    return relax.op.nn.pad(data, pad, border, value)\n\n\ndef tile_converter(bbuilder, data, repeats, **kwargs):\n    \"\"\"Tile converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"tile\", kwargs)\n\n    return relax.op.tile(data, repeats)\n\n\n#   # Region-of-interest ops\n\n\n#   # Matrix multiplication\ndef matmul_converter(bbuilder, a, b, **kwargs):\n    \"\"\"Matmul converter\n    real signature: matmul_converter(a, b, transposeA, transposeB)\"\"\"\n\n    transpose_a = kwargs.pop(\"transposeA\")\n    transpose_b = kwargs.pop(\"transposeB\")\n    if kwargs:\n        __unexpected_attrs(\"matmul\", kwargs)\n\n    if transpose_a:\n        ndim = len(a.struct_info.shape.values)\n        axes = list(range(ndim - 2))\n        axes.append(ndim - 1)\n        axes.append(ndim - 2)\n        a = relax.op.permute_dims(a, axes)\n\n    if transpose_b:\n        ndim = len(b.struct_info.shape.values)\n        axes = list(range(ndim - 2))\n        axes.append(ndim - 1)\n        axes.append(ndim - 2)\n        b = relax.op.permute_dims(b, axes)\n\n    a = bbuilder.normalize(a)\n    b = bbuilder.normalize(b)\n\n    return relax.op.matmul(a, b)\n\n\n#   # Variable updates\n#   # Compound ops\n\n\ndef sigmoid_converter(bbuilder, data, **kwargs):\n    \"\"\"Sigmoid converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"sigmoid\", kwargs)\n\n    return relax.op.unary.sigmoid(data)\n\n\ndef relu_converter(bbuilder, data, **kwargs):\n    \"\"\"RELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"relu\", kwargs)\n\n    return relax.op.nn.relu(data)\n\n\ndef prelu_converter(bbuilder, data, alpha, **kwargs):\n    \"\"\"PRELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"prelu\", kwargs)\n\n    # prelu can't handle float vals but NNEF supports direct parameter, this is just in case\n    if isinstance(alpha, tvm_expr.Constant):\n        if alpha.data.numpy().size == 1:\n            return relax.op.nn.leakyrelu(data, alpha.data.numpy().item())\n\n    # alpha needs to be a tensor whose rank is the same as of data,\n    # and the only non 1 dim is the channel dims\n    axes = [\n        0,\n    ] + [a + 2 for a in range(data.struct_info.ndim - 2)]\n    alpha = relax.op.expand_dims(alpha, axes)\n\n    # using select for prelu\n    return select_converter(\n        bbuilder, data < tvm_expr.const(0.0), mul_converter(bbuilder, alpha, data), data\n    )\n\n\ndef leaky_relu_converter(bbuilder, data, alpha, **kwargs):\n    \"\"\"Leaky RELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"leaky_relu\", kwargs)\n\n    return relax.op.nn.leakyrelu(data, alpha)\n\n\ndef elu_converter(bbuilder, data, alpha, **kwargs):\n    \"\"\"ELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"elu\", kwargs)\n\n    return select_converter(\n        bbuilder,\n        lt_converter(bbuilder, data, tvm_expr.const(0.0)),\n        mul_converter(\n            bbuilder,\n            tvm_expr.const(alpha),\n            sub_converter(bbuilder, exp_converter(bbuilder, data), tvm_expr.const(1.0)),\n        ),\n        data,\n    )\n\n\ndef selu_converter(bbuilder, data, alpha, **kwargs):\n    \"\"\"SELU converter\n    True signature is selu_converter(data, alpha, lambda)\"\"\"\n    lambda_var = kwargs.pop(\"lambda\")\n\n    if kwargs:\n        
__unexpected_attrs(\"selu\", kwargs)\n\n    return mul_converter(\n        bbuilder,\n        tvm_expr.const(lambda_var),\n        select_converter(\n            bbuilder,\n            data < tvm_expr.const(0.0),\n            mul_converter(\n                bbuilder,\n                tvm_expr.const(alpha),\n                sub_converter(bbuilder, exp_converter(bbuilder, data), tvm_expr.const(1.0)),\n            ),\n            data,\n        ),\n    )\n\n\ndef gelu_converter(bbuilder, data, **kwargs):\n    \"\"\"GELU converter\n    NNEF definition for GELU:\n    the exact definition of GELU is x * Phi(x) where Phi(x) is the\n    CDF of the standard normal distribution, which can be approximated\n    for example by sigmoid(1.702 * x)\n\n    `mul_converter(data, sigmoid_converter(mul_converter(tvm_expr.const(1.702), data)))`\n\n    But in this case we use erf to calculate the normal CDF (same as the PyTorch GELU impl)\n    \"\"\"\n    if kwargs:\n        __unexpected_attrs(\"gelu\", kwargs)\n\n    return relax.op.nn.gelu(data)\n\n\ndef silu_converter(bbuilder, data, **kwargs):\n    \"\"\"SiLU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"silu\", kwargs)\n\n    return mul_converter(bbuilder, data, sigmoid_converter(bbuilder, data))\n\n\ndef softmax_converter(bbuilder, data, axes, **kwargs):\n    \"\"\"Softmax converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"softmax\", kwargs)\n\n    if len(axes) > 1:\n        print(\"Multiple axes are not supported; the operation is applied along the first axis in axes.\")\n    axis = axes[0]\n\n    return relax.op.nn.softmax(data, axis)\n\n\ndef softplus_converter(bbuilder, data, **kwargs):\n    \"\"\"Softplus converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"softplus\", kwargs)\n\n    return log_converter(\n        bbuilder, add_converter(bbuilder, exp_converter(bbuilder, data), tvm_expr.const(1.0))\n    )\n\n\n#   # linear ops\n\n\ndef linear_converter(bbuilder, data, _filter, bias, **kwargs):\n 
   \"\"\"Linear converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"linear\", kwargs)\n\n    out = matmul_converter(bbuilder, data, _filter, transposeA=False, transposeB=True)\n    out = bbuilder.normalize(out)\n    res = None\n\n    if isinstance(bias, tvm_expr.Constant):\n        if (bias.data.numpy() == 0).all():\n            res = out\n\n    if hasattr(data.struct_info, \"ndim\"):\n        ndim = data.struct_info.ndim\n    else:\n        ndim = len(data.struct_info.shape)\n\n    if not res:\n        bias = relax.op.reshape(\n            bias,\n            [1, -1]\n            + [\n                1,\n            ]\n            * (ndim - 2),\n        )\n        res = relax.op.add(out, bias)\n\n    return res\n\n\ndef separable_conv_converter(\n    bbuilder,\n    data,\n    plane_filter,\n    point_filter,\n    bias,\n    border,\n    padding,\n    stride,\n    dilation,\n    groups,\n    **kwargs,\n):\n    \"\"\"Separable convolution converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"separable_conv\", kwargs)\n\n    if isinstance(data, relax.Call):\n        d_type = data.checked_type.dtype\n    else:\n        d_type = data.struct_info.dtype\n\n    filtered = conv_converter(\n        bbuilder,\n        data,\n        plane_filter,\n        tvm_expr.const(0, dtype=d_type),\n        border,\n        stride,\n        padding,\n        dilation,\n        0,\n    )\n\n    filtered = bbuilder.normalize(filtered)\n\n    return conv_converter(bbuilder, filtered, point_filter, bias, \"constant\", [], [], [], groups)\n\n\ndef separable_deconv_converter(\n    bbuilder,\n    data,\n    plane_filter,\n    point_filter,\n    bias,\n    border,\n    padding,\n    stride,\n    dilation,\n    output_shape,\n    groups,\n    **kwargs,\n):\n    \"\"\"Separable deconvolution converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"separable_deconv\", kwargs)\n\n    if isinstance(data, relax.Call):\n        d_type = data.checked_type.dtype\n    else:\n       
 d_type = data.struct_info.dtype\n\n    filtered = deconv_converter(\n        bbuilder,\n        data,\n        point_filter,\n        tvm_expr.const(0, dtype=d_type),\n        \"constant\",\n        [],\n        [],\n        [],\n        [],\n        groups,\n    )\n\n    filtered = bbuilder.normalize(filtered)\n\n    return deconv_converter(\n        bbuilder, filtered, plane_filter, bias, border, stride, padding, dilation, output_shape, 0\n    )\n\n\ndef max_pool_converter(bbuilder, data, size, border, padding, stride, dilation, **kwargs):\n    \"\"\"Max pool converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"max_pool\", kwargs)\n\n    if border != \"constant\":\n        print(f\"Currently {border} border is not supported, using `constant` border\")\n\n    dshape = [v.value for v in data.struct_info.shape.values]\n    rank = len(dshape)\n\n    pool_size = _size_conv(size, rank)\n    strides = _stride_conv(stride, rank) if stride else (1,) * (rank - 2)\n\n    dilation = _stride_conv(dilation, rank) if dilation else (1,) * (rank - 2)\n\n    if not padding:\n        # padding is truncated to `conv style` (only active layers are present)\n        padding = _calculate_nnef_padding(dshape[2:], strides, pool_size, dilation)\n\n    pad = _padding_conv(padding, rank)\n\n    if border == \"constant\":\n        padding = [(0, 0), (0, 0)] + padding\n        data = pad_converter(bbuilder, data, padding, border, 0.0)\n        data = bbuilder.normalize(data)\n        pad = (0, 0)\n\n    if rank == 3:\n        op = relax.op.nn.max_pool1d\n    elif rank == 4:\n        op = relax.op.nn.max_pool2d\n    elif rank == 5:\n        op = relax.op.nn.max_pool3d\n    else:\n        raise NotImplementedError(\"Ndim > 5 not supported for max pool.\")\n\n    return op(\n        data,\n        pool_size=pool_size,\n        strides=strides,\n        dilation=dilation,\n        padding=pad,\n    )\n\n\ndef avg_pool_converter(bbuilder, data, size, border, padding, stride, dilation, 
**kwargs):\n    \"\"\"Avg pool converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"avg_pool\", kwargs)\n\n    if border not in [\"constant\", \"ignore\"]:\n        print(f\"Currently {border} border is not supported, using `constant` border\")\n\n    dshape = [v.value for v in data.struct_info.shape.values]\n    rank = len(dshape)\n    pool_size = _size_conv(size, rank)\n    strides = _stride_conv(stride, rank) if stride else (1,) * (rank - 2)\n\n    dilation = _stride_conv(dilation, rank) if dilation else (1,) * (rank - 2)\n\n    # padding is truncated to `conv style` (only active layers are present)\n    active_shape = dshape[2:]\n    if not padding:\n        padding = _calculate_nnef_padding(active_shape, strides, pool_size, dilation)\n\n    pad = _padding_conv(padding, rank)\n\n    if rank == 3:\n        op = relax.op.nn.avg_pool1d\n    elif rank == 4:\n        op = relax.op.nn.avg_pool2d\n    elif rank == 5:\n        op = relax.op.nn.avg_pool3d\n    else:\n        raise NotImplementedError(\"Ndim > 5 not supported for avg pool.\")\n\n    return op(\n        data,\n        pool_size=pool_size,\n        strides=strides,\n        dilation=dilation,\n        padding=pad,\n        count_include_pad=border != \"ignore\",\n    )\n\n\ndef rms_pool_converter(bbuilder, data, size, border, padding, stride, dilation, **kwargs):\n    \"\"\"Rms pool converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rms_pool\", kwargs)\n\n    return sqrt_converter(\n        bbuilder,\n        avg_pool_converter(\n            bbuilder,\n            bbuilder.normalize(sqr_converter(bbuilder, data)),\n            size=size,\n            border=border,\n            padding=padding,\n            stride=stride,\n            dilation=dilation,\n        ),\n    )\n\n\n#   # Normalization\n\n\ndef local_response_normalization_converter(bbuilder, data, size, alpha, beta, bias, **kwargs):\n    \"\"\"LRN converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"local_response_normalization\", kwargs)\n\n    axis = [i for i in range(len(size)) if size[i] > 1]\n    if len(axis) == 1:\n        axis = axis[0]\n    else:\n        print(\"Multi axis LRN is not implemented properly, using first axis where size != 1\")\n        axis = axis[0]\n    size = size[axis]\n\n    return bbuilder.emit_te(topi.nn.lrn, data, size, axis, alpha, beta, bias)\n\n\ndef local_mean_normalization_converter(bbuilder, data, size, **kwargs):\n    \"\"\"LMN converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"local_mean_normalization\", kwargs)\n\n    mean = box_converter(bbuilder, data, size, \"constant\", [], [], [], normalize=True)\n    mean = bbuilder.normalize(mean)\n\n    return sub_converter(bbuilder, data, mean)\n\n\ndef local_variance_normalization_converter(bbuilder, data, size, bias, epsilon, **kwargs):\n    \"\"\"LVN converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"local_variance_normalization\", kwargs)\n\n    sigma = box_converter(\n        bbuilder,\n        bbuilder.normalize(sqr_converter(bbuilder, data)),\n        size,\n        \"constant\",\n        [],\n        [],\n        [],\n        normalize=True,\n    )\n    sigma = bbuilder.normalize(sigma)\n\n    return div_converter(\n        bbuilder,\n        data,\n        max_converter(\n            bbuilder,\n            add_converter(bbuilder, sqrt_converter(bbuilder, sigma), tvm_expr.const(bias)),\n            tvm_expr.const(epsilon),\n        ),\n    )\n\n\ndef local_contrast_normalization_converter(bbuilder, data, size, bias, epsilon, **kwargs):\n    \"\"\"LCN converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"local_contrast_normalization\", kwargs)\n\n    centered = local_mean_normalization_converter(bbuilder, data, size)\n    centered = bbuilder.normalize(centered)\n    return local_variance_normalization_converter(bbuilder, centered, size, bias, epsilon)\n\n\ndef l1_normalization_converter(bbuilder, data, axes, bias, 
epsilon, **kwargs):\n    \"\"\"L1 norm converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"l1_normalization\", kwargs)\n\n    sigma = sum_reduce_converter(bbuilder, abs_converter(bbuilder, data), axes, False)\n    return div_converter(\n        bbuilder,\n        data,\n        max_converter(\n            bbuilder, add_converter(bbuilder, sigma, tvm_expr.const(bias)), tvm_expr.const(epsilon)\n        ),\n    )\n\n\ndef l2_normalization_converter(bbuilder, data, axes, bias, epsilon, **kwargs):\n    \"\"\"L2 norm converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"l2_normalization\", kwargs)\n\n    # relay style l2_norm not supported, used equation from NNEF\n\n    sigma = sum_reduce_converter(\n        bbuilder, sqr_converter(bbuilder, data), axes=axes, normalize=False\n    )\n\n    res = div_converter(\n        bbuilder,\n        data,\n        max_converter(\n            bbuilder,\n            add_converter(bbuilder, sqrt_converter(bbuilder, sigma), tvm_expr.const(bias)),\n            tvm_expr.const(epsilon),\n        ),\n    )\n    return res\n\n\ndef batch_normalization_converter(bbuilder, data, mean, variance, offset, scale, epsilon, **kwargs):\n    \"\"\"Batch norm converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"batch_normalization\", kwargs)\n\n    mean = squeeze_converter(bbuilder, mean, 0)\n    variance = squeeze_converter(bbuilder, variance, 0)\n    offset = squeeze_converter(bbuilder, offset, 0)\n    scale = squeeze_converter(bbuilder, scale, 0)\n\n    mean = bbuilder.normalize(mean)\n    variance = bbuilder.normalize(variance)\n    offset = bbuilder.normalize(offset)\n    scale = bbuilder.normalize(scale)\n\n    res = bbuilder.emit_te(topi.nn.batch_norm, data, scale, offset, mean, variance, 1, epsilon)\n    return res[0]\n\n\n#   # Misc ops\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/relay/__init__.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport warnings\nimport tvm\nfrom packaging import version\n\nver = version.parse(tvm.__version__)\nif ver.minor > 19:\n    raise ImportError(f\"TVM version 0.19 or lower is required, but found {tvm.__version__}\")\n\nif ver.minor != 19:\n    warnings.warn(f\"TVM version 0.19 is recommended, but found {tvm.__version__}. Some features may not work as expected.\")\n\nfrom .from_nnef import from_nnef\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/relay/from_nnef.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"NNEF: Neural Network Exchange Format frontend for TVM relay\"\"\"\nimport os\nimport typing\nimport nnef\nimport numpy as np\n\nimport tvm\nfrom tvm import relay\nfrom tvm.ir import IRModule\nfrom tvm.relay import expr as tvm_expr\nfrom tvm.relay import analysis, function\nfrom tvm.relay.frontend.common import new_var, fold_constant, set_span, infer_type\n\nfrom .nnef_ops import _get_converter_map\n\n\ndef get_type(elem_type: str):\n    \"\"\"\n    Gives numpy style type for nnef primitive types, uses x32 versions.\n\n    :param elem_type: string, (scalar, integer, logical, string)\n    :return: returns numpy dtype equivalent (float32, int32, bool, string)\n    \"\"\"\n    if elem_type == \"scalar\":\n        return \"float32\"\n    if elem_type == \"integer\":\n        return \"int32\"\n    if elem_type == \"logical\":\n        return \"bool\"\n    if elem_type == \"string\":\n        return \"string\"\n    raise TypeError(f'Type \"{elem_type}\" is not implemented')\n\n\ndef make_parameter_span(source_name_list, name_sep=\".\"):\n    return name_sep.join(source_name_list)\n\n\n# Converter class\nclass NNEFConverter:\n    \"\"\"\n    Helper class for class level attributes, for conversion of NNEF model.\n    Public method to use is from_nnef.\n\n    Parameters\n    ----------\n\n    freeze_vars : bool, optional\n        If this parameter is 
true, the nnef variables will be converted to\n        constants, and be embedded into the relay model, allowing optimizations\n        at compile time.\n\n    \"\"\"\n\n    def __init__(self, freeze_vars=False):\n        self._nodes = {}\n        self._consts = {}\n        self._inputs = {}\n        self._num_inputs = 0\n        self._params = {}\n        self._num_params = 0\n        self._freeze_vars = freeze_vars\n\n    def from_nnef(self, graph: nnef.Graph) -> typing.Tuple[tvm.IRModule, dict]:\n        \"\"\"\n        Convert an NNEF model into an equivalent TVM Relay IRModule.\n\n        Parameters\n        ----------\n        graph : nnef.Graph\n            An NNEF Graph object that was imported with nnef.load_graph.\n            Shapes should be inferred by nnef.infer_shapes on graph beforehand.\n\n        Returns\n        -------\n        mod : tvm.IRModule\n            The relay module for compilation\n\n        params : dict of str to tvm.nd.NDArray\n            The parameter dictionary to be used\n\n        \"\"\"\n        self._parse_inputs(graph)\n        self._construct_nodes(graph)\n\n        outputs = [self._nodes[n] for n in graph.outputs]\n        outputs = outputs[0] if len(outputs) == 1 else tvm_expr.Tuple(outputs)\n\n        nodes = {v: k for k, v in self._nodes.items()}\n        free_vars = analysis.free_vars(outputs)\n        free_vars = [nodes[var] for var in free_vars]\n        for i_name in self._params.keys():\n            if i_name in free_vars and i_name not in self._inputs:\n                self._inputs[i_name] = self._nodes[i_name]\n        func = function.Function(list(self._inputs.values()), outputs)\n        return IRModule.from_expr(func), self._params\n\n    def _parse_inputs(self, graph):\n        \"\"\"Save inputs into class from inputs attrib of graph\"\"\"\n        for inp in graph.inputs:\n            self._num_inputs += 1\n            tensor = graph.tensors[inp]\n            self._nodes[inp] = new_var(inp, 
shape=tensor.shape, dtype=get_type(tensor.dtype))\n            self._inputs[inp] = self._nodes[inp]\n\n    def _construct_nodes(self, graph):\n        \"\"\"Construct TVM relay calls from every operation of the nnef graph\"\"\"\n        for op in graph.operations:\n            if op.name == \"external\":\n                # externals are handled as input, not needed,\n                # but nnef treats them as operations as well\n                continue\n\n            if op.name == \"variable\":\n                self._set_variable(graph.tensors[op.outputs[\"output\"]])\n\n            elif op.name == \"constant\":\n                self._set_const(op)\n\n            else:\n                # every other operator can be grouped more easily,\n                # as it does not need self for conversion\n                self._set_operator(op)\n\n    def _set_operator(self, node):\n        self._set_literal_inputs(node)\n        self._set_parameter_span(node, node.name)\n        inputs = []\n        for ink, inv in node.inputs.items():\n            if isinstance(inv, list):\n                for i, linv in enumerate(inv):\n                    if linv in self._nodes.keys():\n                        inputs.append(self._nodes[linv])\n                    else:  # handle literal inputs\n                        name = f\"{node.name}_{ink}_{i}\"\n                        assert name in self._nodes, f\"{name} has not been properly handled\"\n                        inputs.append(self._nodes[name])\n\n            else:\n                if inv in self._nodes.keys():\n                    inputs.append(self._nodes[inv])\n                else:  # handle literal inputs\n                    name = f\"{node.name}_{ink}\"\n                    assert name in self._nodes, f\"{name} has not been properly handled\"\n                    inputs.append(self._nodes[name])\n\n        converted = self._get_relay_op_call(node.name, inputs, node.attribs)\n\n        if not isinstance(converted, 
tvm_expr.TupleWrapper):\n            outputs_num = 1\n        else:\n            outputs_num = len(converted)\n\n        if outputs_num == 1:\n            if not isinstance(converted, tvm_expr.TupleWrapper):\n                converted = fold_constant(converted)\n            else:\n                converted = fold_constant(converted.astuple())\n        else:\n            converted = tvm_expr.TupleWrapper(fold_constant(converted.astuple()), len(converted))\n\n        converted = set_span(converted, node.name)\n\n        if outputs_num == 1:\n            # check if the singular ret val is a list of only one element\n            ret_val = list(node.outputs.values())[0]\n            if isinstance(ret_val, list):\n                self._nodes[ret_val[0]] = converted\n            else:\n                self._nodes[ret_val] = converted\n        else:\n            for i, out in zip(range(outputs_num), node.outputs[\"values\"]):\n                self._nodes[out] = converted[i]\n\n    def _set_const(self, node):\n        \"\"\"Create a tvm.relay.Constant from a nnef constant tensor\"\"\"\n        name = node.outputs[\"output\"]\n        data = node.attribs[\"value\"]\n        shape = node.attribs[\"shape\"]\n        if len(data) == 1:\n            data = np.full(shape, data, dtype=get_type(node.dtype))\n        else:\n            data = np.array(data, dtype=get_type(node.dtype))\n        self._consts[name] = tvm_expr.const(data)\n        self._nodes[name] = self._consts[name]\n\n    def _set_variable(self, tensor):\n        \"\"\"Create a tvm.relay.Var (or Constant if freeze_vars) from a nnef variable tensor\"\"\"\n        tens_data = tensor.data\n        if self._freeze_vars:\n            self._consts[tensor.name] = tvm_expr.const(tens_data)\n            self._nodes[tensor.name] = self._consts[tensor.name]\n        else:\n            self._nodes[tensor.name] = new_var(\n                tensor.name, shape=tensor.shape, dtype=get_type(tensor.dtype)\n            )\n            
self._params[tensor.name] = tens_data\n\n    def _set_literal_inputs(self, node):\n        \"\"\"Checks if node has literal inputs and saves them into a tvm.relay.Constant.\n        naming as {node.name}_{input field name}\"\"\"\n        for field_name, value in node.inputs.items():\n            if isinstance(value, list):\n                for v in value:\n                    if v not in self._nodes.keys():\n                        self._nodes[f\"{node.name}_{v}\"] = tvm_expr.const(v)\n\n            else:\n                if value not in self._nodes.keys():\n                    self._nodes[f\"{node.name}_{field_name}\"] = tvm_expr.const(value)\n\n    def _set_parameter_span(self, node, node_source_name):\n        for field_name, name in node.inputs.items():\n            if isinstance(name, list):\n                for n in name:\n                    self._set_par_span_helper(node, node_source_name, n, field_name)\n            else:\n                self._set_par_span_helper(node, node_source_name, name, field_name)\n\n    def _set_par_span_helper(self, node, node_source_name, name, field_name):\n        if name not in self._nodes.keys():\n            name = f\"{node.name}_{field_name}\"\n\n        expr = self._nodes.get(name)\n        if expr:\n            expr_with_span = set_span(expr, make_parameter_span([node_source_name, name]))\n            self._nodes[name] = expr_with_span\n            if name in self._inputs:\n                self._inputs[name] = expr_with_span\n            if isinstance(expr, relay.Constant):\n                self._consts[name] = expr_with_span\n\n    def _get_relay_op_call(self, name, inputs, attrs):\n        \"\"\"Returns the tvm.Call equivalent to the nnef operator\"\"\"\n        conv_map = _get_converter_map()\n        if name in conv_map:\n            call = conv_map[name](*inputs, **attrs)\n        else:\n            # This error is reached if NNEF is expanded with additional ops\n            raise NotImplementedError(\n              
  f\"Operator {name} is not implemented, as {name} has been added after 1.0.5.\"\n            )\n        return call\n\n    def _infer_type(self, val):\n        if isinstance(val, bool):\n            return \"bool\", True\n        if isinstance(val, float):\n            return \"float32\", True\n        if isinstance(val, int):\n            return \"int32\", True\n        if isinstance(val, str):\n            # the string vals can be names of nodes in some of the cases\n            if isinstance(val, nnef.Identifier):\n                if val in self._nodes.keys():\n                    node = self._nodes[val]\n                    if isinstance(node, tvm_expr.Var):\n                        return node.type_annotation.dtype, False\n                    if isinstance(node, tvm_expr.Constant):\n                        return node.data.dtype, False\n                    if isinstance(node, tvm_expr.Call):\n                        return infer_type(node).checked_type.dtype, False\n                raise Exception(\n                    f\"{val} has not been loaded into the model \"\n                    \"but it should have been, as a var or call.\"\n                )\n            return \"string\", True\n\n        raise TypeError(f'Value \"{val}\" is not a recognized type')\n\n\ndef from_nnef(\n    model: typing.Union[str, os.PathLike, nnef.Graph],\n    freeze_vars: bool = False,\n) -> typing.Tuple[IRModule, dict]:\n    \"\"\"\n    Convert an NNEF model into an equivalent TVM Relay IRModule.\n\n\n    Parameters\n    ----------\n    model : os.PathLike or str or nnef.Graph\n        Path to an NNEF model directory, containing the graph.nnef (and weight files)\n\n    freeze_vars : bool, optional\n        If this parameter is true, the nnef variables will be converted to\n        constants, and be embedded into the relay model, allowing optimizations\n        at compile time.\n\n    Returns\n    -------\n    mod : tvm.IRModule\n        The relay module for compilation\n\n    
params : dict of str to tvm.nd.NDArray\n        The parameter dictionary to be used\n    \"\"\"\n    converter = NNEFConverter(freeze_vars)\n\n    if not isinstance(model, nnef.Graph):\n        model = nnef.load_graph(model)\n\n    # fills in the nnef graph's shape information\n    nnef.infer_shapes(model)\n\n    return converter.from_nnef(graph=model)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/execution/tvm/nnef_frontend/relay/nnef_ops.py",
    "content": "# Copyright (c) 2017-2025 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"NNEF frontend converter helper funcs and ops\"\"\"\nimport math\n\nimport itertools\nfrom functools import reduce\n\nimport numpy as np\n\nimport tvm\nfrom tvm import relay\nfrom tvm.relay import expr as tvm_expr\nfrom tvm.relay import op as tvm_op\nfrom tvm.relay.frontend.common import get_relay_op, infer_shape, infer_type\n\n\n# Base methods\n\n\ndef dimension_picker(prefix, kernel_shape, suffix=\"\"):\n    \"\"\"\n    Returns the correct name for nth dimensional operator. Uses the \"kernel_shape\" attribute.\\n\n    E.g.call: dimension_picker(op_name)(attr)\n\n    :param prefix: the name of the operator (e.g. 
conv)\n    :param kernel_shape: shape of the tensor to fit the operation\n    :param suffix: optional suffix for ops\n    :return: \"prefix`n`d\" where n is the correct dimension for the kernel\n    \"\"\"\n\n    rank = len(kernel_shape[2:])\n    if rank == 1:\n        return prefix + \"1d\" + suffix\n    if rank == 2:\n        return prefix + \"2d\" + suffix\n    if rank == 3:\n        return prefix + \"3d\" + suffix\n    op_name = prefix + \"1d/2d/3d\"\n    msg = f\"Only 1D, 2D, and 3D kernels are supported for operator {op_name}.\"\n    raise tvm.error.OpAttributeInvalid(msg)\n\n\ndef _size_conv(size, rank):\n    # window of size (DH)W is only possible when it is checked outside,\n    # which is needed for alternative solution\n    if rank == 3:\n        if len(size) == 1:\n            return size\n        if len(size) == 3:\n            assert (\n                size[0] == 1 and size[1] == 1\n            ), \"Incorrect window dimensions, first two dimensions must be 1\"\n            return size[2]\n    if rank == 4:\n        if len(size) == 2:\n            return size\n        if len(size) == 4:\n            assert (\n                size[0] == 1 and size[1] == 1\n            ), \"Incorrect window dimensions, first two dimensions must be 1\"\n            return size[2:]\n    if rank == 5:\n        if len(size) == 3:\n            return size\n        if len(size) == 5:\n            assert (\n                size[0] == 1 and size[1] == 1\n            ), \"Incorrect window dimensions, first two dimensions must be 1\"\n            return size[2:]\n\n    raise ValueError(f\"Unexpected window size, got {len(size)}\")\n\n\ndef _stride_conv(stride, rank):\n    if rank == 3:\n        # {conv style} :: [s] -> [s]\n        if len(stride) == 1:\n            return stride\n        # {pool style} :: [N, C, s] -> asrt N,C == 1; [s]\n        if len(stride) == 3:\n            assert (\n                stride[0] == 1 and stride[1] == 1\n            ), \"Not supported stride 
dimensions, first two dimensions must be 1\"\n            return stride[2:]\n    if rank == 4:\n        # {conv style} :: [sh, sw] -> [sh, sw]\n        if len(stride) == 2:\n            return stride\n        # {pool style} :: [N, C, sh, sw] -> asrt N,C == 1; [sh, sw]\n        if len(stride) == 4:\n            assert (\n                stride[0] == 1 and stride[1] == 1\n            ), \"Not supported stride dimensions, first two dimensions must be 1\"\n            return stride[2:]\n    if rank == 5:\n        # {conv style} :: [sd, sh, sw] -> [sd, sh, sw]\n        if len(stride) == 3:\n            return stride\n        # {pool style} :: [N, C, sd, sh, sw] -> asrt N,C == 1; [sd, sh, sw]\n        if len(stride) == 5:\n            assert (\n                stride[0] == 1 and stride[1] == 1\n            ), \"Not supported stride dimensions, first two dimensions must be 1\"\n            return stride[2:]\n    raise ValueError(f\"Unexpected stride in {rank - 2}D, got {len(stride)}: {stride}\")\n\n\ndef _padding_conv(padding, rank, keepdims=False):\n    if isinstance(padding[0], (tuple, list)):\n        # 1D\n        if rank == 3:\n            # {conv style} :: [(l,r)] -> (l,r)\n            if len(padding) == 1:\n                return padding[0]\n            if len(padding) == 3:\n                # {pool style} :: [(batch),(channel),(l,r)] -> asrt N,C == 0, (l,r)\n                if not keepdims:\n                    assert padding[0] == (0, 0) and padding[1] == (0, 0), (\n                        \"Incorrect padding. 
\" \"Padding on C,I dimensions not supported\"\n                    )\n                    return padding[2]\n                # {sliding window style} :: [(batch),(channel),(l,r)] -> [(batch),(channel),(l,r)]\n                else:\n                    return padding\n\n        # 2D\n\n        if rank == 4:\n            # {conv style} :: [(u,d),(l,r)] -> (u, l, d, r)\n            if len(padding) == 2:\n                # change UDLR to ULDR padding, LC is faster here\n                return [x[i] for i in [0, 1] for x in padding]\n\n            if len(padding) == 4:\n                # {pool style} :: [(batch size),(channel),(u,d),(l,r)] ->\n                #                  -> asrt N,C == 0, (u, l, d, r)\n                if not keepdims:\n                    assert padding[0] == (0, 0) and padding[1] == (0, 0), (\n                        \"Incorrect padding. \" \"Padding on C,I dimensions not supported\"\n                    )\n                    # itertools is faster than LC (slicing)\n                    return list(itertools.chain.from_iterable(zip(padding[2], padding[3])))\n                # {sliding window style} :: [(batch),(channel),(u,d),(l,r)] ->\n                #                            -> [(batch),(channel),(u,d),(l,r)]\n                else:\n                    return padding\n\n        # 3D\n\n        if rank == 5:\n            # {conv style} :: [(f,b),(u,d),(l,r)] -> (f, u, l, b, d, r)\n            if len(padding) == 3:\n                # LC is faster\n                return [x[i] for i in [0, 1] for x in padding]\n\n            if len(padding) == 5:\n                # {pool style} :: [(batch size),(channel),(f,b)(u,p),(l,r)] ->\n                #                  -> asrt N,C == 0, (f, u, l, b, d, r)\n                if not keepdims:\n                    assert padding[0] == (0, 0) and padding[1] == (0, 0), (\n                        \"Incorrect padding. 
\" \"Padding on C,I dimensions not supported\"\n                    )\n                    # itertools faster barely\n                    return list(\n                        itertools.chain.from_iterable(zip(padding[2], padding[3], padding[4]))\n                    )\n                # {s-w style} :: [(batch),(channel),(f,b),(u,d),(l,r)] ->\n                #                 -> [(batch),(channel),(f,b),(u,d),(l,r)]\n                else:\n                    return padding\n\n        raise ValueError(\n            f\"Incorrect padding style for {rank - 2}D operand. Only length of {rank - 2}, {rank} \"\n            f\"supported, got {len(padding)}: {padding}\"\n        )\n\n    raise ValueError(\"nnef should not have singular padding\")\n\n\ndef _calculate_nnef_padding(active_shape, strides, kernel_shape, dilation):\n    \"\"\"Ordering of nnef autopad and tvm autopad are sometimes different,\n    this method calculates nnef like padding from dimensions\n\n    Parameters\n    ----------\n        active_shape\n            the data dimensions\n        strides\n            the strides over the active dimensions\n        kernel_shape\n            the shape of the window, must have the same rank as active shape\n        dilation\n            the dilations over the active dimensions\n    \"\"\"\n    output = [(ui + (s - 1)) // s for ui, s in zip(active_shape, strides)]\n    dilated = [(f - 1) * d + 1 for f, d in zip(kernel_shape, dilation)]\n    total = [\n        max(0, (di - 1) * s + df - ui)\n        for di, s, df, ui in zip(output, strides, dilated, active_shape)\n    ]\n    padding = [(pad // 2, (pad + 1) // 2) for pad in total]\n    return padding\n\n\ndef _calculate_nnef_padding_deconv(data_sh, strides, kernel_active_sh, dilation, output_shape):\n    out_sh = output_shape[2:] if output_shape else [ui * s for ui, s in zip(data_sh, strides)]\n    dilated = [(f - 1) * d + 1 for f, d in zip(kernel_active_sh[2:], dilation)]\n    total = [\n        max(0, (di - 1) * s + 
df - ui) for di, s, df, ui in zip(data_sh, strides, dilated, out_sh)\n    ]\n    return total, out_sh\n\n\ndef __unexpected_attrs(op, kwargs):\n    raise NotImplementedError(\n        f\"{op} received unexpected attributes(s), possibly mismatched versions. \"\n        \"Attributes(s) ignored: \" + \", \".join(f\"{k} := {v}\" for k, v in kwargs.items())\n    )\n\n\n# Conversion map, operator functions\n\n\ndef _get_converter_map():\n    return {  # Unary\n        \"copy\": copy_converter,  # arithmetic\n        \"neg\": neg_converter,\n        \"rcp\": rcp_converter,\n        \"exp\": exp_converter,\n        \"log\": log_converter,\n        \"sin\": sin_converter,\n        \"cos\": cos_converter,\n        \"tan\": tan_converter,\n        \"sinh\": sinh_converter,\n        \"cosh\": cosh_converter,\n        \"tanh\": tanh_converter,\n        \"asin\": asin_converter,\n        \"acos\": acos_converter,\n        \"atan\": atan_converter,\n        \"asinh\": asinh_converter,\n        \"acosh\": acosh_converter,\n        \"atanh\": atanh_converter,\n        \"abs\": abs_converter,\n        \"sign\": sign_converter,\n        \"not\": not_converter,  # logical\n        \"floor\": floor_converter,  # rounding\n        \"ceil\": ceil_converter,\n        \"round\": round_converter,\n        # Binary\n        \"add\": add_converter,  # arithmetic\n        \"sub\": sub_converter,\n        \"mul\": mul_converter,\n        \"div\": div_converter,\n        \"pow\": pow_converter,\n        \"lt\": lt_converter,  # comparison\n        \"gt\": gt_converter,\n        \"le\": le_converter,\n        \"ge\": ge_converter,\n        \"eq\": eq_converter,\n        \"ne\": ne_converter,\n        \"and\": and_converter,  # logical\n        \"or\": or_converter,\n        # select\n        \"select\": select_converter,\n        # simplifier\n        \"sqr\": sqr_converter,\n        \"sqrt\": sqrt_converter,\n        \"rsqr\": rsqr_converter,\n        \"rsqrt\": rsqrt_converter,\n        
\"log2\": log2_converter,\n        \"min\": min_converter,\n        \"max\": max_converter,\n        \"clamp\": clamp_converter,\n        # sliding-window\n        \"conv\": conv_converter,\n        \"deconv\": deconv_converter,\n        \"box\": box_converter,\n        \"debox\": debox_converter,\n        \"argmax_pool\": ndop,\n        \"sample\": ndop,\n        \"desample\": ndop,\n        \"nearest_downsample\": nearest_downsample_converter,\n        \"area_downsample\": area_downsample_converter,\n        \"nearest_upsample\": nearest_upsample_converter,\n        \"multilinear_upsample\": multilinear_upsample_converter,\n        # reduce\n        \"sum_reduce\": sum_reduce_converter,\n        \"max_reduce\": max_reduce_converter,\n        \"min_reduce\": min_reduce_converter,\n        \"argmax_reduce\": argmax_reduce_converter,\n        \"argmin_reduce\": argmin_reduce_converter,\n        \"all_reduce\": all_reduce_converter,\n        \"any_reduce\": any_reduce_converter,\n        \"mean_reduce\": mean_reduce_converter,\n        # tensor shape\n        \"reshape\": reshape_converter,\n        \"squeeze\": squeeze_converter,\n        \"unsqueeze\": unsqueeze_converter,\n        \"transpose\": transpose_converter,\n        \"split\": split_converter,\n        \"concat\": concat_converter,\n        \"stack\": stack_converter,\n        \"unstack\": unstack_converter,\n        \"slice\": slice_converter,\n        \"pad\": pad_converter,\n        \"tile\": tile_converter,\n        # region-of-interest - not needed - not supported\n        \"avg_roi_pool\": ndop,\n        \"max_roi_pool\": ndop,\n        \"roi_resample\": ndop,\n        \"avg_roi_align\": ndop,\n        \"max_roi_align\": ndop,\n        # matrix multiplication\n        \"matmul\": matmul_converter,\n        # variables\n        \"update\": ndop,  # --- not used\n        # Compound\n        \"sigmoid\": sigmoid_converter,  # activation\n        \"relu\": relu_converter,\n        \"prelu\": 
prelu_converter,\n        \"leaky_relu\": leaky_relu_converter,\n        \"elu\": elu_converter,\n        \"selu\": selu_converter,\n        \"gelu\": gelu_converter,\n        \"silu\": silu_converter,\n        \"softmax\": softmax_converter,\n        \"softplus\": softplus_converter,\n        \"linear\": linear_converter,  # linear\n        \"separable_conv\": separable_conv_converter,\n        \"separable_deconv\": separable_deconv_converter,\n        \"max_pool_with_index\": ndop,  # pooling\n        \"max_pool\": max_pool_converter,\n        \"avg_pool\": avg_pool_converter,\n        \"rms_pool\": rms_pool_converter,\n        \"local_response_normalization\": local_response_normalization_converter,  # normalization\n        \"local_mean_normalization\": local_mean_normalization_converter,\n        \"local_variance_normalization\": local_variance_normalization_converter,\n        \"local_contrast_normalization\": local_contrast_normalization_converter,\n        \"l1_normalization\": l1_normalization_converter,\n        \"l2_normalization\": l2_normalization_converter,\n        \"batch_normalization\": batch_normalization_converter,\n        \"min_max_linear_quantize\": ndop,  # quantization\n        \"zero_point_linear_quantize\": ndop,\n        \"linear_quantize\": ndop,\n        \"logarithmic_quantize\": ndop,\n        # MISC\n        \"copy_n\": ndop,\n        \"add_n\": ndop,\n        \"moments\": ndop,\n    }\n\n\n# not implemented ops\ndef ndop(*args, **kwargs):\n    raise Exception(\"Not supported operator was called, please check for compatibility\")\n\n\n#   # Unary ops\n\n\ndef copy_converter(data, **kwargs):\n    \"\"\"Copy converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"copy\", kwargs)\n\n    return get_relay_op(\"copy\")(data)\n\n\ndef neg_converter(data, **kwargs):\n    \"\"\"Neg converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"neg\", kwargs)\n\n    return get_relay_op(\"negative\")(data)\n\n\ndef rcp_converter(data, 
**kwargs):\n    \"\"\"Rcp converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rcp\", kwargs)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    return div_converter(tvm_expr.const(1, dtype=d_type), data)\n\n\ndef exp_converter(data, **kwargs):\n    \"\"\"Exp converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"exp\", kwargs)\n\n    return get_relay_op(\"exp\")(data)\n\n\ndef log_converter(data, **kwargs):\n    \"\"\"Log converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"log\", kwargs)\n\n    return get_relay_op(\"log\")(data)\n\n\ndef sin_converter(data, **kwargs):\n    \"\"\"Sin converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sin\", kwargs)\n\n    return get_relay_op(\"sin\")(data)\n\n\ndef cos_converter(data, **kwargs):\n    \"\"\"Cos converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"cos\", kwargs)\n\n    return get_relay_op(\"cos\")(data)\n\n\ndef tan_converter(data, **kwargs):\n    \"\"\"Tan converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"tan\", kwargs)\n\n    return get_relay_op(\"tan\")(data)\n\n\ndef sinh_converter(data, **kwargs):\n    \"\"\"Sinh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sinh\", kwargs)\n\n    return get_relay_op(\"sinh\")(data)\n\n\ndef cosh_converter(data, **kwargs):\n    \"\"\"Cosh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"cosh\", kwargs)\n\n    return get_relay_op(\"cosh\")(data)\n\n\ndef tanh_converter(data, **kwargs):\n    \"\"\"Tanh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"tanh\", kwargs)\n\n    return get_relay_op(\"tanh\")(data)\n\n\ndef asin_converter(data, **kwargs):\n    \"\"\"Asin converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"asin\", kwargs)\n\n    return get_relay_op(\"asin\")(data)\n\n\ndef acos_converter(data, **kwargs):\n    \"\"\"Acos converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"acos\", kwargs)\n\n    return get_relay_op(\"acos\")(data)\n\n\ndef atan_converter(data, **kwargs):\n    \"\"\"Atan converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"atan\", kwargs)\n\n    return get_relay_op(\"atan\")(data)\n\n\ndef asinh_converter(data, **kwargs):\n    \"\"\"Asinh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"asinh\", kwargs)\n\n    return get_relay_op(\"asinh\")(data)\n\n\ndef acosh_converter(data, **kwargs):\n    \"\"\"Acosh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"acosh\", kwargs)\n\n    return get_relay_op(\"acosh\")(data)\n\n\ndef atanh_converter(data, **kwargs):\n    \"\"\"Atanh converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"atanh\", kwargs)\n\n    return get_relay_op(\"atanh\")(data)\n\n\ndef abs_converter(data, **kwargs):\n    \"\"\"Abs converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"abs\", kwargs)\n\n    return get_relay_op(\"abs\")(data)\n\n\ndef sign_converter(data, **kwargs):\n    \"\"\"Sign converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sign\", kwargs)\n\n    return get_relay_op(\"sign\")(data)\n\n\ndef not_converter(data, **kwargs):\n    \"\"\"Not converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"not\", kwargs)\n\n    return get_relay_op(\"logical_not\")(data)\n\n\ndef floor_converter(data, **kwargs):\n    \"\"\"Floor converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"floor\", kwargs)\n\n    return get_relay_op(\"floor\")(data)\n\n\ndef ceil_converter(data, **kwargs):\n    \"\"\"Ceil converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"ceil\", kwargs)\n\n    return get_relay_op(\"ceil\")(data)\n\n\ndef round_converter(data, **kwargs):\n    \"\"\"Round converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"round\", kwargs)\n\n    return get_relay_op(\"round\")(data)\n\n\n#   # Binary ops\n\n\ndef add_converter(lhs, rhs, **kwargs):\n    \"\"\"Add converter\"\"\"\n    if kwargs:\n      
  __unexpected_attrs(\"add\", kwargs)\n\n    return get_relay_op(\"add\")(lhs, rhs)\n\n\ndef sub_converter(lhs, rhs, **kwargs):\n    \"\"\"Sub converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sub\", kwargs)\n\n    return get_relay_op(\"subtract\")(lhs, rhs)\n\n\ndef mul_converter(lhs, rhs, **kwargs):\n    \"\"\"Mul converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"mul\", kwargs)\n\n    return get_relay_op(\"multiply\")(lhs, rhs)\n\n\ndef div_converter(lhs, rhs, **kwargs):\n    \"\"\"Div converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"div\", kwargs)\n\n    return get_relay_op(\"divide\")(lhs, rhs)\n\n\ndef pow_converter(lhs, rhs, **kwargs):\n    \"\"\"Pow converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"pow\", kwargs)\n\n    return get_relay_op(\"power\")(lhs, rhs)\n\n\ndef lt_converter(lhs, rhs, **kwargs):\n    \"\"\"Lt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"lt\", kwargs)\n\n    return get_relay_op(\"less\")(lhs, rhs)\n\n\ndef gt_converter(lhs, rhs, **kwargs):\n    \"\"\"Gt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"gt\", kwargs)\n\n    return get_relay_op(\"greater\")(lhs, rhs)\n\n\ndef le_converter(lhs, rhs, **kwargs):\n    \"\"\"Le converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"le\", kwargs)\n\n    return get_relay_op(\"less_equal\")(lhs, rhs)\n\n\ndef ge_converter(lhs, rhs, **kwargs):\n    \"\"\"Ge converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"ge\", kwargs)\n\n    return get_relay_op(\"greater_equal\")(lhs, rhs)\n\n\ndef eq_converter(lhs, rhs, **kwargs):\n    \"\"\"Eq converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"eq\", kwargs)\n\n    return get_relay_op(\"equal\")(lhs, rhs)\n\n\ndef ne_converter(lhs, rhs, **kwargs):\n    \"\"\"Ne converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"ne\", kwargs)\n\n    return get_relay_op(\"not_equal\")(lhs, rhs)\n\n\ndef and_converter(lhs, rhs, **kwargs):\n    \"\"\"And 
converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"and\", kwargs)\n\n    return get_relay_op(\"logical_and\")(lhs, rhs)\n\n\ndef or_converter(lhs, rhs, **kwargs):\n    \"\"\"Or converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"or\", kwargs)\n\n    return get_relay_op(\"logical_or\")(lhs, rhs)\n\n\n#   # Select op\n\n\ndef select_converter(condition, t_val, f_val, **kwargs):\n    \"\"\"Select converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"select\", kwargs)\n\n    return get_relay_op(\"where\")(condition, t_val, f_val)\n\n\n#   # Simplifier ops\n\n\ndef sqr_converter(data, **kwargs):\n    \"\"\"sqr converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sqr\", kwargs)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    return get_relay_op(\"power\")(data, tvm_expr.const(2.0, dtype=d_type))\n\n\ndef sqrt_converter(data, **kwargs):\n    \"\"\"sqrt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sqrt\", kwargs)\n\n    return get_relay_op(\"sqrt\")(data)\n\n\ndef rsqr_converter(data, **kwargs):\n    \"\"\"rsqr converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rsqr\", kwargs)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    return get_relay_op(\"power\")(data, tvm_expr.const(-2.0, dtype=d_type))\n\n\ndef rsqrt_converter(data, **kwargs):\n    \"\"\"rsqrt converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rsqrt\", kwargs)\n\n    return get_relay_op(\"rsqrt\")(data)\n\n\ndef log2_converter(data, **kwargs):\n    \"\"\"log2 converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"log2\", kwargs)\n\n    return get_relay_op(\"log2\")(data)\n\n\ndef min_converter(lhs, rhs, **kwargs):\n    \"\"\"Min converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"min\", kwargs)\n\n    
return get_relay_op(\"minimum\")(lhs, rhs)\n\n\ndef max_converter(lhs, rhs, **kwargs):\n    \"\"\"Max converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"max\", kwargs)\n\n    return get_relay_op(\"maximum\")(lhs, rhs)\n\n\ndef clamp_converter(x, a, b, **kwargs):\n    \"\"\"Clamp converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"clamp\", kwargs)\n\n    # only works if b and a are Constant floats, not tensors\n    if isinstance(a, tvm_expr.Constant) and isinstance(b, tvm_expr.Constant):\n        return get_relay_op(\"clip\")(x, float(a.data.numpy()), float(b.data.numpy()))\n\n    return max_converter(min_converter(x, b), a)\n\n\n#   # Sliding-window ops\n\n\ndef conv_converter(data, kernel, bias, border, stride, padding, dilation, groups, **kwargs):\n    \"\"\"Convolution converter,\n    skips bias if it's 0.0 (no bias)\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"conv\", kwargs)\n\n    if border != \"constant\":\n        print(f\"Currently {border} border is not supported, used `constant` border\")\n\n    kernel_shape = infer_shape(kernel)\n    dshape = infer_shape(data)\n\n    strides = _stride_conv(stride, len(kernel_shape)) if stride else (1,) * (len(kernel_shape) - 2)\n\n    dilation = dilation if dilation else ((1,) * (len(kernel_shape) - 2))\n\n    if not padding:\n        padding = _calculate_nnef_padding(dshape[2:], strides, kernel_shape[2:], dilation)\n\n    pad = _padding_conv(padding, len(kernel_shape))\n\n    channels = kernel_shape[0]\n\n    if groups == 0:\n        groups = channels\n\n    op = get_relay_op(dimension_picker(\"conv\", kernel_shape))\n    conv_out = op(\n        data=data,\n        weight=kernel,\n        strides=strides,\n        padding=pad,\n        dilation=dilation,\n        groups=groups,\n        channels=channels,\n        kernel_size=kernel_shape[2:],\n    )\n\n    res = None\n    if isinstance(bias, tvm_expr.Constant):\n        # nnef has bias of 0 if it is not needed\n        if (bias.data.numpy() 
== 0).all():\n            res = conv_out\n\n    if not res:\n        # squeeze needed as nnef has bias of shape [1, channel]\n        res = tvm_op.nn.bias_add(conv_out, relay.squeeze(bias, axis=0))\n\n    return res\n\n\ndef deconv_converter(\n    data, kernel, bias, border, stride, padding, dilation, output_shape, groups, **kwargs\n):\n    \"\"\"Deconvolution converter, using convxd_transpose\n    skips bias if it's 0.0 (no bias)\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"deconv\", kwargs)\n\n    if border != \"constant\":\n        print(f\"Currently {border} border is not supported, used `constant` border\")\n\n    kernel_shape = infer_shape(kernel)\n\n    rank = len(kernel_shape)\n\n    strides = _stride_conv(stride, rank) if stride else (1,) * (rank - 2)\n\n    dilation = dilation if dilation else ((1,) * (rank - 2))\n\n    total, out_sh = _calculate_nnef_padding_deconv(\n        infer_shape(data), strides, kernel_shape, dilation, output_shape\n    )\n\n    if padding:\n        pad = _padding_conv(padding, rank)\n    else:\n        pad = _padding_conv([(pad // 2, (pad + 1) // 2) for pad in total], rank)\n\n    if groups == 0:\n        groups = kernel_shape[0]\n    channels = kernel_shape[1] * groups\n\n    # limit output padding to modulo stride because of tvm checks\n    out_pad = (\n        [(x - (y - t)) % s for x, y, t, s in zip(output_shape[2:], out_sh, total, stride)]\n        if output_shape\n        else (0, 0)\n    )\n\n    op = get_relay_op(dimension_picker(\"conv\", kernel_shape, suffix=\"_transpose\"))\n    deconv_out = op(\n        data=data,\n        weight=kernel,\n        strides=strides,\n        padding=pad,\n        dilation=dilation,\n        groups=groups,\n        channels=channels,\n        kernel_size=kernel_shape[2:],\n        output_padding=out_pad,\n    )\n\n    res = None\n    if isinstance(bias, tvm_expr.Constant):\n        # nnef has bias of 0 if it is not needed; elementwise check avoids the\n        # ambiguous truth value error for multi-element bias tensors\n        if (bias.data.numpy() == 0).all():\n            res = deconv_out\n\n    if not res:\n        # 
squeeze needed bc nnef has bias of shape [1, channel]\n        res = tvm_op.nn.bias_add(deconv_out, relay.squeeze(bias, axis=0))\n\n    return res\n\n\ndef box_converter(data, size, border, padding, stride, dilation, normalize, **kwargs):\n    \"\"\"Box operator converter,\n    summation over sliding window, equal to conv with constant filter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"box\", kwargs)\n\n    dshape = infer_shape(data)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    size[0] = dshape[1]\n    if normalize:\n        kernel = relay.full(tvm_op.const(1 / math.prod(size[2:]), d_type), size, d_type)\n    else:\n        kernel = relay.ones(size, d_type)\n\n    out = conv_converter(\n        data, kernel, tvm_expr.const(0, dtype=d_type), border, stride, padding, dilation, dshape[1]\n    )\n    return out\n\n\ndef debox_converter(\n    data, size, border, padding, stride, dilation, normalize, output_shape, **kwargs\n):\n    \"\"\"Debox operator converter,\n    inverse of box, equal to deconv with constant filter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"debox\", kwargs)\n\n    dshape = infer_shape(data)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    size[0] = dshape[1]\n    if normalize:\n        kernel = relay.full(tvm_op.const(1 / math.prod(size[2:]), d_type), size, d_type)\n    else:\n        kernel = relay.ones(size, d_type)\n    out = deconv_converter(\n        data,\n        kernel,\n        tvm_expr.const(0, dtype=d_type),\n        border,\n        stride,\n        padding,\n        dilation,\n        output_shape,\n        groups=dshape[1],\n    )\n    return out\n\n\ndef nearest_downsample_converter(data, factor, **kwargs):\n    \"\"\"Nearest neighbour downsample converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"nearest_downsample\", kwargs)\n\n    dims = 2 + len(factor)\n\n    return box_converter(\n        data,\n        size=[1] * dims,\n        border=\"constant\",\n        padding=[(0, 0)] * dims,\n        stride=[1, 1] + factor,\n        dilation=(1,) * (dims - 2),\n        normalize=False,\n    )\n\n\ndef area_downsample_converter(data, factor, **kwargs):\n    \"\"\"Area downsample converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"area_downsample\", kwargs)\n\n    dims = 2 + len(factor)\n\n    return box_converter(\n        data,\n        size=[1, 1] + factor,\n        border=\"constant\",\n        padding=[(0, 0)] * dims,\n        stride=[1, 1] + factor,\n        dilation=(1,) * (dims - 2),\n        normalize=True,\n    )\n\n\ndef nearest_upsample_converter(data, factor, **kwargs):\n    \"\"\"Nearest neighbour upsample converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"nearest_upsample\", kwargs)\n\n    # conversion from nn.upsampling to image.resizexd, re: discuss:11650\n    #\n    dshape = infer_shape(data)\n    new_size = [d * f for d, f in zip(dshape[2:], factor)]\n    return get_relay_op(dimension_picker(\"resize\", dshape))(\n        data,\n        new_size,\n        method=\"nearest_neighbor\",\n        # coordinate_transformation_mode=\"asymmetric\",\n        rounding_method=\"round\",\n    )\n\n\ndef multilinear_upsample_converter(data, factor, method, border, **kwargs):\n    \"\"\"Multilinear upsampling converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"linear_upsample\", kwargs)\n\n    # for aligned and symmetric replicate resize can be used\n    dshape = infer_shape(data)\n    new_size = [d * f for d, f in zip(dshape[2:], factor)]\n    if method == \"aligned\":\n        # conversion from nn.upsampling to image.resizexd, re: discuss:11650\n        return get_relay_op(dimension_picker(\"resize\", dshape))(\n            data,\n            new_size,\n            method=\"linear\",\n            
coordinate_transformation_mode=\"align_corners\",\n        )\n    if method == \"symmetric\" and border == \"replicate\":\n        return get_relay_op(dimension_picker(\"resize\", dshape))(\n            data,\n            new_size,\n            method=\"linear\",\n            coordinate_transformation_mode=\"half_pixel\",\n        )\n\n    # other combinations need to be calculated with convolution\n    def _upsample_weights_1d(fact, symm):\n        if symm:\n            _weights = [1 - (i + 0.5) / fact for i in range(fact)]\n            _weights = list(reversed(_weights)) + _weights\n        else:\n            _weights = [1 - abs(i) / float(fact) for i in range(-fact + 1, fact)]\n        return np.array(_weights)\n\n    def _upsample_weights_nd(fact, symm):\n        _weights = [_upsample_weights_1d(f, symm) for f in fact]\n        return reduce(np.multiply, np.ix_(*_weights))\n\n    n, c = dshape[:2]\n\n    symmetric = method == \"symmetric\"\n    weights = _upsample_weights_nd(factor, symmetric)\n    weights = np.reshape(weights, newshape=(1, 1) + weights.shape)\n    kernel = tile_converter(tvm_expr.const(weights), (c, 1) + (1,) * len(factor))\n\n    output_shape = [n, c] + [f * s for f, s in zip(factor, dshape[2:])]\n\n    if symmetric:\n        return deconv_converter(\n            data,\n            kernel,\n            tvm_expr.const(0.0),\n            border=\"constant\",\n            stride=factor,\n            padding=[(f - 1, f - 1) for f in factor],\n            dilation=[],\n            groups=c,\n            output_shape=output_shape,\n        )\n    else:\n        replicate = border == \"replicate\"\n        if replicate:\n            data = pad_converter(\n                data, [(0, 0), (0, 0)] + [(1, 0)] * len(factor), border, tvm_expr.const(0.0)\n            )\n            padding = factor\n        else:\n            padding = [f // 2 for f in factor]\n\n        return deconv_converter(\n            data,\n            kernel,\n            
tvm_expr.const(0.0),\n            border=\"constant\",\n            stride=factor,\n            padding=[(p, p - 1) for p in padding],\n            dilation=[],\n            groups=c,\n            output_shape=output_shape,\n        )\n\n\n#   # Reduce ops\n\n\ndef sum_reduce_converter(data, axes, normalize, keepdims=True, **kwargs):\n    \"\"\"Sum reduce converter\"\"\"\n\n    if kwargs:\n        __unexpected_attrs(\"sum_reduce\", kwargs)\n\n    out = get_relay_op(\"sum\")(data, axes, keepdims=keepdims)\n    if normalize:\n        return l2_normalization_converter(out, 0, [x - 2 for x in axes], 0.0)\n    return out\n\n\ndef max_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"Max reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"max_reduce\", kwargs)\n\n    return get_relay_op(\"max\")(data, axes, keepdims=keepdims)\n\n\ndef min_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"Min reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"min_reduce\", kwargs)\n\n    return get_relay_op(\"min\")(data, axes, keepdims=keepdims)\n\n\ndef argmax_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"Argmax reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"argmax_reduce\", kwargs)\n\n    return get_relay_op(\"argmax\")(data, axes, keepdims=keepdims)\n\n\ndef argmin_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"Argmin reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"argmin_reduce\", kwargs)\n\n    return get_relay_op(\"argmin\")(data, axes, keepdims=keepdims)\n\n\ndef all_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"All reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"all_reduce\", kwargs)\n\n    return get_relay_op(\"all\")(data, axes, keepdims=keepdims)\n\n\ndef any_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"Any reduce converter\"\"\"\n    if kwargs:\n        
__unexpected_attrs(\"any_reduce\", kwargs)\n\n    return get_relay_op(\"any\")(data, axes, keepdims=keepdims)\n\n\ndef mean_reduce_converter(data, axes, keepdims=True, **kwargs):\n    \"\"\"Mean reduce converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"mean_reduce\", kwargs)\n\n    return get_relay_op(\"mean\")(data, axes, keepdims=keepdims)\n\n\n#   # Tensor shape ops\n\n\ndef reshape_converter(data, shape, axis_start, axis_count, **kwargs):\n    \"\"\"Reshape converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"reshape\", kwargs)\n\n    dshape = list(infer_shape(data))\n    if axis_count == -1:\n        newshape = dshape[:axis_start] + shape\n    else:\n        newshape = dshape\n        newshape[axis_start : axis_start + axis_count] = shape\n\n    return get_relay_op(\"reshape\")(data, newshape)\n\n\ndef squeeze_converter(data, axes, **kwargs):\n    \"\"\"Squeeze converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"squeeze\", kwargs)\n    return relay.squeeze(data, axes)\n\n\ndef unsqueeze_converter(data, axes, **kwargs):\n    \"\"\"Unsqueeze converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"unsqueeze\", kwargs)\n\n    axes = sorted(axes)\n    for axis in axes:\n        if axis < 0 and isinstance(data, tvm_expr.Var):\n            axis = len(data.type_annotation.concrete_shape) + len(axes) + axis\n\n        data = tvm_op.expand_dims(data, axis=axis, num_newaxis=1)\n    return data\n\n\ndef transpose_converter(data, axes, **kwargs):\n    \"\"\"Transpose converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"transpose\", kwargs)\n\n    return get_relay_op(\"transpose\")(data, axes)\n\n\ndef split_converter(data, axis, ratios, **kwargs):\n    \"\"\"Split converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"split\", kwargs)\n\n    axis_len = infer_shape(data)[axis]\n    rat_mul = axis_len / sum(ratios)\n    ratio_list = [(r * rat_mul) for r in ratios]\n\n    s = 0\n    indices = []\n    for rat in 
ratio_list[:-1]:\n        s += rat\n        # Strictly needs int\n        indices.append(int(s))\n\n    return get_relay_op(\"split\")(data, indices, axis)\n\n\ndef concat_converter(*data, axis, **kwargs):\n    \"\"\"Concat converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"concat\", kwargs)\n\n    return get_relay_op(\"concatenate\")(data, axis)\n\n\ndef stack_converter(*data, axis, **kwargs):\n    \"\"\"Stack converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"stack\", kwargs)\n\n    return get_relay_op(\"stack\")(data, axis)\n\n\ndef unstack_converter(data, axis, **kwargs):\n    \"\"\"Unstack converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"unstack\", kwargs)\n\n    split = split_converter(data, axis, [1] * infer_shape(data)[axis])\n    res = []\n    for i in range(len(split)):\n        res.append(squeeze_converter(split[i], axis))\n    return tvm_expr.TupleWrapper(relay.Tuple(res), len(res))\n\n\ndef slice_converter(data, axes, begin, end, stride, **kwargs):\n    \"\"\"Slice converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"slice\", kwargs)\n\n    if not stride:\n        stride = [1] * len(axes)\n\n    return get_relay_op(\"strided_slice\")(data, begin, end, strides=stride, axes=axes)\n\n\ndef pad_converter(data, padding, border, value, **kwargs):\n    \"\"\"Pad converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"pad\", kwargs)\n\n    if border not in [\"constant\", \"replicate\", \"reflect\"]:\n        print(f\"{border} border type is not supported in padding. 
Assumed constant\")\n        border = \"constant\"\n    if border == \"replicate\":\n        border = \"edge\"\n\n    return get_relay_op(\"pad\")(data, padding, value, border)\n\n\ndef tile_converter(data, repeats, **kwargs):\n    \"\"\"Tile converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"tile\", kwargs)\n\n    return get_relay_op(\"tile\")(data, repeats)\n\n\n#   # Region-of-interest ops\n\n\n#   # Matrix multiplication\ndef matmul_converter(a, b, **kwargs):\n    \"\"\"Matmul converter\n    real signature: matmul_converter(a, b, transposeA, transposeB)\"\"\"\n\n    transpose_a = kwargs.pop(\"transposeA\")\n    transpose_b = kwargs.pop(\"transposeB\")\n    if kwargs:\n        __unexpected_attrs(\"matmul\", kwargs)\n\n    a_shape = infer_shape(a)\n    b_shape = infer_shape(b)\n    a_rank = len(a_shape)\n    b_rank = len(b_shape)\n\n    if a_rank == 2 and b_rank == 2:\n        out = get_relay_op(\"matmul\")(a, b, transpose_a=transpose_a, transpose_b=transpose_b)\n    else:\n        batch_shape = [1] * (max(a_rank, b_rank) - 2)\n\n        for i, j in enumerate(reversed(a_shape[:-2])):\n            batch_shape[i] = j\n\n        for i, j in enumerate(reversed(b_shape[:-2])):\n            # Need to check if axis can be broadcasted\n            if batch_shape[i] == 1 or j == 1 or batch_shape[i] == j:\n                batch_shape[i] = max(batch_shape[i], j)\n            else:\n                msg = \"Batch dimensions are not broadcastable.\"\n                raise AssertionError(msg)\n\n        batch_shape = batch_shape[::-1]\n\n        a = tvm_op.broadcast_to(a, batch_shape + list(a_shape[-2:]))\n        b = tvm_op.broadcast_to(b, batch_shape + list(b_shape[-2:]))\n\n        out = get_relay_op(\"batch_matmul\")(\n            tvm_op.reshape(a, [-1, *a_shape[-2:]]),\n            tvm_op.reshape(b, [-1, *b_shape[-2:]]),\n            transpose_b=transpose_b,\n            transpose_a=transpose_a,\n        )\n\n        out_shape = batch_shape + [a_shape[-2]] + 
[b_shape[-1]]\n        out = tvm_op.reshape(out, out_shape)\n\n    return out\n\n\n#   # Variable updates\n#   # Compound ops\n\n\ndef sigmoid_converter(data, **kwargs):\n    \"\"\"Sigmoid converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"sigmoid\", kwargs)\n\n    return get_relay_op(\"sigmoid\")(data)\n\n\ndef relu_converter(data, **kwargs):\n    \"\"\"RELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"relu\", kwargs)\n\n    return get_relay_op(\"relu\")(data)\n\n\ndef prelu_converter(data, alpha, **kwargs):\n    \"\"\"PRELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"prelu\", kwargs)\n\n    # prelu can\"t handle float vals but NNEF supports direct parameter, this is just in case\n    if isinstance(alpha, tvm_expr.Constant):\n        if alpha.data.numpy().size == 1:\n            return get_relay_op(\"leaky_relu\")(data, alpha.data.numpy().item())\n\n    return get_relay_op(\"prelu\")(data, alpha)\n\n\ndef leaky_relu_converter(data, alpha, **kwargs):\n    \"\"\"Leaky RELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"leaky_relu\", kwargs)\n\n    return get_relay_op(\"leaky_relu\")(data, alpha)\n\n\ndef elu_converter(data, alpha, **kwargs):\n    \"\"\"ELU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"elu\", kwargs)\n\n    return select_converter(\n        lt_converter(data, tvm_expr.const(0.0)),\n        mul_converter(\n            tvm_expr.const(alpha), sub_converter(exp_converter(data), tvm_expr.const(1.0))\n        ),\n        data,\n    )\n\n\ndef selu_converter(data, alpha, **kwargs):\n    \"\"\"SELU converter\n    True signature is selu_converter(data, alpha, lambda)\"\"\"\n    lambda_var = kwargs.pop(\"lambda\")\n\n    if kwargs:\n        __unexpected_attrs(\"selu\", kwargs)\n\n    return mul_converter(\n        tvm_expr.const(lambda_var),\n        select_converter(\n            data < tvm_expr.const(0.0),\n            mul_converter(\n                tvm_expr.const(alpha), 
sub_converter(exp_converter(data), tvm_expr.const(1.0))\n            ),\n            data,\n        ),\n    )\n\n\ndef gelu_converter(data, **kwargs):\n    \"\"\"GELU converter\n    NNEF definition for GELU:\n    the exact definition of GELU is x * Phi(x) where Phi(x) is the\n    CDF of the standard normal distribution, which can be approximated\n    for example by sigmoid(1.702 * x)\n\n    `mul_converter(data, sigmoid_converter(mul_converter(tvm_expr.const(1.702), data)))`\n\n    But in this case we will use the erf to calculate normcdf (same as to pytorch GELU impl)\n    \"\"\"\n    if kwargs:\n        __unexpected_attrs(\"gelu\", kwargs)\n\n    return data * (\n        tvm_expr.const(0.5) + tvm_op.erf(data * tvm_expr.const(0.5**0.5)) * tvm_expr.const(0.5)\n    )\n\n\ndef silu_converter(data, **kwargs):\n    \"\"\"SiLU converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"silu\", kwargs)\n\n    return mul_converter(data, sigmoid_converter(data))\n\n\ndef softmax_converter(data, axes, **kwargs):\n    \"\"\"Softmax converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"softmax\", kwargs)\n\n    if len(axes) > 1:\n        print(\"Multiple axes not supported, operation has been done along the first axis in axes.\")\n    axis = axes[0]\n\n    return get_relay_op(\"softmax\")(data, axis)\n\n\ndef softplus_converter(data, **kwargs):\n    \"\"\"Softplus converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"softplus\", kwargs)\n\n    return log_converter(add_converter(exp_converter(data), tvm_expr.const(1.0)))\n\n\n#   # linear ops\n\n\ndef linear_converter(data, _filter, bias, **kwargs):\n    \"\"\"Linear converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"linear\", kwargs)\n\n    out = get_relay_op(\"matmul\")(data, _filter, transpose_b=True)\n    res = None\n\n    if isinstance(bias, tvm_expr.Constant):\n        if (bias.data.numpy() == 0).all():\n            res = out\n\n    if not res:\n        # squeeze needed because nnef has bias 
of shape [1, channel]\n        res = tvm_op.nn.bias_add(out, relay.squeeze(bias, axis=0))\n\n    return res\n\n\ndef separable_conv_converter(\n    data, plane_filter, point_filter, bias, border, padding, stride, dilation, groups, **kwargs\n):\n    \"\"\"Separable convolution converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"separable_conv\", kwargs)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    filtered = conv_converter(\n        data, plane_filter, tvm_expr.const(0, dtype=d_type), border, stride, padding, dilation, 0\n    )\n\n    return conv_converter(filtered, point_filter, bias, \"constant\", [], [], [], groups)\n\n\ndef separable_deconv_converter(\n    data,\n    plane_filter,\n    point_filter,\n    bias,\n    border,\n    padding,\n    stride,\n    dilation,\n    output_shape,\n    groups,\n    **kwargs,\n):\n    \"\"\"Separable deconvolution converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"separable_deconv\", kwargs)\n\n    if isinstance(data, relay.Call):\n        d_type = infer_type(data).checked_type.dtype\n    else:\n        d_type = data.type_annotation.dtype\n\n    filtered = deconv_converter(\n        data, point_filter, tvm_expr.const(0, dtype=d_type), \"constant\", [], [], [], [], groups\n    )\n\n    return deconv_converter(\n        filtered, plane_filter, bias, border, stride, padding, dilation, output_shape, 0\n    )\n\n\ndef max_pool_converter(data, size, border, padding, stride, dilation, **kwargs):\n    \"\"\"Max pool converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"max_pool\", kwargs)\n\n    if border != \"constant\":\n        print(f\"Currently {border} border is not supported, used `constant` border\")\n\n    dshape = infer_shape(data)\n    rank = len(dshape)\n\n    pool_size = _size_conv(size, rank)\n    strides = _stride_conv(stride, rank) if stride else (1,) * (rank - 2)\n\n    dilation = 
dilation if dilation else ((1,) * (rank - 2))\n\n    if not padding:\n        # padding is truncated to `conv style` (only active layers are present)\n        padding = _calculate_nnef_padding(dshape[2:], strides, pool_size, dilation)\n\n    pad = _padding_conv(padding, rank)\n\n    if border == \"constant\":\n        padding = [(0, 0), (0, 0)] + padding\n        data = pad_converter(data, padding, border, tvm_expr.const(0.0))\n        pad = (0, 0)\n\n    op = get_relay_op(dimension_picker(\"max_pool\", dshape))\n    return op(\n        data,\n        pool_size=pool_size,\n        strides=strides,\n        dilation=dilation,\n        padding=pad,\n    )\n\n\ndef avg_pool_converter(data, size, border, padding, stride, dilation, **kwargs):\n    \"\"\"Avg pool converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"avg_pool\", kwargs)\n\n    if border not in [\"constant\", \"ignore\"]:\n        print(f\"Currently {border} border is not supported, used `constant` border\")\n\n    dshape = infer_shape(data)\n    rank = len(dshape)\n    pool_size = _size_conv(size, rank)\n    strides = _stride_conv(stride, rank) if stride else (1,) * (rank - 2)\n\n    dilation = dilation if dilation else ((1,) * (rank - 2))\n\n    # padding is truncated to `conv style` (only active layers are present)\n    active_shape = dshape[2:]\n    if not padding:\n        padding = _calculate_nnef_padding(active_shape, strides, pool_size, dilation)\n\n    pad = _padding_conv(padding, rank)\n\n    op = get_relay_op(dimension_picker(\"avg_pool\", dshape))\n    return op(\n        data,\n        pool_size=pool_size,\n        strides=strides,\n        dilation=dilation,\n        padding=pad,\n        count_include_pad=border != \"ignore\",\n    )\n\n\ndef rms_pool_converter(data, size, border, padding, stride, dilation, **kwargs):\n    \"\"\"Rms pool converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"rms_pool\", kwargs)\n\n    return sqrt_converter(\n        avg_pool_converter(\n       
     sqr_converter(data),\n            size=size,\n            border=border,\n            padding=padding,\n            stride=stride,\n            dilation=dilation,\n        )\n    )\n\n\n#   # Normalization\n\n\ndef local_response_normalization_converter(data, size, alpha, beta, bias):\n    \"\"\"LRN converter\"\"\"\n    axis = [i for i in range(len(size)) if size[i] > 1]\n    if len(axis) == 1:\n        axis = axis[0]\n    else:\n        print(\"Multi axis LRN is not implemented properly, using first axis where size != 1\")\n        axis = axis[0]\n    size = size[axis]\n    return get_relay_op(\"lrn\")(data, size, axis, bias, alpha, beta)\n\n\ndef local_mean_normalization_converter(data, size, **kwargs):\n    \"\"\"LMN converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"local_mean_normalization\", kwargs)\n\n    mean = box_converter(data, size, \"constant\", [], [], [], normalize=True)\n    return sub_converter(data, mean)\n\n\ndef local_variance_normalization_converter(data, size, bias, epsilon, **kwargs):\n    \"\"\"LVN converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"local_variance_normalization\", kwargs)\n\n    sigma = box_converter(sqr_converter(data), size, \"constant\", [], [], [], normalize=True)\n    return div_converter(\n        data,\n        max_converter(\n            add_converter(sqrt_converter(sigma), tvm_expr.const(bias)), tvm_expr.const(epsilon)\n        ),\n    )\n\n\ndef local_contrast_normalization_converter(data, size, bias, epsilon, **kwargs):\n    \"\"\"LCN converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"local_contrast_normalization\", kwargs)\n\n    centered = local_mean_normalization_converter(data, size)\n    return local_variance_normalization_converter(centered, size, bias, epsilon)\n\n\ndef l1_normalization_converter(data, axes, bias, epsilon, **kwargs):\n    \"\"\"L1 norm converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"l1_normalization\", kwargs)\n\n    sigma = 
sum_reduce_converter(abs_converter(data), axes, False)\n    return div_converter(\n        data, max_converter(add_converter(sigma, tvm_expr.const(bias)), tvm_expr.const(epsilon))\n    )\n\n\ndef l2_normalization_converter(data, axes, bias, epsilon, **kwargs):\n    \"\"\"L2 norm converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"l2_normalization\", kwargs)\n\n    epsilon = epsilon**2\n    if bias != 0.0:\n        print(\"Bias is not supported, assumed 0.0.\")\n    #     data = add_converter(data, tvm_expr.const(bias))\n\n    return get_relay_op(\"l2_normalize\")(data, epsilon, axes)\n\n\n# ok ish\n\n\ndef batch_normalization_converter(data, mean, variance, offset, scale, epsilon, **kwargs):\n    \"\"\"Batch norm converter\"\"\"\n    if kwargs:\n        __unexpected_attrs(\"batch_normalization\", kwargs)\n\n    mean = squeeze_converter(mean, 0)\n    variance = squeeze_converter(variance, 0)\n    offset = squeeze_converter(offset, 0)\n    scale = squeeze_converter(scale, 0)\n\n    return get_relay_op(\"batch_norm\")(data, scale, offset, mean, variance, epsilon=epsilon)[0]\n\n\n#   # Misc ops\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/generate.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport numpy as np\nimport nnef\nimport sys\nimport os\n\n\ndef _is_lambda(value):\n    LAMBDA = lambda: 0\n    return isinstance(value, type(LAMBDA)) and value.__name__ == LAMBDA.__name__\n\n\ndef _ensure_lambda(value):\n    return value() if not _is_lambda(value) else value\n\n\ndef uniform(min=0.0, max=1.0):\n    return lambda shape: np.random.uniform(min, max, shape).astype(np.float32)\n\n\ndef normal(mean=0.0, std=1.0):\n    return lambda shape: np.random.normal(mean, std, shape).astype(np.float32)\n\n\ndef bernoulli(prob=0.5):\n    return lambda shape: np.random.uniform(0.0, 1.0, shape) > prob\n\n\ndef integers(min=0, max=100):\n    return lambda shape: np.random.randint(min, max, shape).astype(np.int32)\n\n\ndef main(args):\n    if args.seed is not None:\n        np.random.seed(args.seed)\n\n    distributions = {\n        'scalar': uniform(0.0, 1.0),\n        'integer': integers(0, 100),\n        'logical': bernoulli(0.5),\n    }\n\n    try:\n        random = eval(args.random)\n        if isinstance(random, dict):\n            distributions.update({key: _ensure_lambda(value) for key, value in random.items()})\n        else:\n            random = _ensure_lambda(random)\n            if args.random.startswith('integers'):\n                distributions['integer'] = random\n            elif args.random.startswith('bernoulli'):\n     
           distributions['logical'] = random\n            else:\n                distributions['scalar'] = random\n    except Exception as e:\n        print(\"Could not evaluate distribution: \" + str(e), file=sys.stderr)\n        return -1\n\n    graph = nnef.parse_file(os.path.join(args.model, 'graph.nnef'))\n\n    for op in graph.operations:\n        if args.weights and op.name == 'variable':\n            label = op.attribs['label']\n            shape = op.attribs['shape']\n            data = distributions[op.dtype](shape)\n            filename = os.path.join(args.model, label + '.dat')\n\n            os.makedirs(os.path.split(filename)[0], exist_ok=True)\n            with open(filename, 'wb') as file:\n                nnef.write_tensor(file, data)\n\n            if args.verbose:\n                print(\"Generated weight '{}'\".format(filename))\n\n        if args.inputs and op.name == 'external':\n            name = op.outputs['output']\n            shape = op.attribs['shape']\n            data = distributions[op.dtype](shape)\n            filename = os.path.join(args.model, args.inputs, name + '.dat')\n\n            os.makedirs(os.path.split(filename)[0], exist_ok=True)\n            with open(filename, 'wb') as file:\n                nnef.write_tensor(file, data)\n\n            if args.verbose:\n                print(\"Generated input '{}'\".format(filename))\n\n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('model', type=str,\n                        help='The model to generate')\n    parser.add_argument('--random', type=str, required=True,\n                        help='Random distribution for input generation, possibly per dtype')\n    parser.add_argument('--seed', type=int, default=None,\n                        help='Random seed for input generation')\n    parser.add_argument('--weights', action='store_true',\n                        help='Generate weights')\n    
parser.add_argument('--inputs', type=str, nargs='?', default=None, const='.',\n                        help='Generate inputs')\n    parser.add_argument('--verbose', action='store_true',\n                        help='Whether to print generated file names')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/gmac.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nfrom functools import reduce\nfrom .io.nnef import Reader\n\n\ndef _volume(shape):\n    return reduce(lambda x, y: x * y, shape, 1)\n\n\ndef _count_macs(op, include_pooling, include_upsampling, include_normalization, include_reduction):\n    if len(op.inputs) == 0 or len(op.outputs) == 0:\n        return 0\n    \n    input_volume = _volume(op.inputs[0].shape)\n    output_volume = _volume(op.outputs[0].shape)\n\n    if op.type in ['conv', 'deconv']:\n        volume = input_volume if op.type == 'deconv' else output_volume\n        filter_shape = op.inputs[1].shape\n        return volume * _volume(filter_shape[1:])\n    elif op.type in ['separable_conv', 'separable_deconv']:\n        volume = input_volume if op.type == 'separable_deconv' else output_volume\n        filter_shape = op.inputs[1].shape\n        inter_channels = filter_shape[0]\n        inter_volume = output_volume / op.outputs[0].shape[1] * inter_channels if op.type == 'separable_deconv' else \\\n                        input_volume / op.inputs[0].shape[1] * inter_channels\n        return inter_volume * _volume(filter_shape[2:]) + volume * inter_channels\n    elif op.type == 'linear':\n        filter_shape = op.inputs[1].shape\n        return output_volume * filter_shape[-1]\n    elif op.type == 'matmul':\n        filter_shape = op.inputs[1].shape\n        return 
output_volume * (filter_shape[-1] if op.attribs['transposeB'] else filter_shape[-2])\n    elif op.type in ['max_pool', 'avg_pool', 'rms_pool', 'max_pool_with_index', 'box', 'debox'] and include_pooling:\n        volume = input_volume if op.type == 'debox' else output_volume\n        kernel_size = op.attribs['size']\n        return volume * _volume(kernel_size)\n    elif op.type == 'multilinear_upsample' and include_upsampling:\n        factor = op.attribs['factor']\n        method = op.attribs['method']\n        if method == 'symmetric':\n            kernel_size = [2 * f for f in factor]\n        elif method == 'asymmetric':\n            kernel_size = [2 * f - 1 for f in factor]\n        else:\n            kernel_size = factor\n        return input_volume * _volume(kernel_size)\n    elif op.type in ['local_response_normalization', 'local_mean_normalization',\n                     'local_variance_normalization', 'local_contrast_normalization'] and include_normalization:\n        kernel_size = op.attribs['size']\n        return output_volume * _volume(kernel_size)\n    elif op.type in ['l1_normalization', 'l2_normalization', 'batch_normalization'] and include_normalization:\n        return output_volume\n    elif op.type in ['sum_reduce', 'max_reduce', 'min_reduce',\n                     'mean_reduce', 'all_reduce', 'any_reduce'] and include_reduction:\n        return input_volume\n    else:\n        return 0\n\n\ndef get_custom_shapes(module_names):\n    import importlib\n\n    CUSTOM_SHAPES = \"CUSTOM_SHAPES\"\n\n    shapes = {}\n    for module_name in module_names:\n        module = importlib.import_module(module_name)\n        if hasattr(module, CUSTOM_SHAPES):\n            shapes.update(getattr(module, CUSTOM_SHAPES))\n\n    return shapes\n\n\ndef main(args):\n    custom_shapes = get_custom_shapes(args.custom_shapes) if args.custom_shapes is not None else None\n    reader = Reader(infer_shapes=True, custom_shapes=custom_shapes)\n    graph = 
reader(args.model)\n\n    macs = 0\n    for op in graph.operations:\n        macs += _count_macs(op, args.include_pooling, args.include_upsampling,\n                            args.include_normalization, args.include_reduction)\n\n    volume = 0\n    for tensor in graph.tensors:\n        volume += _volume(tensor.shape)\n\n    gmacs = macs / 1000 / 1000 / 1000\n    mbytes = volume * 4 / 1000 / 1000\n    print('GMACs = {}'.format(gmacs))\n    print('Total memory in Mbytes (assuming float32) = {}'.format(mbytes))\n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('model', type=str,\n                        help='The model to count MACs for')\n    parser.add_argument('--include-pooling', action='store_true',\n                        help='Whether to include pooling operations in the calculation')\n    parser.add_argument('--include-upsampling', action='store_true',\n                        help='Whether to include (linear) upsampling operations in the calculation')\n    parser.add_argument('--include-normalization', action='store_true',\n                        help='Whether to include normalization operations in the calculation')\n    parser.add_argument('--include-reduction', action='store_true',\n                        help='Whether to include reduction operations in the calculation')\n    parser.add_argument('--custom-shapes', type=str, nargs='+',\n                        help='Module(s) containing custom shape inference code (when converting to NNEF)')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/image_tensor.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .utils import stdio\nimport numpy as np\nimport argparse\nimport nnef\nimport sys\nimport skimage\nimport skimage.io\nimport skimage.color\nimport skimage.transform\nimport glob\nimport os\n\n\ndef transform_image(img, color, range, mean, std, size, dtype, data_format):\n    img = img.astype(np.float32) / 255.0\n\n    if color.upper() == 'RGB':\n        img = img[..., (0, 1, 2)]  # remove alpha channel if present\n    else:\n        img = img[..., (2, 1, 0)]\n\n    if range is not None:\n        min = np.array(range[0], dtype=np.float32)\n        max = np.array(range[1], dtype=np.float32)\n        img *= max - min\n        img += min\n\n    if mean is not None:\n        mean = np.array(mean, dtype=np.float32)\n        img -= mean\n\n    if std is not None:\n        std = np.array(std, dtype=np.float32)\n        img /= std\n\n    if size is not None:\n        img = skimage.transform.resize(img, size,\n                                       preserve_range=True,\n                                       anti_aliasing=True,\n                                       mode='reflect')\n    if dtype is not None:\n        img = img.astype(dtype)\n\n    if data_format.upper() == 'NCHW':\n        img = img.transpose((2, 0, 1))\n\n    return img\n\n\ndef main(args):\n    if args.output is None:\n        if not stdio.is_stdout_piped():\n            print(\"Output 
must be piped\", file=sys.stderr)\n            return -1\n        stdio.set_stdout_to_binary()\n\n    images = []\n    for pattern in args.images:\n        filenames = sorted(glob.glob(os.path.expanduser(pattern)))\n        assert filenames, \"No files found for path: {}\".format(pattern)\n        for filename in filenames:\n            img = skimage.img_as_ubyte(skimage.io.imread(filename))\n            if len(img.shape) == 2:\n                img = skimage.color.gray2rgb(img)\n\n            img = transform_image(img, args.color, args.range, args.mean, args.std, args.size,\n                                  np.dtype(args.dtype), args.format)\n            images.append(img)\n\n    if not all(img.shape == images[0].shape for img in images):\n        print(\"The size of all images must be the same, or --size must be specified\", file=sys.stderr)\n        return -1\n\n    tensor = np.stack(images)\n\n    if args.output is not None:\n        with open(args.output, 'wb') as file:\n            nnef.write_tensor(file, tensor)\n    else:\n        nnef.write_tensor(sys.stdout, tensor)\n    \n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('images', type=str, nargs='+',\n                        help='The path(s) of images to turn into a tensor; may include wildcard expressions')\n    parser.add_argument('--size', type=int, nargs=2, default=None,\n                        help='The spatial size of the resulting tensor')\n    parser.add_argument('--dtype', type=str, default='float32',\n                        help='The data-type of the resulting tensor')\n    parser.add_argument(\"--color\", type=str.upper, choices=['RGB', 'BGR'], default='RGB',\n                        help=\"The resulting color-format\")\n    parser.add_argument(\"--format\", type=str.upper, choices=['NCHW', 'NHWC'], default='NCHW',\n                        help=\"The resulting data-format\")\n    parser.add_argument(\"--range\", type=float, 
nargs=2, default=[0, 1],\n                        help=\"Resulting range for representing the image\")\n    parser.add_argument(\"--mean\", type=float, nargs='+', default=None,\n                        help=\"Mean to subtract from the image; may be per-channel\")\n    parser.add_argument(\"--std\", type=float, nargs='+', default=None,\n                        help=\"Standard deviation to divide the image with; may be per-channel\")\n    parser.add_argument('--output', type=str, default=None,\n                        help='File name to save the result into')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/interpreter/__init__.py",
"content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport math\n\n\nclass Statistics:\n\n    def __init__(self, num, min, max, sum, ssum):\n        self.num = num\n        self.min = min\n        self.max = max\n        self.sum = sum\n        self.ssum = ssum\n\n    def __add__(self, other):\n        # return a new instance rather than mutating self, so that 'a + b' has no side effect on 'a'\n        return Statistics(num=self.num + other.num,\n                          min=min(self.min, other.min),\n                          max=max(self.max, other.max),\n                          sum=self.sum + other.sum,\n                          ssum=self.ssum + other.ssum)\n\n    def mean(self):\n        return self.sum / self.num if self.num != 0 else 0.0\n\n    def variance(self, unbiased=True):\n        if self.num <= 1:\n            return 0.0\n\n        count = self.num - 1 if unbiased else self.num\n        return self.ssum / count - self.sum * self.sum / (self.num * count)\n\n    def std(self, unbiased=True):\n        return math.sqrt(max(self.variance(unbiased), 0))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/interpreter/pytorch/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nimport torch\nimport nnef\nimport os\n\nfrom .nnef_module import NNEFModule\nfrom .. import Statistics\n\n\nclass Interpreter:\n\n    def __init__(self, model, device=None, decomposed=None, custom_operators=None):\n        if isinstance(model, nnef.Graph):\n            self._nnef_graph = model\n        else:\n            self._nnef_graph = nnef.parse_file(os.path.join(model, 'graph.nnef'), lowered=decomposed)\n        self._init_input_shapes(self._nnef_graph)\n\n        self._nnef_module = NNEFModule(model=model, custom_operators=custom_operators, decomposed=decomposed)\n\n        if device is None:\n            device = 'cuda' if torch.cuda.is_available() else 'cpu'\n\n        self._nnef_module.to(device)\n        self._nnef_module.eval()\n        self._device = device\n\n    def __call__(self, inputs, output_names=None, collect_statistics=False):\n        outputs = {}\n        statistics = {} if collect_statistics else None\n\n        def callback(name, tensor):\n            if output_names is not None and name in output_names:\n                outputs[name] = tensor.detach().cpu().numpy()\n            if collect_statistics:\n                statistics[name] = self._compute_statistics(tensor)\n\n        if output_names is not None:\n            assert all(name in 
self._nnef_graph.tensors for name in output_names), \\\n                \"could not find tensor(s) named {}\".format({name for name in output_names\n                                                            if name not in self._nnef_graph.tensors})\n\n        if output_names is not None or collect_statistics:\n            self._nnef_module.activation_callback = callback\n\n        torch_inputs = [torch.tensor(input).to(self._device) for input in inputs]\n        with torch.no_grad():  # Without this, gradients are calculated even in eval mode\n            torch_outputs = self._nnef_module.forward(*torch_inputs)\n\n        self._nnef_module.activation_callback = None\n\n        if output_names is None:\n            outputs = {name: torch_tensor.detach().cpu().numpy()\n                       for name, torch_tensor in zip(self._nnef_graph.outputs, torch_outputs)}\n\n        return (outputs, statistics) if collect_statistics else outputs\n\n    def input_details(self):\n        return [self._nnef_graph.tensors[name] for name in self._nnef_graph.inputs]\n\n    def output_details(self):\n        return [self._nnef_graph.tensors[name] for name in self._nnef_graph.outputs]\n\n    def tensor_details(self):\n        return self._nnef_graph.tensors.values()\n\n    @staticmethod\n    def _compute_statistics(torch_tensor):\n        num = torch_tensor.numel()\n        if num == 0:\n            return Statistics(num=0, min=0.0, max=0.0, sum=0.0, ssum=0.0)\n        else:\n            return Statistics(\n                num=num,\n                min=float(torch.min(torch_tensor)),\n                max=float(torch.max(torch_tensor)),\n                sum=float(torch.sum(torch_tensor)),\n                ssum=float(torch.sum(torch_tensor * torch_tensor)),\n            )\n\n    @staticmethod\n    def _init_input_shapes(graph):\n        from nnef.shapes import _set_shape\n        for op in graph.operations:\n            if op.name == 'external':\n                _set_shape(graph, 
op.outputs['output'], op.attribs['shape'])\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/interpreter/pytorch/nnef_module.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nimport nnef\nimport torch\nimport keyword\n\nfrom . import nnef_operators\nfrom ...io import nnef as nnef_io\nfrom ...io.nnef.reader import _build_graph\nfrom ...model.graph import *\n\n\nclass NNEFModule(torch.nn.Module):\n\n    \"\"\"\n    A torch.nn.Module that interprets the given NNEF model\n    \"\"\"\n\n    def __init__(self,\n                 model,  # type: str\n                 decomposed=None,  # type: typing.Optional[typing.List[str]]\n                 custom_operators=None,  # type: typing.Optional[typing.Dict[str, typing.Callable]]\n                 activation_callback=None,  # type: typing.Optional[typing.Callable[[str, torch.Tensor], None]]\n                 training_attributes=None,  # type: typing.Optional[typing.Dict[str, typing.Dict[str, typing.Any]]]\n                 ):\n        # type: (...)->None\n        \"\"\"\n            nnef_graph might be modified by this class if training and write_nnef is used\n        \"\"\"\n        super(NNEFModule, self).__init__()\n        if isinstance(model, nnef.Graph):\n            self._nnef_graph = _build_graph(model)\n        else:\n            reader = nnef_io.Reader(decomposed=decomposed, infer_shapes=False)\n            self._nnef_graph = reader(model)\n\n        self._name_inline_constants(self._nnef_graph)\n\n        for 
nnef_tensor in self._nnef_graph.tensors:\n            if self._is_variable(nnef_tensor):\n                name = self._registered_name(nnef_tensor.name)\n                data = self._dequantize(nnef_tensor.data, nnef_tensor.quant, channel_axis=0) \\\n                    if nnef_tensor.quant else nnef_tensor.data\n                data = self.normalize_dtype(data)\n                self.register_parameter(name, torch.nn.Parameter(torch.tensor(data), requires_grad=data.dtype == np.float32))\n            elif self._is_constant(nnef_tensor):\n                name = self._registered_name(nnef_tensor.name)\n                data = nnef_tensor.data if not nnef_tensor.producer else \\\n                    self._as_numpy(nnef_tensor.producer.attribs['value'],\n                                   nnef_tensor.producer.attribs['shape'],\n                                   nnef_tensor.producer.attribs['dtype'])\n                data = self.normalize_dtype(data)\n                self.register_buffer(name, torch.tensor(data))\n\n        self._operators = {}\n        self._operators.update(nnef_operators.Operators)\n        if custom_operators:\n            self._operators.update(custom_operators)\n        self._activation_callback = activation_callback\n        self._training_attributes = training_attributes or {}\n\n    def forward(self, *inputs):\n        assert len(inputs) == len(self._nnef_graph.inputs)\n        activations = {nnef_tensor.name: torch_tensor for torch_tensor, nnef_tensor\n                       in zip(inputs, self._nnef_graph.inputs)}\n\n        def get_tensor(name):\n            if hasattr(self, self._registered_name(name)):\n                return getattr(self, self._registered_name(name))\n            else:\n                return activations[name]\n\n        def has_tensor(name):\n            return hasattr(self, self._registered_name(name)) or name in activations\n\n        for op in self._nnef_graph.operations:\n            if op.type == 'external' or 
op.type == 'variable' or op.type == 'constant':\n                output = get_tensor(op.output.name)\n                if self._activation_callback:\n                    self._activation_callback(op.output.name, output)\n            else:\n                assert op.type in self._operators, \"Unsupported operation: {}\".format(op.type)\n                func = self._operators[op.type]\n\n                assert all(has_tensor(tensor.name) for tensor in op.inputs),\\\n                    \"could not fetch input tensor(s) {} for operation {}\"\\\n                        .format({tensor.name for tensor in op.inputs}, op.type)\n\n                training_attribs = self._training_attributes.get(op.type, {})\n                attribs = {**op.attribs, **training_attribs}\n                attribs = {self._escape_keyword(name): value for name, value in attribs.items()}\n\n                if 'dtype' in attribs and op.type != 'constant' and op.type != 'cast':\n                    del attribs['dtype']\n\n                inputs = [get_tensor(tensor.name) if tensor.name else torch.tensor(tensor.data) for tensor in op.inputs]\n                outputs = func(*inputs, **attribs) if isinstance(op.inputs, tuple) else func(inputs, **attribs)\n\n                if not isinstance(outputs, (list, tuple)):\n                    outputs = (outputs,)\n\n                for nnef_tensor, output in zip(op.outputs, outputs):\n                    if nnef_tensor.quant and not self._is_variable(nnef_tensor):\n                        output = self._fake_quantize(output, nnef_tensor.quant, channel_axis=0)\n\n                    activations[nnef_tensor.name] = output\n                    if self._activation_callback:\n                        self._activation_callback(nnef_tensor.name, output)\n\n                for nnef_tensor in op.inputs:\n                    if nnef_tensor.name in activations and op is nnef_tensor.consumers[-1] and \\\n                            nnef_tensor not in self._nnef_graph.outputs:\n                        del activations[nnef_tensor.name]\n\n        return tuple(get_tensor(nnef_tensor.name) for nnef_tensor in self._nnef_graph.outputs)\n\n    def save_nnef(self, path):\n        for nnef_tensor in self._nnef_graph.tensors:\n            if self._is_variable(nnef_tensor):\n                torch_tensor = getattr(self, self._registered_name(nnef_tensor.name))\n                nnef_tensor.data = torch_tensor.detach().cpu().numpy().astype(nnef_tensor.dtype)\n\n        writer = nnef_io.Writer()\n        writer(self._nnef_graph, path)\n\n    @property\n    def activation_callback(self):\n        return self._activation_callback\n\n    @activation_callback.setter\n    def activation_callback(self, callback):\n        self._activation_callback = callback\n\n    @staticmethod\n    def _is_variable(tensor):\n        return tensor.producer and tensor.producer.type == 'variable'\n\n    @staticmethod\n    def _is_constant(tensor):\n        return not tensor.producer or tensor.producer.type == 'constant'\n\n    @staticmethod\n    def _as_numpy(value, shape, dtype):\n        if isinstance(value, list):\n            if len(value) == 1 and int(np.prod(shape)) != 1:\n                return np.full(shape, value[0], dtype=dtype)\n            else:\n                return np.array(value, dtype=dtype).reshape(shape)\n        else:\n            return np.full(shape, value, dtype=dtype)\n\n    @staticmethod\n    def _escape_keyword(name):\n        return name if not keyword.iskeyword(name) else '_' + name + '_'\n\n    @staticmethod\n    def _name_inline_constants(graph):\n        constants = 0\n        for tensor in graph.tensors:\n            if not tensor.name:\n                assert not tensor.producer\n                tensor.name = '$' + str(constants)\n                constants += 1\n\n    @staticmethod\n    def _registered_name(name):\n        return '_nnef_' + name\n\n    @staticmethod\n    def normalize_dtype(data):\n        dtype = NNEFModule._dtypeRemap.get(data.dtype.type)\n        return data.astype(dtype) if dtype is not None else data\n\n    @staticmethod\n    def _dequantize(data, quant, channel_axis):\n        op_name = quant['op-name']\n        rank = len(data.shape)\n        if op_name == 'zero_point_linear_quantize':\n            return NNEFModule._dequantize_zero_point(data,\n                                                     NNEFModule._ensure_rank(quant['zero_point'], rank, channel_axis),\n                                                     NNEFModule._ensure_rank(quant['scale'], rank, channel_axis))\n        elif op_name == 'min_max_linear_quantize' or op_name == 'linear_quantize':\n            return NNEFModule._dequantize_min_max(data,\n                                                  NNEFModule._ensure_rank(quant['min'], rank, channel_axis),\n                                                  NNEFModule._ensure_rank(quant['max'], rank, channel_axis),\n                                                  quant['signed'], quant['symmetric'], quant['bits'])\n        else:\n            raise ValueError(\"Quantization operation '{}' not implemented\".format(op_name))\n\n    @staticmethod\n    def _dequantize_zero_point(data, zero_point, scale):\n        return (data - zero_point) * scale\n\n    @staticmethod\n    def _dequantize_min_max(data, min, max, signed, symmetric, bits):\n        if signed:\n            data += 2 ** (bits - 1) - int(symmetric)\n        r = 2 ** bits - 1 - int(signed and symmetric)\n        return data * ((max - min) / r) + min\n\n    def _fake_quantize(self, tensor, quant, channel_axis):\n        op_type = quant['op-name']\n        rank = len(tensor.shape)\n        attribs = {key: NNEFModule._ensure_rank(value, rank, channel_axis) if isinstance(value, np.ndarray) else value\n                   for key, value in quant.items() if key != 'op-name'}\n\n        assert op_type in self._operators, \"Unsupported quantization operation: {}\".format(op_type)\n        func = self._operators[op_type]\n        return func(tensor, **attribs)\n\n    @staticmethod\n    def _ensure_rank(value, rank, offset=0):\n        array = np.array(value)\n        return np.reshape(array, newshape=(1,) * offset + array.shape + (1,) * (rank - offset - len(array.shape)))\n\n    _dtypeRemap = {\n        np.float16: np.float32,\n        np.float64: np.float32,\n        np.int8: np.int64,\n        np.uint8: np.int64,\n        np.int16: np.int64,\n        np.uint16: np.int64,\n        np.int32: np.int64,\n        np.uint32: np.int64,\n        np.uint64: np.int64,\n    }\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/interpreter/pytorch/nnef_operators.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom typing import Optional, List, Tuple, Callable, Any\nfrom functools import reduce\nimport numpy as np\nimport functools\nimport torch\nimport torch.nn.functional as F\nimport nnef\nimport math\n\n\n# Helpers\n\n\n_numpy_dtype_to_torch = {\n    np.int8: torch.int8,\n    np.int16: torch.int16,\n    np.int32: torch.int32,\n    np.int64: torch.int64,\n    np.uint8: torch.uint8,\n    np.double: torch.double,\n    np.float16: torch.float16,\n    np.float32: torch.float32,\n    np.float64: torch.float64,\n    np.short: torch.short,\n    np.longlong: torch.long,\n    int: torch.int,\n    bool: torch.bool,\n    float: torch.float,\n}\n\n\ndef _clamp(x, a, b):\n    return max(a, min(b, x))\n\n\ndef _expand_to_rank(input, rank):\n    # type: (torch.Tensor, int)->torch.Tensor\n    rank_diff = rank - len(input.shape)\n    return input.reshape(tuple(input.shape) + rank_diff * (1,))\n\n\ndef _expand_binary(input1, input2):\n    # type: (torch.Tensor, torch.Tensor)->Tuple[torch.Tensor, torch.Tensor]\n    rank = max(len(input1.shape), len(input2.shape))\n    return _expand_to_rank(input1, rank), _expand_to_rank(input2, rank)\n\n\ndef _binary(f):\n    def g(x, y):\n        x, y = _expand_binary(x, y)\n        return f(x, y)\n\n    return g\n\n\ndef _prod(items):\n    return functools.reduce(lambda 
x, y: x * y, items, 1)\n\n\ndef _same_padding(input, filter, stride, dilation):\n    assert len(input) == len(filter) == len(stride) == len(dilation)\n\n    output = [(ui + (s - 1)) // s for ui, s in zip(input, stride)]\n    dilated = [(f - 1) * d + 1 for f, d in zip(filter, dilation)]\n    total = [max(0, (di - 1) * s + df - ui) for di, s, df, ui in zip(output, stride, dilated, input)]\n\n    return [(pad // 2, (pad + 1) // 2) for pad in total]\n\n\ndef _inverse_permutation(perm):\n    inverse = [0] * len(perm)\n    for i, p in enumerate(perm):\n        inverse[p] = i\n    return inverse\n\n\ndef _apply_permutation(items, perm):\n    return [items[ind] for ind in perm]\n\n\n# Operations\n\ndef _positive_pad(input, padding, border='constant', value=0.0):\n    # type: (torch.Tensor, List[Tuple[int, int]], str, float)->torch.Tensor\n\n    assert all(p >= 0 and q >= 0 for p, q in padding), \"Negative padding is not supported \"\n\n    assert padding\n    assert len(input.shape) in (3, 4, 5)\n    assert padding[:2] == [(0, 0), (0, 0)] or (padding[0] == (0, 0) and padding[-1] == (0, 0))\n    assert border in (\"constant\", \"reflect\", \"replicate\")\n\n    rank = len(input.shape)\n    needs_transpose = padding[0] == (0, 0) and padding[1] != (0, 0) and padding[-1] == (0, 0)\n    if needs_transpose:\n        padding = padding[1:-1]\n        input = input.permute([0, rank - 1] + list(range(1, rank - 1)))\n    else:\n        padding = padding[2:]\n\n    pad = []\n    for p, q in reversed(padding):\n        pad += [p, q]\n\n    padded = F.pad(input=input, pad=pad, mode=border, value=value) if not all(p == 0 for p in pad) else input\n\n    if needs_transpose:\n        padded = padded.permute([0] + list(range(2, rank)) + [1])\n\n    return padded\n\n\ndef nnef_pad(input, padding, border='constant', value=0.0):\n    # type: (torch.Tensor, List[Tuple[int, int]], str, float)->torch.Tensor\n\n    assert padding, \\\n        \"nnef.pad does not support empty list as padding\"\n    
assert len(input.shape) in (3, 4, 5), \\\n        \"nnef.pad is only implemented for 3D, 4D, 5D tensors; got: {}D.\".format(len(input.shape))\n    assert padding[:2] == [(0, 0), (0, 0)] or (padding[0] == (0, 0) and padding[-1] == (0, 0)), \\\n        \"nnef.pad is not implemented in N, C dimensions; got: {}.\".format(padding)\n\n    if all(p <= 1 and q <= 1 for p, q in padding) and border == \"reflect-even\":\n        border = \"replicate\"\n\n    assert border in (\"constant\", \"reflect\", \"replicate\"), \\\n        \"nnef.pad is only implemented with 'constant', 'reflect' and 'replicate' border; got: {}.\".format(border)\n\n    input = _positive_pad(input,\n                          padding=[(p if p > 0 else 0, q if q > 0 else 0) for p, q in padding],\n                          border=border,\n                          value=value)\n\n    return nnef_slice(input,\n                      axes=list(range(len(input.shape))),\n                      begin=[-p if p < 0 else 0 for p, _q in padding],\n                      end=[q if q < 0 else 0 for _p, q in padding])\n\n\nnnef_add = _binary(lambda x, y: x + y)\n\n\ndef nnef_add_n(values):\n    return nnef_add(values[0], nnef_add_n(values[1:])) if len(values) > 1 else values[0]\n\n\ndef nnef_conv(input,  # type: torch.Tensor\n              filter,  # type: torch.Tensor\n              bias,  # type: torch.Tensor\n              border='constant',  # type: str\n              padding=None,  # type: Optional[List[Tuple[int, int]]]\n              stride=None,  # type: Optional[List[int]]\n              dilation=None,  # type: Optional[List[int]]\n              groups=1,  # type: int\n              ):\n    # type: (...)->torch.Tensor\n\n    assert len(input.shape) in (3, 4, 5), \"nnef.conv is only implemented for 3D, 4D, 5D tensors, given: {}D.\".format(len(input.shape))\n\n    bias = bias.reshape(1, 1).expand((1, filter.shape[0])) if _prod(bias.size()) == 1 else bias\n\n    spatial_dims = len(input.shape[2:])\n    groups = 
input.shape[1] if groups == 0 else groups\n    stride = [1] * spatial_dims if not stride else stride\n    dilation = [1] * spatial_dims if not dilation else dilation\n    if not padding:\n        padding = _same_padding(input=input.shape[2:],\n                                filter=filter.shape[2:],\n                                stride=stride,\n                                dilation=dilation)\n\n    pad = nnef_pad(input=input, padding=[(0, 0)] * 2 + padding, border=border)\n    conv = {1: F.conv1d, 2: F.conv2d, 3: F.conv3d}[spatial_dims](input=pad,\n                                                                 weight=filter,\n                                                                 bias=bias.squeeze(dim=0).contiguous(),\n                                                                 stride=tuple(stride),\n                                                                 padding=0,\n                                                                 dilation=tuple(dilation),\n                                                                 groups=groups)\n\n    return conv\n\n\ndef nnef_deconv(input,  # type: torch.Tensor\n                filter,  # type: torch.Tensor\n                bias,  # type: torch.Tensor\n                border='constant',  # type: str\n                padding=None,  # type: Optional[List[Tuple[int, int]]]\n                stride=None,  # type: Optional[List[int]]\n                dilation=None,  # type: Optional[List[int]]\n                output_shape=None,  # type: Optional[List[int]]\n                groups=1,  # type: int\n                ):\n    # type: (...)->torch.Tensor\n\n    assert border == 'constant' or border == 'replicate', \"nnef.deconv: '{}' border unsupported.\".format(border)\n\n    if output_shape and output_shape[0] != input.shape[0]:\n        output_shape = list(output_shape)\n        output_shape[0] = input.shape[0]\n\n    rank = len(input.shape)\n    assert rank in (3, 4, 5), \"nnef.deconv is only 
implemented for 3D, 4D, 5D tensors, given: {}D.\".format(len(input.shape))\n\n    spatial_dims = len(input.shape[2:])\n    stride = [1] * spatial_dims if not stride else stride\n    dilation = [1] * spatial_dims if not dilation else dilation\n\n    if groups == 0:\n        if output_shape:\n            groups = output_shape[1]\n        else:\n            # Planewise deconvolution without output_size, assuming that #(input channels) = #(output channels)\n            groups = filter.shape[0]\n\n    output_channels = filter.shape[1] * groups\n    if output_shape:\n        assert output_shape[1] == output_channels\n    else:\n        output_shape = nnef.shapes.deconv_shape(input=list(input.shape),\n                                                filter=filter.shape,\n                                                bias=bias.shape,\n                                                border=border,\n                                                padding=padding,\n                                                stride=stride,\n                                                dilation=dilation,\n                                                groups=groups,\n                                                output_shape=None)\n    if not padding:\n        padding = _same_padding(input=output_shape[2:],\n                                filter=filter.shape[2:],\n                                stride=stride,\n                                dilation=dilation)\n\n    if border == 'replicate':\n        input = F.pad(input=input, pad=(1,) * 2 * spatial_dims, mode='replicate')\n        padding = [(p + s, q + s) for (p, q), s in zip(padding, stride)]\n\n    uncropped_output_shape = nnef.shapes.deconv_shape(input=list(input.shape),\n                                                      filter=filter.shape,\n                                                      bias=bias.shape,\n                                                      border=border,\n                                        
              padding=[(0, 0)] * spatial_dims,\n                                                      stride=stride,\n                                                      dilation=dilation,\n                                                      groups=groups,\n                                                      output_shape=None)\n\n    crop_before = [p for p, _q in padding]\n    crop_after = [uncropped - out - before\n                  for uncropped, out, before\n                  in zip(uncropped_output_shape[2:], output_shape[2:], crop_before)]\n\n    bias = bias.reshape(1, 1).expand((1, output_channels)) if _prod(bias.size()) == 1 else bias\n\n    deconv = {1: F.conv_transpose1d,\n              2: F.conv_transpose2d,\n              3: F.conv_transpose3d}[spatial_dims](input=input,\n                                                   weight=filter,\n                                                   bias=bias.squeeze(dim=0).contiguous(),\n                                                   stride=tuple(stride),\n                                                   padding=0,\n                                                   output_padding=0,\n                                                   groups=groups,\n                                                   dilation=tuple(dilation))\n\n    return nnef_pad(deconv, padding=[(0, 0), (0, 0)] + [(-cb, -ca) for cb, ca in zip(crop_before, crop_after)])\n\n\ndef _evaluate_max_pool_or_box_params(input_shape, size, padding, stride, dilation):\n    rank = len(input_shape)\n    stride = [1] * rank if not stride else stride\n    dilation = [1] * rank if not dilation else dilation\n    padding = _same_padding(input=input_shape,\n                            filter=size,\n                            stride=stride,\n                            dilation=dilation) if not padding else padding\n    return padding, stride, dilation\n\n\ndef _max_pool_impl(input,  # type: torch.Tensor\n                   size,  # type: List[int]\n     
              border='constant',  # type: str\n                   padding=None,  # type: Optional[List[Tuple[int, int]]]\n                   stride=None,  # type: Optional[List[int]]\n                   dilation=None,  # type: Optional[List[int]]\n                   with_index=False,  # type: bool\n                   ):\n    # type: (...)->torch.Tensor\n\n    spatial_dims = len(input.shape) - 2\n    value = float('-inf') if border == 'ignore' else 0.0\n    border = 'constant' if border == 'ignore' else border\n\n    pad = nnef_pad(input=input, padding=padding, border=border, value=value)\n\n    result = {1: F.max_pool1d, 2: F.max_pool2d, 3: F.max_pool3d}[spatial_dims](input=pad,\n                                                                               kernel_size=size[2:],\n                                                                               stride=stride[2:],\n                                                                               padding=0,\n                                                                               dilation=dilation[2:],\n                                                                               return_indices=with_index)\n    return result\n\n\ndef _box_impl(input,  # type: torch.Tensor\n              size,  # type: List[int]\n              border,  # type: str\n              padding,  # type: List[Tuple[int, int]]\n              stride,  # type: List[int]\n              dilation,  # type: List[int]\n              normalize,  # type: bool\n              ):\n    # type: (...)->torch.Tensor\n\n    assert 3 <= len(input.shape) <= 5\n    assert len(input.shape) == len(size) == len(padding) == len(stride) == len(dilation)\n    assert padding[:2] == [(0, 0), (0, 0)]\n    assert size[:2] == stride[:2] == dilation[:2]\n\n    assert not dilation or all(d == 1 for d in dilation), \\\n        \"nnef.box (avg or sum pooling) is only implemented for dilation = 1.\"\n\n    spatial_dims = len(input.shape) - 2\n\n    pad = 
nnef_pad(input=input, padding=padding, border='constant' if border == 'ignore' else border)\n\n    avg_pool = {1: F.avg_pool1d, 2: F.avg_pool2d, 3: F.avg_pool3d}[spatial_dims](\n        input=pad,\n        kernel_size=size[2:],\n        stride=stride[2:],\n        padding=0)\n\n    if border == 'ignore' and normalize:\n        ones = torch.ones_like(input)\n        padded_ones = nnef_pad(input=ones, padding=padding, border='constant')\n        avg_pool_ones = {1: F.avg_pool1d, 2: F.avg_pool2d, 3: F.avg_pool3d}[spatial_dims](\n            input=padded_ones,\n            kernel_size=size[2:],\n            stride=stride[2:],\n            padding=0)\n        # If padding is big, zero averages can happen on the border, don't divide by zero\n        avg_pool_ones = nnef_select(avg_pool_ones > 0, avg_pool_ones, torch.ones_like(avg_pool_ones))\n        avg_pool /= avg_pool_ones\n\n    if normalize:\n        return avg_pool\n    else:\n        return avg_pool * _prod(size)\n\n\ndef _get_transform_for_box_or_max_pool(input_shape, active):\n    # type: (List[int], List[bool])->Any\n    assert len(input_shape) >= 3\n    assert len(input_shape) == len(active)\n    assert sum(active) <= 3, \\\n        \"Sliding window operations are not supported if they have more than 3 'active' dimensions; got {}\".format(sum(active))\n\n    if 3 <= len(input_shape) <= 5 and not active[0] and not active[1]:  # Direct support\n        return None, None, None, None\n    else:\n        inactive_dims = [i for i, a in enumerate(active) if not a]\n        active_dims = [i for i, a in enumerate(active) if a]\n        inactive_shape = [s for i, s in enumerate(input_shape) if i not in active_dims]\n        active_shape = [s for i, s in enumerate(input_shape) if i in active_dims]\n        perm = inactive_dims + active_dims\n        perm_inv = _inverse_permutation(perm)\n    return perm, perm_inv, inactive_shape, active_shape\n\n\ndef _box_or_max_pool(input,  # type: torch.Tensor\n                     
size,  # type: List[int]\n                     border='constant',  # type: str\n                     padding=None,  # type: Optional[List[Tuple[int, int]]],\n                     stride=None,  # type: Optional[List[int]],\n                     dilation=None,  # type: Optional[List[int]]\n                     normalize=False,  # type: bool\n                     is_max_pool=False,  # type: bool\n                     ):\n    assert not (normalize and is_max_pool)\n\n    rank = len(input.shape)\n    padding, stride, dilation = _evaluate_max_pool_or_box_params(input_shape=list(input.shape),\n                                                                 size=size,\n                                                                 padding=padding,\n                                                                 stride=stride,\n                                                                 dilation=dilation)\n    active = [size_ != 1 or padding_ != (0, 0) or stride_ != 1 or dilation_ != 1\n              for size_, padding_, stride_, dilation_\n              in zip(size, padding, stride, dilation)]\n\n    if sum(active) == 0:\n        return input\n\n    if rank < 3:\n        perm, perm_inv, inactive_shape, active_shape = None, None, None, None\n    else:\n        perm, perm_inv, inactive_shape, active_shape = _get_transform_for_box_or_max_pool(list(input.shape), active)\n\n    if rank < 3:\n        input = input.unsqueeze(0).unsqueeze(0)\n        size = [1, 1] + size\n        padding = [(0, 0), (0, 0)] + padding\n        stride = [1, 1] + stride\n        dilation = [1, 1] + dilation\n    elif perm is not None:\n        input = input.permute(*perm)\n        size = _apply_permutation(size, perm)\n        padding = _apply_permutation(padding, perm)\n        stride = _apply_permutation(stride, perm)\n        dilation = _apply_permutation(dilation, perm)\n\n        active_rank = len(active_shape)\n        input = input.reshape(*[_prod(inactive_shape), 1] + active_shape)\n  
      size = [1, 1] + size[-active_rank:]\n        padding = [(0, 0), (0, 0)] + padding[-active_rank:]\n        stride = [1, 1] + stride[-active_rank:]\n        dilation = [1, 1] + dilation[-active_rank:]\n\n    if is_max_pool:\n        output = _max_pool_impl(\n            input=input, size=size, border=border, padding=padding, stride=stride, dilation=dilation, with_index=False)\n    else:\n        output = _box_impl(input=input,\n                           size=size,\n                           border=border,\n                           padding=padding,\n                           stride=stride,\n                           dilation=dilation,\n                           normalize=normalize)\n\n    if rank < 3:\n        output = output.squeeze(0).squeeze(0)\n    elif perm is not None:\n        active_rank = len(active_shape)\n        output = output.reshape(inactive_shape + list(output.shape)[-active_rank:])\n        output = output.permute(*perm_inv)\n\n    return output\n\n\ndef nnef_max_pool(input,  # type: torch.Tensor\n                  size,  # type: List[int]\n                  border='constant',  # type: str\n                  padding=None,  # type: Optional[List[Tuple[int, int]]]\n                  stride=None,  # type: Optional[List[int]]\n                  dilation=None,  # type: Optional[List[int]]\n                  ):\n    # type: (...)->torch.Tensor\n    return _box_or_max_pool(\n        input, size=size, border=border, padding=padding, stride=stride, dilation=dilation, is_max_pool=True)\n\n\ndef nnef_max_pool_with_index(input,  # type: torch.Tensor\n                             size,  # type: List[int]\n                             border='constant',  # type: str\n                             padding=None,  # type: Optional[List[Tuple[int, int]]]\n                             stride=None,  # type: Optional[List[int]]\n                             dilation=None,  # type: Optional[List[int]]\n                             ):\n    # type: 
(...)->Tuple[torch.Tensor, torch.Tensor]\n\n    input_shape = list(input.shape)\n    padding, stride, dilation = _evaluate_max_pool_or_box_params(input_shape=input_shape,\n                                                                 size=size,\n                                                                 padding=padding,\n                                                                 stride=stride,\n                                                                 dilation=dilation)\n\n    assert len(input_shape) in (3, 4, 5), \\\n        \"nnef.max_pool_with_index is only implemented for 3D, 4D, 5D tensors, given: {}D\".format(len(input_shape))\n    assert size[:2] == [1, 1], \\\n        \"nnef.max_pool_with_index is only implemented for size = 1 in N and C dimensions\"\n    assert padding[:2] == [(0, 0), (0, 0)], \\\n        \"nnef.max_pool_with_index is only implemented for padding = (0, 0) in N and C dimensions\"\n    assert stride[:2] == [1, 1], \\\n        \"nnef.max_pool_with_index is only implemented for stride = 1 in N and C dimensions\"\n    assert dilation[:2] == [1, 1], \\\n        \"nnef.max_pool_with_index is only implemented for dilation = 1 in N and C dimensions\"\n\n    return _max_pool_impl(input, size=size, border=border, padding=padding, stride=stride, dilation=dilation,\n                          with_index=True)\n\n\ndef nnef_argmax_pool(input,  # type: torch.Tensor\n                     size,  # type: List[int]\n                     border='constant',  # type: str\n                     padding=None,  # type: Optional[List[Tuple[int, int]]]\n                     stride=None,  # type: Optional[List[int]]\n                     dilation=None,  # type: Optional[List[int]]\n                     ):\n    # type: (...)->torch.Tensor\n    _, index = nnef_max_pool_with_index(\n        input, size=size, border=border, padding=padding, stride=stride, dilation=dilation)\n    return index\n\n\ndef nnef_box(input,  # type: torch.Tensor\n             size,  # type: 
List[int]\n             border='constant',  # type: str\n             padding=None,  # type: Optional[List[Tuple[int, int]]]\n             stride=None,  # type: Optional[List[int]]\n             dilation=None,  # type: Optional[List[int]]\n             normalize=False,  # type: bool\n             ):\n    # type: (...)->torch.Tensor\n    return _box_or_max_pool(\n        input, size=size, border=border, padding=padding, stride=stride, dilation=dilation, normalize=normalize)\n\n\ndef nnef_debox(input,  # type: torch.Tensor\n               size,  # type: List[int]\n               border='constant',  # type: str\n               padding=None,  # type: Optional[List[Tuple[int, int]]]\n               stride=None,  # type: Optional[List[int]]\n               dilation=None,  # type: Optional[List[int]]\n               output_shape=None,  # type: Optional[List[int]]\n               normalize=False,  # type: bool\n               ):\n    assert border in ('constant', 'ignore'), \\\n        \"nnef.debox: '{}' border unsupported\".format(border)\n    assert len(size) in (3, 4, 5), \\\n        \"nnef.debox is only implemented for 3D, 4D, 5D tensors, given: {}D\".format(len(size))\n    assert size[:2] == [1, 1], \\\n        \"nnef.debox is only implemented for size = 1 in N and C dimensions\"\n    assert not padding or padding[:2] == [(0, 0), (0, 0)], \\\n        \"nnef.debox is only implemented for padding = (0, 0) in N and C dimensions\"\n    assert not stride or stride[:2] == [1, 1], \\\n        \"nnef.debox is only implemented for stride = 1 in N and C dimensions\"\n    assert not dilation or dilation[:2] == [1, 1], \\\n        \"nnef.debox is only implemented for dilation = 1 in N and C dimensions\"\n\n    filter = torch.full(size=[input.shape[1], 1] + list(size)[2:],\n                        fill_value=(1.0 / _prod(size) if normalize else 1.0),\n                        device=input.device,\n                        dtype=input.dtype)\n    bias = torch.zeros(size=tuple(), 
device=input.device, dtype=input.dtype)\n\n    return nnef_deconv(input=input,\n                       filter=filter,\n                       bias=bias,\n                       border='constant',\n                       padding=padding[2:] if padding else padding,\n                       stride=stride[2:] if stride else stride,\n                       dilation=dilation[2:] if dilation else dilation,\n                       output_shape=output_shape,\n                       groups=input.shape[1])\n\n\ndef nnef_avg_pool(input,  # type: torch.Tensor\n                  size,  # type: List[int]\n                  border='constant',  # type: str\n                  padding=None,  # type: Optional[List[Tuple[int, int]]]\n                  stride=None,  # type: Optional[List[int]]\n                  dilation=None,  # type: Optional[List[int]]\n                  ):\n    # type: (...)->torch.Tensor\n    return nnef_box(input, size=size, border=border, padding=padding, stride=stride, dilation=dilation, normalize=True)\n\n\ndef nnef_rms_pool(input,  # type: torch.Tensor\n                  size,  # type: List[int]\n                  border='constant',  # type: str\n                  padding=None,  # type: Optional[List[Tuple[int, int]]]\n                  stride=None,  # type: Optional[List[int]]\n                  dilation=None,  # type: Optional[List[int]]\n                  ):\n    # type: (...)->torch.Tensor\n    return torch.sqrt(nnef_avg_pool(torch.pow(input, 2.0),\n                                    size=size,\n                                    border=border,\n                                    padding=padding,\n                                    stride=stride,\n                                    dilation=dilation))\n\n\ndef nnef_desample(input,  # type: torch.Tensor\n                  index,  # type: torch.Tensor\n                  size,  # type: List[int]\n                  border='constant',  # type: str\n                  padding=None,  # type: 
Optional[List[Tuple[int, int]]]\n                  stride=None,  # type: Optional[List[int]]\n                  dilation=None,  # type: Optional[List[int]]\n                  output_shape=None,  # type: Optional[List[int]]\n                  ):\n    # type: (...)->torch.Tensor\n\n    if output_shape and output_shape[0] != input.shape[0]:\n        output_shape = list(output_shape)\n        output_shape[0] = input.shape[0]\n\n    input_shape = list(input.shape)\n    index_shape = list(index.shape)\n    rank = len(input_shape)\n    spatial_dims = len(input_shape[2:])\n\n    assert len(input_shape) in (3, 4, 5), \\\n        \"nnef.desample is only implemented for 3D, 4D, 5D tensors, given: {}D\".format(len(input_shape))\n    assert not size or size[:2] == [1, 1], \\\n        \"nnef.desample is only implemented for size = 1 in N and C dimensions\"\n    assert not padding or padding[:2] == [(0, 0), (0, 0)], \\\n        \"nnef.desample is only implemented for padding = (0, 0) in N and C dimensions\"\n    assert not stride or stride[:2] == [1, 1], \\\n        \"nnef.desample is only implemented for stride = 1 in N and C dimensions\"\n    assert not dilation or all(d == 1 for d in dilation), \\\n        \"nnef.desample is only implemented for dilation = 1\"\n\n    stride = [1] * rank if not stride else stride\n    dilation = [1] * rank if not dilation else dilation\n\n    if not padding:\n        calculated_output_shape = [i * s for i, s in zip(input_shape, stride)]\n        padding = _same_padding(input=calculated_output_shape,\n                                filter=size,\n                                stride=stride,\n                                dilation=dilation)\n    else:\n        calculated_output_shape = nnef.shapes.desample_shape(input_shape, index_shape,\n                                                             size=size, border=border, padding=padding,\n                                                             stride=stride, dilation=dilation, 
output_shape=None)\n\n    output_shape = output_shape if output_shape else calculated_output_shape\n    padded_output_shape = [s + p + q for s, (p, q) in zip(output_shape, padding)]\n    unpooled = {1: F.max_unpool1d, 2: F.max_unpool2d, 3: F.max_unpool3d}[spatial_dims](\n        input=input, indices=index, kernel_size=size[2:], stride=stride[2:], padding=0, output_size=padded_output_shape)\n    return nnef_slice(unpooled,\n                      axes=list(range(rank)),\n                      begin=[p for p, _q in padding],\n                      end=[p + s for (p, _q), s in zip(padding, output_shape)])\n\n\ndef nnef_batch_normalization(input,  # type: torch.Tensor\n                             mean,  # type: torch.Tensor\n                             variance,  # type: torch.Tensor\n                             offset,  # type: torch.Tensor\n                             scale,  # type: torch.Tensor\n                             epsilon,  # type: float\n                             is_training=False,  # type: bool\n                             momentum=0.1,  # type: float\n                             ):\n    # type: (...)->torch.Tensor\n\n    if isinstance(mean, torch.nn.Parameter):\n        mean.requires_grad = False\n    if isinstance(variance, torch.nn.Parameter):\n        variance.requires_grad = False\n\n    return F.batch_norm(input=input,\n                        running_mean=nnef_squeeze(mean, axes=[0]),\n                        running_var=nnef_squeeze(variance, axes=[0]),\n                        weight=nnef_squeeze(scale, axes=[0]),\n                        bias=nnef_squeeze(offset, axes=[0]),\n                        training=is_training,\n                        momentum=momentum,\n                        eps=epsilon)\n\n\ndef _upsample_weights_1d(factor, symmetric):\n    if symmetric:\n        weights = [1 - (i + 0.5) / factor for i in range(factor)]\n        weights = list(reversed(weights)) + weights\n    else:\n        weights = [1 - abs(i) / 
float(factor) for i in range(-factor + 1, factor)]\n    return np.array(weights)\n\n\ndef _upsample_weights_2d(factor, symmetric):\n    w0 = _upsample_weights_1d(factor[0], symmetric)\n    w1 = _upsample_weights_1d(factor[1], symmetric)\n    return np.outer(w0, w1)\n\n\ndef _upsample_weights_nd(factor, symmetric):\n    ws = [_upsample_weights_1d(f, symmetric) for f in factor]\n    return reduce(np.multiply, np.ix_(*ws))\n\n\ndef nnef_multilinear_upsample(input, factor, method='symmetric', border='replicate'):\n    # type: (torch.Tensor, List[int], str, str)->torch.Tensor\n\n    rank = len(factor)\n    assert len(input.shape) == rank + 2\n\n    mode = 'linear' if rank == 1 else 'bilinear' if rank == 2 else 'trilinear'\n\n    if method == 'aligned':\n        return F.interpolate(input=input, scale_factor=tuple(factor), mode=mode, align_corners=True)\n    elif method == 'symmetric' and border == 'replicate':\n        return F.interpolate(input=input, scale_factor=tuple(factor), mode=mode, align_corners=False)\n\n    n, c = input.shape[:2]\n\n    symmetric = method == 'symmetric'\n    replicate = border == 'replicate'\n    weights = _upsample_weights_nd(factor, symmetric)\n    weights = np.tile(np.reshape(weights, newshape=(1, 1) + weights.shape), reps=(c, 1) + (1,) * rank)\n    filter = torch.from_numpy(weights).to(device=input.device, dtype=input.dtype)\n    bias = torch.zeros(size=tuple(), device=input.device, dtype=input.dtype)\n\n    output_shape = [n, c] + [f * s for f, s in zip(factor, input.shape[2:])]\n\n    if symmetric:\n        return nnef_deconv(input, filter, bias, stride=factor, padding=[(f - 1, f - 1) for f in factor],\n                           border='constant', groups=c, output_shape=output_shape)\n    else:\n        if replicate:\n            input = nnef_pad(input, padding=[(0, 0), (0, 0)] + [(1, 0)] * rank, border=border)\n\n        padding = factor if replicate else [f // 2 for f in factor]\n        return nnef_deconv(input, filter, bias, stride=factor, padding=[(p, p) for p in padding],\n                           border='constant', groups=c, output_shape=output_shape)\n\n\ndef nnef_nearest_upsample(input, factor):\n    # type: (torch.Tensor, List[int])->torch.Tensor\n\n    assert len(input.shape) in (3, 4, 5), \\\n        \"nnef.nearest_upsample is only implemented for 3D, 4D, 5D tensors, given: {}D.\".format(len(input.shape))\n\n    return F.interpolate(input=input, scale_factor=tuple(factor), mode='nearest')\n\n\ndef nnef_softmax(x, axes=None):\n    # type: (torch.Tensor, Optional[List[int]])->torch.Tensor\n\n    axes = [1] if axes is None else axes\n\n    if len(axes) == 0:\n        return x\n    elif len(axes) == 1:\n        return F.softmax(x, dim=axes[0])\n    else:\n        m = nnef_max_reduce(x, axes=axes)\n        e = torch.exp(x - m)\n        return e / nnef_sum_reduce(e, axes=axes)\n\n\ndef nnef_local_response_normalization(input, size, alpha=1.0, beta=0.5, bias=1.0):\n    # type: (torch.Tensor, List[int], float, float, float)->torch.Tensor\n\n    sigma = bias + alpha * nnef_box(torch.pow(input, 2.0), size=size, normalize=True)\n    return input / torch.pow(sigma, beta)\n\n\ndef nnef_local_mean_normalization(input, size):\n    # type: (torch.Tensor, List[int])->torch.Tensor\n    mean = nnef_box(input, size=size, normalize=True)\n    return input - mean\n\n\ndef nnef_local_variance_normalization(input, size, bias=0.0, epsilon=0.0):\n    # type: (torch.Tensor, List[int], float, float)->torch.Tensor\n    sigma = torch.sqrt(nnef_box(torch.pow(input, 2.0), size=size, normalize=True))\n    return input / torch.max(sigma + bias,\n                             torch.full(size=[], fill_value=epsilon, device=input.device, dtype=input.dtype))\n\n\ndef nnef_local_contrast_normalization(input, size, bias=0.0, epsilon=0.0):\n    # type: (torch.Tensor, List[int], float, float)->torch.Tensor\n    centered = nnef_local_mean_normalization(input, size=size)\n    return nnef_local_variance_normalization(centered, size=size, bias=bias, 
epsilon=epsilon)\n\n\ndef nnef_l1_normalization(input, axes, bias=0.0, epsilon=0.0):\n    # type: (torch.Tensor, List[int], float, float)->torch.Tensor\n    sigma = nnef_sum_reduce(torch.abs(input), axes=axes)\n    return input / torch.max(sigma + bias,\n                             torch.full(size=[], fill_value=epsilon, device=input.device, dtype=input.dtype))\n\n\ndef nnef_l2_normalization(input, axes, bias=0.0, epsilon=0.0):\n    # type: (torch.Tensor, List[int], float, float)->torch.Tensor\n    sigma = torch.sqrt(nnef_sum_reduce(torch.pow(input, 2.0), axes=axes))\n    return input / torch.max(sigma + bias,\n                             torch.full(size=[], fill_value=epsilon, device=input.device, dtype=input.dtype))\n\n\ndef nnef_matmul(A, B, transposeA=False, transposeB=False):\n    # type:(torch.Tensor, torch.Tensor, bool, bool)->torch.Tensor\n\n    return torch.matmul(torch.transpose(A, len(A.shape) - 2, len(A.shape) - 1) if transposeA else A,\n                        torch.transpose(B, len(B.shape) - 2, len(B.shape) - 1) if transposeB else B)\n\n\ndef nnef_split(value, axis, ratios):\n    # type:(torch.Tensor, int, List[int])->torch.Tensor\n    assert value.shape[axis] % sum(ratios) == 0\n\n    multiplier = value.shape[axis] // sum(ratios)\n    sections = [ratio * multiplier for ratio in ratios]\n    return torch.split(value, split_size_or_sections=sections, dim=axis)\n\n\ndef nnef_slice(input, axes, begin, end, stride=None):\n    # type:(torch.Tensor, List[int], List[int], List[int], List[int])->torch.Tensor\n\n    if stride is None:\n        stride = [1] * len(axes)\n\n    shape = list(input.shape)\n    slices = [slice(None)] * len(shape)\n\n    for axis, b, e, s in zip(axes, begin, end, stride):\n        if b < 0:\n            b += shape[axis]\n        if e < 0:\n            e += shape[axis]\n        elif e == 0 and s == 1:\n            e = shape[axis]\n\n        b = _clamp(b, -1, shape[axis])\n        e = _clamp(e, -1, shape[axis])\n\n        if s > 
0:\n            slices[axis] = slice(b, e, s)\n        else:\n            offs = (b - e - 1) % (-s) + 1 if b != e else 1\n            slices[axis] = slice(e + offs, b + 1, -s)\n\n    input = input[tuple(slices)]\n\n    flip_axes = [axis for axis, s in zip(axes, stride) if s < 0]\n    if len(flip_axes) != 0:\n        input = torch.flip(input, dims=flip_axes)\n\n    return input\n\n\ndef nnef_select(condition, true_value, false_value):\n    # type:(torch.Tensor, torch.Tensor, torch.Tensor)->torch.Tensor\n    rank = max(len(condition.shape), len(true_value.shape), len(false_value.shape))\n    return torch.where(_expand_to_rank(condition, rank),\n                       _expand_to_rank(true_value, rank),\n                       _expand_to_rank(false_value, rank))\n\n\ndef _nnef_generic_reduce(input, axes, f):\n    # type:(torch.Tensor, List[int], Callable)->torch.Tensor\n    if not axes:\n        return input\n    for axis in reversed(sorted(axes)):\n        input = f(input=input, dim=axis, keepdim=True)\n    return input\n\n\ndef nnef_sum_reduce(input, axes, normalize=False):\n    # type:(torch.Tensor, List[int], bool)->torch.Tensor\n    return _nnef_generic_reduce(input=input, axes=axes, f=torch.mean if normalize else torch.sum)\n\n\ndef nnef_max_reduce(input, axes):\n    # type:(torch.Tensor, List[int])->torch.Tensor\n    return _nnef_generic_reduce(input=input, axes=axes,\n                                f=lambda input, dim, keepdim: torch.max(input, dim=dim, keepdim=keepdim)[0])\n\n\ndef nnef_min_reduce(input, axes):\n    # type:(torch.Tensor, List[int])->torch.Tensor\n    return _nnef_generic_reduce(input=input, axes=axes,\n                                f=lambda input, dim, keepdim: torch.min(input, dim=dim, keepdim=keepdim)[0])\n\n\ndef nnef_mean_reduce(input, axes):\n    # type:(torch.Tensor, List[int])->torch.Tensor\n    return _nnef_generic_reduce(input=input, axes=axes, f=torch.mean)\n\n\ndef _nnef_argminmax_reduce(input, axes, argmin=False):\n    # 
type:(torch.Tensor, List[int], bool)->torch.Tensor\n    if len(axes) == 1:\n        return _nnef_generic_reduce(input=input, axes=axes, f=torch.argmin if argmin else torch.argmax)\n    else:\n        axes = sorted(axes)\n        consecutive_axes = list(range(axes[0], axes[0] + len(axes)))\n\n        assert axes == consecutive_axes, \\\n            \"{} is only implemented for consecutive axes.\".format(\"argmin_reduce\" if argmin else \"argmax_reduce\")\n\n        reshaped = nnef_reshape(input,\n                                shape=(list(input.shape)[:axes[0]]\n                                       + [-1]\n                                       + list(input.shape[axes[0] + len(axes):])))\n        reduced = _nnef_generic_reduce(input=reshaped, axes=[axes[0]], f=torch.argmin if argmin else torch.argmax)\n        reshaped = nnef_reshape(reduced, shape=list(dim if axis not in axes else 1\n                                                    for axis, dim in enumerate(input.shape)))\n        return reshaped\n\n\ndef nnef_argmax_reduce(input, axes):\n    # type:(torch.Tensor, List[int])->torch.Tensor\n    return _nnef_argminmax_reduce(input, axes, argmin=False)\n\n\ndef nnef_argmin_reduce(input, axes):\n    # type:(torch.Tensor, List[int])->torch.Tensor\n    return _nnef_argminmax_reduce(input, axes, argmin=True)\n\n\ndef nnef_clamp(x, a, b):\n    # type:(torch.Tensor, torch.Tensor, torch.Tensor)->torch.Tensor\n    rank = max(len(x.shape), len(a.shape), len(b.shape))\n    x = _expand_to_rank(x, rank)\n    a = _expand_to_rank(a, rank)\n    b = _expand_to_rank(b, rank)\n    return torch.max(torch.min(x, b), a)\n\n\ndef nnef_nearest_downsample(input, factor):\n    # type: (torch.Tensor, List[int])->torch.Tensor\n    dims = len(input.shape)\n    return nnef_box(input, size=[1] * dims, stride=[1, 1] + factor, padding=[(0, 0)] * dims)\n\n\ndef nnef_area_downsample(input, factor):\n    # type: (torch.Tensor, List[int])->torch.Tensor\n    dims = len(input.shape)\n    return 
nnef_box(input, size=[1, 1] + factor, stride=[1, 1] + factor, padding=[(0, 0)] * dims, normalize=True)\n\n\ndef nnef_moments(input, axes):\n    # type: (torch.Tensor, List[int])->Tuple[torch.Tensor, torch.Tensor]\n    mean = nnef_mean_reduce(input, axes=axes)\n    variance = nnef_mean_reduce(torch.pow(input - mean, 2.0), axes=axes)\n    return mean, variance\n\n\ndef nnef_linear(input, filter, bias):\n    # type: (torch.Tensor, torch.Tensor, torch.Tensor)->torch.Tensor\n    matmul = nnef_matmul(A=input, B=filter, transposeB=True)\n    matmul, bias = _expand_binary(matmul, bias)\n    return matmul + bias\n\n\ndef nnef_separable_conv(input,  # type: torch.Tensor\n                        plane_filter,  # type: torch.Tensor\n                        point_filter,  # type: torch.Tensor\n                        bias,  # type: torch.Tensor\n                        border='constant',  # type: str\n                        padding=None,  # type: Optional[List[Tuple[int, int]]]\n                        stride=None,  # type: Optional[List[int]]\n                        dilation=None,  # type: Optional[List[int]]\n                        groups=1,  # type: int\n                        ):\n    # type: (...)->torch.Tensor\n    filtered = nnef_conv(input, plane_filter,\n                         bias=torch.zeros(size=tuple(), device=input.device, dtype=input.dtype),\n                         border=border,\n                         padding=padding,\n                         stride=stride,\n                         dilation=dilation,\n                         groups=0)\n    return nnef_conv(filtered, point_filter, bias, groups=groups)\n\n\ndef nnef_separable_deconv(input,  # type: torch.Tensor\n                          plane_filter,  # type: torch.Tensor\n                          point_filter,  # type: torch.Tensor\n                          bias,  # type: torch.Tensor\n                          border='constant',  # type: str\n                          padding=None,  # type: 
Optional[List[Tuple[int, int]]]\n                          stride=None,  # type: Optional[List[int]]\n                          dilation=None,  # type: Optional[List[int]]\n                          output_shape=None,  # type: Optional[List[int]]\n                          groups=1,  # type: int\n                          ):\n    # type: (...)->torch.Tensor\n    filtered = nnef_deconv(input,\n                           point_filter,\n                           torch.zeros(size=tuple(), device=input.device, dtype=input.dtype),\n                           groups=groups)\n    return nnef_deconv(filtered, plane_filter, bias,\n                       border=border,\n                       padding=padding,\n                       stride=stride,\n                       dilation=dilation,\n                       output_shape=output_shape,\n                       groups=0)\n\n\ndef nnef_copy_n(x, times):\n    # type: (torch.Tensor, int)->List[torch.Tensor]\n    return [x.clone() for _ in range(times)]\n\n\ndef nnef_zero_point_linear_quantize(x, zero_point, scale, bits, signed, symmetric):\n    # type: (torch.Tensor, torch.Tensor, torch.Tensor, int, bool, bool)->torch.Tensor\n\n    z = torch.round(x / scale) + zero_point\n    r = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1\n    q = torch.clamp(z, 0 if not signed else -r if symmetric else -r - 1, r)\n    y = (q - zero_point) * scale\n    return y.type(x.dtype)\n\n\ndef nnef_min_max_linear_quantize(x, min, max, bits, signed, symmetric):\n    # type: (torch.Tensor, torch.Tensor, torch.Tensor, int, bool, bool)->torch.Tensor\n\n    r = float(2 ** bits - 1 - int(signed and symmetric))\n    z = torch.clamp(x, min, max)\n    q = torch.round((z - min) / (max - min) * r)\n    return q * ((max - min) / r) + min\n\n\ndef nnef_logarithmic_quantize(x, max, bits):\n    # type: (torch.Tensor, torch.Tensor, int)->torch.Tensor\n\n    r = float(2 ** bits - 1)\n    m = math.ceil(math.log2(max))\n    q = 
torch.round(torch.clamp(torch.log2(torch.abs(x)), m - r, m))\n    return torch.sign(x) * torch.pow(2.0, q)\n\n\ndef nnef_reshape(input, shape, axis_start=0, axis_count=-1):\n    # type: (torch.Tensor, List[int], int, int)->torch.Tensor\n\n    return input.reshape(nnef.shapes.reshape_shape(input=list(input.shape),\n                                                   shape=shape,\n                                                   axis_start=axis_start,\n                                                   axis_count=axis_count))\n\n\ndef nnef_update(variable, value):\n    # type: (torch.Tensor, torch.Tensor)->torch.Tensor\n    return value\n\n\ndef nnef_transpose(input, axes):\n    return input.permute(*(axes + list(range(len(axes), len(input.shape)))))\n\n\ndef nnef_squeeze(input, axes):\n    return input.reshape(nnef.shapes.squeeze_shape(input.shape, axes))\n\n\ndef nnef_unsqueeze(input, axes):\n    return input.reshape(nnef.shapes.unsqueeze_shape(input.shape, axes))\n\n\ndef nnef_cast(input, dtype):\n    return input.to(_numpy_dtype_to_torch[dtype])\n\n\ndef nnef_gather(input, indices, axis):\n    shape = tuple(indices.shape)\n    if len(shape) != 1:\n        indices = torch.flatten(indices)\n    result = input.index_select(dim=axis, index=indices.to(torch.int64))\n    if len(shape) != 1:\n        result = torch.reshape(result, shape=input.shape[:axis] + shape + input.shape[axis + 1:])\n    return result\n\n\n\"\"\"\nThe supported operators\n\"\"\"\nOperators = {\n    'update': nnef_update,\n    'reshape': nnef_reshape,\n    'transpose': nnef_transpose,\n    'concat': lambda values, axis: torch.cat(values, axis),\n    'split': nnef_split,\n    'slice': nnef_slice,\n    'squeeze': nnef_squeeze,\n    'unsqueeze': nnef_unsqueeze,\n    'stack': lambda values, axis: torch.stack(values, axis),\n    'unstack': lambda value, axis: torch.unbind(value, axis),\n    'add': nnef_add,\n    'add_n': nnef_add_n,\n    'sub': _binary(lambda x, y: x - y),\n    'mul': _binary(lambda x, 
y: x * y),\n    'div': _binary(lambda x, y: x / y),\n    'pow': _binary(torch.pow),\n    'exp': torch.exp,\n    'log': torch.log,\n    'abs': torch.abs,\n    'sign': torch.sign,\n    'rcp': torch.reciprocal,\n    'neg': torch.neg,\n    'copy': torch.clone,\n    'lt': _binary(lambda x, y: x < y),\n    'gt': _binary(lambda x, y: x > y),\n    'le': _binary(lambda x, y: x <= y),\n    'ge': _binary(lambda x, y: x >= y),\n    'eq': _binary(torch.eq),\n    'ne': _binary(torch.ne),\n    'and': _binary(lambda x, y: x & y),\n    'or': _binary(lambda x, y: x | y),\n    'not': lambda x: ~x,\n    'floor': torch.floor,\n    'ceil': torch.ceil,\n    'round': torch.round,\n    'select': nnef_select,\n    'sqr': lambda x: torch.pow(x, 2.0),\n    'sqrt': torch.sqrt,\n    'rsqr': lambda x: torch.pow(x, -2.0),\n    'rsqrt': torch.rsqrt,\n    'log2': torch.log2,\n    'min': _binary(torch.min),\n    'max': _binary(torch.max),\n    'clamp': nnef_clamp,\n    'matmul': nnef_matmul,\n    'conv': nnef_conv,\n    'deconv': nnef_deconv,\n    'box': nnef_box,\n    'debox': nnef_debox,\n    'argmax_pool': nnef_argmax_pool,\n    # 'sample': unsupported,\n    'desample': nnef_desample,\n    'nearest_downsample': nnef_nearest_downsample,\n    'area_downsample': nnef_area_downsample,\n    'nearest_upsample': nnef_nearest_upsample,\n    'multilinear_upsample': nnef_multilinear_upsample,\n    'sum_reduce': nnef_sum_reduce,\n    'max_reduce': nnef_max_reduce,\n    'min_reduce': nnef_min_reduce,\n    'argmax_reduce': nnef_argmax_reduce,\n    'argmin_reduce': nnef_argmin_reduce,\n    'mean_reduce': nnef_mean_reduce,\n    'moments': nnef_moments,\n    'relu': F.relu,\n    'sigmoid': torch.sigmoid,\n    'softabs': lambda x, epsilon: torch.sqrt(torch.pow(x, 2.0) + epsilon),\n    'softmax': nnef_softmax,\n    'softplus': lambda x: torch.log(torch.exp(x) + 1.0),\n    'elu': F.elu,\n    'selu': lambda x, alpha, _lambda_: F.selu(x),\n    'gelu': F.gelu,\n    'silu': lambda x: x * torch.sigmoid(x),\n    'prelu': 
lambda x, alpha: F.prelu(x, alpha),\n    'leaky_relu': lambda x, alpha: F.leaky_relu(x, alpha),\n    'max_pool_with_index': nnef_max_pool_with_index,\n    'max_pool': nnef_max_pool,\n    'avg_pool': nnef_avg_pool,\n    'rms_pool': nnef_rms_pool,\n    'linear': nnef_linear,\n    'separable_conv': nnef_separable_conv,\n    'separable_deconv': nnef_separable_deconv,\n    'local_response_normalization': nnef_local_response_normalization,\n    'local_mean_normalization': nnef_local_mean_normalization,\n    'local_variance_normalization': nnef_local_variance_normalization,\n    'local_contrast_normalization': nnef_local_contrast_normalization,\n    'l1_normalization': nnef_l1_normalization,\n    'l2_normalization': nnef_l2_normalization,\n    'batch_normalization': nnef_batch_normalization,\n    # 'avg_roi_pool': unsupported,\n    # 'max_roi_pool': unsupported,\n    # 'roi_resample': unsupported,\n    # 'avg_roi_align': unsupported,\n    # 'max_roi_align': unsupported,\n    'linear_quantize': nnef_min_max_linear_quantize,\n    'min_max_linear_quantize': nnef_min_max_linear_quantize,\n    'zero_point_linear_quantize': nnef_zero_point_linear_quantize,\n    'logarithmic_quantize': nnef_logarithmic_quantize,\n    'copy_n': nnef_copy_n,\n    'sin': lambda x: torch.sin(x),\n    'cos': lambda x: torch.cos(x),\n    'tan': lambda x: torch.tan(x),\n    'asin': lambda x: torch.asin(x),\n    'acos': lambda x: torch.acos(x),\n    'atan': lambda x: torch.atan(x),\n    'sinh': lambda x: torch.sinh(x),\n    'cosh': lambda x: torch.cosh(x),\n    'tanh': lambda x: torch.tanh(x),\n    'asinh': lambda x: torch.asinh(x),\n    'acosh': lambda x: torch.acosh(x),\n    'atanh': lambda x: torch.atanh(x),\n    'tile': lambda input, repeats: input.repeat(*repeats),\n    'pad': nnef_pad,\n    'cast': nnef_cast,\n    'gather': nnef_gather,\n    'any_reduce': lambda input, axes: _nnef_generic_reduce(input, axes=axes, f=torch.any),\n    'all_reduce': lambda input, axes: _nnef_generic_reduce(input, 
axes=axes, f=torch.all),\n}\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .reader import Reader\nfrom .writer import Writer\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/caffe/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/caffe/proto/__init__.py",
    "content": ""
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/caffe/proto/caffe.proto",
"content": "syntax = \"proto2\";\n\npackage caffe;\n\n// Specifies the shape (dimensions) of a Blob.\nmessage BlobShape {\n  repeated int64 dim = 1 [packed = true];\n}\n\nmessage BlobProto {\n  optional BlobShape shape = 7;\n  repeated float data = 5 [packed = true];\n  repeated float diff = 6 [packed = true];\n  repeated double double_data = 8 [packed = true];\n  repeated double double_diff = 9 [packed = true];\n\n  // 4D dimensions -- deprecated.  Use \"shape\" instead.\n  optional int32 num = 1 [default = 0];\n  optional int32 channels = 2 [default = 0];\n  optional int32 height = 3 [default = 0];\n  optional int32 width = 4 [default = 0];\n}\n\n// The BlobProtoVector is simply a way to pass multiple blobproto instances\n// around.\nmessage BlobProtoVector {\n  repeated BlobProto blobs = 1;\n}\n\nmessage Datum {\n  optional int32 channels = 1;\n  optional int32 height = 2;\n  optional int32 width = 3;\n  // the actual image data, in bytes\n  optional bytes data = 4;\n  optional int32 label = 5;\n  // Optionally, the datum could also hold float data.\n  repeated float float_data = 6;\n  // If true, data contains an encoded image that needs to be decoded\n  optional bool encoded = 7 [default = false];\n}\n\nmessage FillerParameter {\n  // The filler type.\n  optional string type = 1 [default = 'constant'];\n  optional float value = 2 [default = 0]; // the value in constant filler\n  optional float min = 3 [default = 0]; // the min value in uniform filler\n  optional float max = 4 [default = 1]; // the max value in uniform filler\n  optional float mean = 5 [default = 0]; // the mean value in Gaussian filler\n  optional float std = 6 [default = 1]; // the std value in Gaussian filler\n  // The expected number of non-zero output weights for a given input in\n  // Gaussian filler -- the default -1 means don't perform sparsification.\n  optional int32 sparse = 7 [default = -1];\n  // Normalize the filler variance by fan_in, fan_out, or their average.\n  // Applies to 
'xavier' and 'msra' fillers.\n  enum VarianceNorm {\n    FAN_IN = 0;\n    FAN_OUT = 1;\n    AVERAGE = 2;\n  }\n  optional VarianceNorm variance_norm = 8 [default = FAN_IN];\n}\n\nmessage NetParameter {\n  optional string name = 1; // consider giving the network a name\n  // DEPRECATED. See InputParameter. The input blobs to the network.\n  repeated string input = 3;\n  // DEPRECATED. See InputParameter. The shape of the input blobs.\n  repeated BlobShape input_shape = 8;\n\n  // 4D input dimensions -- deprecated.  Use \"input_shape\" instead.\n  // If specified, for each input blob there should be four\n  // values specifying the num, channels, height and width of the input blob.\n  // Thus, there should be a total of (4 * #input) numbers.\n  repeated int32 input_dim = 4;\n\n  // Whether the network will force every layer to carry out backward operation.\n  // If set False, then whether to carry out backward is determined\n  // automatically according to the net structure and learning rates.\n  optional bool force_backward = 5 [default = false];\n  // The current \"state\" of the network, including the phase, level, and stage.\n  // Some layers may be included/excluded depending on this state and the states\n  // specified in the layers' include and exclude fields.\n  optional NetState state = 6;\n\n  // Print debugging information about results while running Net::Forward,\n  // Net::Backward, and Net::Update.\n  optional bool debug_info = 7 [default = false];\n\n  // The layers that make up the net.  
Each of their configurations, including\n  // connectivity and behavior, is specified as a LayerParameter.\n  repeated LayerParameter layer = 100;  // ID 100 so layers are printed last.\n\n  // DEPRECATED: use 'layer' instead.\n  repeated V1LayerParameter layers = 2;\n}\n\n// NOTE\n// Update the next available ID when you add a new SolverParameter field.\n//\n// SolverParameter next available ID: 43 (last added: weights)\nmessage SolverParameter {\n  //////////////////////////////////////////////////////////////////////////////\n  // Specifying the train and test networks\n  //\n  // Exactly one train net must be specified using one of the following fields:\n  //     train_net_param, train_net, net_param, net\n  // One or more test nets may be specified using any of the following fields:\n  //     test_net_param, test_net, net_param, net\n  // If more than one test net field is specified (e.g., both net and\n  // test_net are specified), they will be evaluated in the field order given\n  // above: (1) test_net_param, (2) test_net, (3) net_param/net.\n  // A test_iter must be specified for each test_net.\n  // A test_level and/or a test_stage may also be specified for each test_net.\n  //////////////////////////////////////////////////////////////////////////////\n\n  // Proto filename for the train net, possibly combined with one or more\n  // test nets.\n  optional string net = 24;\n  // Inline train net param, possibly combined with one or more test nets.\n  optional NetParameter net_param = 25;\n\n  optional string train_net = 1; // Proto filename for the train net.\n  repeated string test_net = 2; // Proto filenames for the test nets.\n  optional NetParameter train_net_param = 21; // Inline train net params.\n  repeated NetParameter test_net_param = 22; // Inline test net params.\n\n  // The states for the train/test nets. 
Must be unspecified or\n  // specified once per net.\n  //\n  // By default, train_state will have phase = TRAIN,\n  // and all test_state's will have phase = TEST.\n  // Other defaults are set according to the NetState defaults.\n  optional NetState train_state = 26;\n  repeated NetState test_state = 27;\n\n  // The number of iterations for each test net.\n  repeated int32 test_iter = 3;\n\n  // The number of iterations between two testing phases.\n  optional int32 test_interval = 4 [default = 0];\n  optional bool test_compute_loss = 19 [default = false];\n  // If true, run an initial test pass before the first iteration,\n  // ensuring memory availability and printing the starting value of the loss.\n  optional bool test_initialization = 32 [default = true];\n  optional float base_lr = 5; // The base learning rate\n  // the number of iterations between displaying info. If display = 0, no info\n  // will be displayed.\n  optional int32 display = 6;\n  // Display the loss averaged over the last average_loss iterations\n  optional int32 average_loss = 33 [default = 1];\n  optional int32 max_iter = 7; // the maximum number of iterations\n  // accumulate gradients over `iter_size` x `batch_size` instances\n  optional int32 iter_size = 36 [default = 1];\n\n  // The learning rate decay policy. The currently implemented learning rate\n  // policies are as follows:\n  //    - fixed: always return base_lr.\n  //    - step: return base_lr * gamma ^ (floor(iter / step))\n  //    - exp: return base_lr * gamma ^ iter\n  //    - inv: return base_lr * (1 + gamma * iter) ^ (- power)\n  //    - multistep: similar to step but it allows non uniform steps defined by\n  //      stepvalue\n  //    - poly: the effective learning rate follows a polynomial decay, to be\n  //      zero by the max_iter. 
return base_lr (1 - iter/max_iter) ^ (power)\n  //    - sigmoid: the effective learning rate follows a sigmoid decay\n  //      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))\n  //\n  // where base_lr, max_iter, gamma, step, stepvalue and power are defined\n  // in the solver parameter protocol buffer, and iter is the current iteration.\n  optional string lr_policy = 8;\n  optional float gamma = 9; // The parameter to compute the learning rate.\n  optional float power = 10; // The parameter to compute the learning rate.\n  optional float momentum = 11; // The momentum value.\n  optional float weight_decay = 12; // The weight decay.\n  // regularization types supported: L1 and L2\n  // controlled by weight_decay\n  optional string regularization_type = 29 [default = \"L2\"];\n  // the stepsize for learning rate policy \"step\"\n  optional int32 stepsize = 13;\n  // the stepsize for learning rate policy \"multistep\"\n  repeated int32 stepvalue = 34;\n\n  // Set clip_gradients to >= 0 to clip parameter gradients to that L2 norm,\n  // whenever their actual L2 norm is larger.\n  optional float clip_gradients = 35 [default = -1];\n\n  optional int32 snapshot = 14 [default = 0]; // The snapshot interval\n  // The prefix for the snapshot.\n  // If not set then it is replaced by the prototxt file path without extension.\n  // If set to a directory then it is augmented by the prototxt file name\n  // without extension.\n  optional string snapshot_prefix = 15;\n  // whether to snapshot diff in the results or not. Snapshotting diff will help\n  // debugging but the final protocol buffer size will be much larger.\n  optional bool snapshot_diff = 16 [default = false];\n  enum SnapshotFormat {\n    HDF5 = 0;\n    BINARYPROTO = 1;\n  }\n  optional SnapshotFormat snapshot_format = 37 [default = BINARYPROTO];\n  // the mode solver will use: 0 for CPU and 1 for GPU. 
Uses GPU by default.\n  enum SolverMode {\n    CPU = 0;\n    GPU = 1;\n  }\n  optional SolverMode solver_mode = 17 [default = GPU];\n  // the device_id that will be used in GPU mode. Uses device_id = 0 by default.\n  optional int32 device_id = 18 [default = 0];\n  // If non-negative, the seed with which the Solver will initialize the Caffe\n  // random number generator -- useful for reproducible results. Otherwise,\n  // (and by default) initialize using a seed derived from the system clock.\n  optional int64 random_seed = 20 [default = -1];\n\n  // type of the solver\n  optional string type = 40 [default = \"SGD\"];\n\n  // numerical stability for RMSProp, AdaGrad, AdaDelta and Adam\n  optional float delta = 31 [default = 1e-8];\n  // parameters for the Adam solver\n  optional float momentum2 = 39 [default = 0.999];\n\n  // RMSProp decay value\n  // MeanSquare(t) = rms_decay*MeanSquare(t-1) + (1-rms_decay)*SquareGradient(t)\n  optional float rms_decay = 38 [default = 0.99];\n\n  // If true, print information about the state of the net that may help with\n  // debugging learning problems.\n  optional bool debug_info = 23 [default = false];\n\n  // If false, don't save a snapshot after training finishes.\n  optional bool snapshot_after_train = 28 [default = true];\n\n  // DEPRECATED: old solver enum types, use string instead\n  enum SolverType {\n    SGD = 0;\n    NESTEROV = 1;\n    ADAGRAD = 2;\n    RMSPROP = 3;\n    ADADELTA = 4;\n    ADAM = 5;\n  }\n  // DEPRECATED: use type instead of solver_type\n  optional SolverType solver_type = 30 [default = SGD];\n\n  // Overlap compute and communication for data parallel training\n  optional bool layer_wise_reduce = 41 [default = true];\n\n  // Path to caffemodel file(s) with pretrained weights to initialize finetuning.\n  // The same as the command line --weights parameter for the caffe train command.\n  // If the command line --weights parameter is specified, it has higher priority\n  // and overwrites these.\n  // If 
--snapshot command line parameter is specified, these are ignored.\n  // If several model files are expected, they can be listed in a single\n  // weights parameter separated by ',' (like in a command string) or\n  // in repeated weights parameters separately.\n  repeated string weights = 42;\n}\n\n// A message that stores the solver snapshots\nmessage SolverState {\n  optional int32 iter = 1; // The current iteration\n  optional string learned_net = 2; // The file that stores the learned net.\n  repeated BlobProto history = 3; // The history for sgd solvers\n  optional int32 current_step = 4 [default = 0]; // The current step for learning rate\n}\n\nenum Phase {\n   TRAIN = 0;\n   TEST = 1;\n}\n\nmessage NetState {\n  optional Phase phase = 1 [default = TEST];\n  optional int32 level = 2 [default = 0];\n  repeated string stage = 3;\n}\n\nmessage NetStateRule {\n  // Set phase to require the NetState have a particular phase (TRAIN or TEST)\n  // to meet this rule.\n  optional Phase phase = 1;\n\n  // Set the minimum and/or maximum levels in which the layer should be used.\n  // Leave undefined to meet the rule regardless of level.\n  optional int32 min_level = 2;\n  optional int32 max_level = 3;\n\n  // Customizable sets of stages to include or exclude.\n  // The net must have ALL of the specified stages and NONE of the specified\n  // \"not_stage\"s to meet the rule.\n  // (Use multiple NetStateRules to specify conjunctions of stages.)\n  repeated string stage = 4;\n  repeated string not_stage = 5;\n}\n\n// Specifies training parameters (multipliers on global learning constants,\n// and the name and other settings used for weight sharing).\nmessage ParamSpec {\n  // The names of the parameter blobs -- useful for sharing parameters among\n  // layers, but never required otherwise.  
To share a parameter between two\n  // layers, give it a (non-empty) name.\n  optional string name = 1;\n\n  // Whether to require shared weights to have the same shape, or just the same\n  // count -- defaults to STRICT if unspecified.\n  optional DimCheckMode share_mode = 2;\n  enum DimCheckMode {\n    // STRICT (default) requires that num, channels, height, width each match.\n    STRICT = 0;\n    // PERMISSIVE requires only the count (num*channels*height*width) to match.\n    PERMISSIVE = 1;\n  }\n\n  // The multiplier on the global learning rate for this parameter.\n  optional float lr_mult = 3 [default = 1.0];\n\n  // The multiplier on the global weight decay for this parameter.\n  optional float decay_mult = 4 [default = 1.0];\n}\n\n// NOTE\n// Update the next available ID when you add a new LayerParameter field.\n//\n// LayerParameter next available layer-specific ID: 149 (last added: clip_param)\nmessage LayerParameter {\n  optional string name = 1; // the layer name\n  optional string type = 2; // the layer type\n  repeated string bottom = 3; // the name of each bottom blob\n  repeated string top = 4; // the name of each top blob\n\n  // The train / test phase for computation.\n  optional Phase phase = 10;\n\n  // The amount of weight to assign each top blob in the objective.\n  // Each layer assigns a default value, usually of either 0 or 1,\n  // to each top blob.\n  repeated float loss_weight = 5;\n\n  // Specifies training parameters (multipliers on global learning constants,\n  // and the name and other settings used for weight sharing).\n  repeated ParamSpec param = 6;\n\n  // The blobs containing the numeric parameters of the layer.\n  repeated BlobProto blobs = 7;\n\n  // Specifies whether to backpropagate to each bottom. If unspecified,\n  // Caffe will automatically infer whether each input needs backpropagation\n  // to compute parameter gradients. 
If set to true for some inputs,\n  // backpropagation to those inputs is forced; if set false for some inputs,\n  // backpropagation to those inputs is skipped.\n  //\n  // The size must be either 0 or equal to the number of bottoms.\n  repeated bool propagate_down = 11;\n\n  // Rules controlling whether and when a layer is included in the network,\n  // based on the current NetState.  You may specify a non-zero number of rules\n  // to include OR exclude, but not both.  If no include or exclude rules are\n  // specified, the layer is always included.  If the current NetState meets\n  // ANY (i.e., one or more) of the specified rules, the layer is\n  // included/excluded.\n  repeated NetStateRule include = 8;\n  repeated NetStateRule exclude = 9;\n\n  // Parameters for data pre-processing.\n  optional TransformationParameter transform_param = 100;\n\n  // Parameters shared by loss layers.\n  optional LossParameter loss_param = 101;\n\n  // Layer type-specific parameters.\n  //\n  // Note: certain layers may have more than one computational engine\n  // for their implementation. 
These layers include an Engine type and\n  // engine parameter for selecting the implementation.\n  // The default for the engine is set by the ENGINE switch at compile-time.\n  optional AccuracyParameter accuracy_param = 102;\n  optional ArgMaxParameter argmax_param = 103;\n  optional BatchNormParameter batch_norm_param = 139;\n  optional BiasParameter bias_param = 141;\n  optional ClipParameter clip_param = 148;\n  optional ConcatParameter concat_param = 104;\n  optional ContrastiveLossParameter contrastive_loss_param = 105;\n  optional ConvolutionParameter convolution_param = 106;\n  optional CropParameter crop_param = 144;\n  optional DataParameter data_param = 107;\n  optional DropoutParameter dropout_param = 108;\n  optional DummyDataParameter dummy_data_param = 109;\n  optional EltwiseParameter eltwise_param = 110;\n  optional ELUParameter elu_param = 140;\n  optional EmbedParameter embed_param = 137;\n  optional ExpParameter exp_param = 111;\n  optional FlattenParameter flatten_param = 135;\n  optional HDF5DataParameter hdf5_data_param = 112;\n  optional HDF5OutputParameter hdf5_output_param = 113;\n  optional HingeLossParameter hinge_loss_param = 114;\n  optional ImageDataParameter image_data_param = 115;\n  optional InfogainLossParameter infogain_loss_param = 116;\n  optional InnerProductParameter inner_product_param = 117;\n  optional InputParameter input_param = 143;\n  optional LogParameter log_param = 134;\n  optional LRNParameter lrn_param = 118;\n  optional MemoryDataParameter memory_data_param = 119;\n  optional MVNParameter mvn_param = 120;\n  optional ParameterParameter parameter_param = 145;\n  optional PoolingParameter pooling_param = 121;\n  optional PowerParameter power_param = 122;\n  optional PReLUParameter prelu_param = 131;\n  optional PythonParameter python_param = 130;\n  optional RecurrentParameter recurrent_param = 146;\n  optional ReductionParameter reduction_param = 136;\n  optional ReLUParameter relu_param = 123;\n  optional 
ReshapeParameter reshape_param = 133;\n  optional ScaleParameter scale_param = 142;\n  optional SigmoidParameter sigmoid_param = 124;\n  optional SoftmaxParameter softmax_param = 125;\n  optional SPPParameter spp_param = 132;\n  optional SliceParameter slice_param = 126;\n  optional SwishParameter swish_param = 147;\n  optional TanHParameter tanh_param = 127;\n  optional ThresholdParameter threshold_param = 128;\n  optional TileParameter tile_param = 138;\n  optional WindowDataParameter window_data_param = 129;\n}\n\n// Message that stores parameters used to apply transformation\n// to the data layer's data\nmessage TransformationParameter {\n  // For data pre-processing, we can do simple scaling and subtract the\n  // data mean, if provided. Note that the mean subtraction is always carried\n  // out before scaling.\n  optional float scale = 1 [default = 1];\n  // Specify if we want to randomly mirror data.\n  optional bool mirror = 2 [default = false];\n  // Specify if we would like to randomly crop an image.\n  optional uint32 crop_size = 3 [default = 0];\n  // mean_file and mean_value cannot be specified at the same time\n  optional string mean_file = 4;\n  // if specified can be repeated once (would subtract it from all the channels)\n  // or can be repeated the same number of times as channels\n  // (would subtract them from the corresponding channel)\n  repeated float mean_value = 5;\n  // Force the decoded image to have 3 color channels.\n  optional bool force_color = 6 [default = false];\n  // Force the decoded image to have 1 color channel.\n  optional bool force_gray = 7 [default = false];\n}\n\n// Message that stores parameters shared by loss layers\nmessage LossParameter {\n  // If specified, ignore instances with the given label.\n  optional int32 ignore_label = 1;\n  // How to normalize the loss for loss layers that aggregate across batches,\n  // spatial dimensions, or other dimensions.  
Currently only implemented in\n  // SoftmaxWithLoss and SigmoidCrossEntropyLoss layers.\n  enum NormalizationMode {\n    // Divide by the number of examples in the batch times spatial dimensions.\n    // Outputs that receive the ignore label will NOT be ignored in computing\n    // the normalization factor.\n    FULL = 0;\n    // Divide by the total number of output locations that do not take the\n    // ignore_label.  If ignore_label is not set, this behaves like FULL.\n    VALID = 1;\n    // Divide by the batch size.\n    BATCH_SIZE = 2;\n    // Do not normalize the loss.\n    NONE = 3;\n  }\n  // For historical reasons, the default normalization for\n  // SigmoidCrossEntropyLoss is BATCH_SIZE and *not* VALID.\n  optional NormalizationMode normalization = 3 [default = VALID];\n  // Deprecated.  Ignored if normalization is specified.  If normalization\n  // is not specified, then setting this to false will be equivalent to\n  // normalization = BATCH_SIZE to be consistent with previous behavior.\n  optional bool normalize = 2;\n}\n\n// Messages that store parameters used by individual layer types follow, in\n// alphabetical order.\n\nmessage AccuracyParameter {\n  // When computing accuracy, count as correct by comparing the true label to\n  // the top k scoring classes.  By default, only compare to the top scoring\n  // class (i.e. argmax).\n  optional uint32 top_k = 1 [default = 1];\n\n  // The \"label\" axis of the prediction blob, whose argmax corresponds to the\n  // predicted label -- may be negative to index from the end (e.g., -1 for the\n  // last axis).  
For example, if axis == 1 and the predictions are\n  // (N x C x H x W), the label blob is expected to contain N*H*W ground truth\n  // labels with integer values in {0, 1, ..., C-1}.\n  optional int32 axis = 2 [default = 1];\n\n  // If specified, ignore instances with the given label.\n  optional int32 ignore_label = 3;\n}\n\nmessage ArgMaxParameter {\n  // If true produce pairs (argmax, maxval)\n  optional bool out_max_val = 1 [default = false];\n  optional uint32 top_k = 2 [default = 1];\n  // The axis along which to maximise -- may be negative to index from the\n  // end (e.g., -1 for the last axis).\n  // By default ArgMaxLayer maximizes over the flattened trailing dimensions\n  // for each index of the first / num dimension.\n  optional int32 axis = 3;\n}\n\n// Message that stores parameters used by ClipLayer\nmessage ClipParameter {\n  required float min = 1;\n  required float max = 2;\n}\n\nmessage ConcatParameter {\n  // The axis along which to concatenate -- may be negative to index from the\n  // end (e.g., -1 for the last axis).  
Other axes must have the\n  // same dimension for all the bottom blobs.\n  // By default, ConcatLayer concatenates blobs along the \"channels\" axis (1).\n  optional int32 axis = 2 [default = 1];\n\n  // DEPRECATED: alias for \"axis\" -- does not support negative indexing.\n  optional uint32 concat_dim = 1 [default = 1];\n}\n\nmessage BatchNormParameter {\n  // If false, normalization is performed over the current mini-batch\n  // and global statistics are accumulated (but not yet used) by a moving\n  // average.\n  // If true, those accumulated mean and variance values are used for the\n  // normalization.\n  // By default, it is set to false when the network is in the training\n  // phase and true when the network is in the testing phase.\n  optional bool use_global_stats = 1;\n  // What fraction of the moving average remains each iteration?\n  // Smaller values make the moving average decay faster, giving more\n  // weight to the recent values.\n  // Each iteration updates the moving average @f$S_{t-1}@f$ with the\n  // current mean @f$ Y_t @f$ by\n  // @f$ S_t = (1-\\beta)Y_t + \\beta \\cdot S_{t-1} @f$, where @f$ \\beta @f$\n  // is the moving_average_fraction parameter.\n  optional float moving_average_fraction = 2 [default = .999];\n  // Small value to add to the variance estimate so that we don't divide by\n  // zero.\n  optional float eps = 3 [default = 1e-5];\n}\n\nmessage BiasParameter {\n  // The first axis of bottom[0] (the first input Blob) along which to apply\n  // bottom[1] (the second input Blob).  
May be negative to index from the end\n  // (e.g., -1 for the last axis).\n  //\n  // For example, if bottom[0] is 4D with shape 100x3x40x60, the output\n  // top[0] will have the same shape, and bottom[1] may have any of the\n  // following shapes (for the given value of axis):\n  //    (axis == 0 == -4) 100; 100x3; 100x3x40; 100x3x40x60\n  //    (axis == 1 == -3)          3;     3x40;     3x40x60\n  //    (axis == 2 == -2)                   40;       40x60\n  //    (axis == 3 == -1)                                60\n  // Furthermore, bottom[1] may have the empty shape (regardless of the value of\n  // \"axis\") -- a scalar bias.\n  optional int32 axis = 1 [default = 1];\n\n  // (num_axes is ignored unless just one bottom is given and the bias is\n  // a learned parameter of the layer.  Otherwise, num_axes is determined by the\n  // number of axes of the second bottom.)\n  // The number of axes of the input (bottom[0]) covered by the bias\n  // parameter, or -1 to cover all axes of bottom[0] starting from `axis`.\n  // Set num_axes := 0 to add a zero-axis Blob: a scalar.\n  optional int32 num_axes = 2 [default = 1];\n\n  // (filler is ignored unless just one bottom is given and the bias is\n  // a learned parameter of the layer.)\n  // The initialization for the learned bias parameter.\n  // Default is the zero (0) initialization, resulting in the BiasLayer\n  // initially performing the identity operation.\n  optional FillerParameter filler = 3;\n}\n\nmessage ContrastiveLossParameter {\n  // margin for dissimilar pair\n  optional float margin = 1 [default = 1.0];\n  // The first implementation of this cost did not exactly match the cost of\n  // Hadsell et al 2006 -- using (margin - d^2) instead of (margin - d)^2.\n  // legacy_version = false (the default) uses (margin - d)^2 as proposed in the\n  // Hadsell paper. New models should probably use this version.\n  // legacy_version = true uses (margin - d^2). 
This is kept to support /\n  // reproduce existing models and results\n  optional bool legacy_version = 2 [default = false];\n}\n\nmessage ConvolutionParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  optional bool bias_term = 2 [default = true]; // whether to have bias terms\n\n  // Pad, kernel size, and stride are all given as a single value for equal\n  // dimensions in all spatial dimensions, or once per spatial dimension.\n  repeated uint32 pad = 3; // The padding size; defaults to 0\n  repeated uint32 kernel_size = 4; // The kernel size\n  repeated uint32 stride = 6; // The stride; defaults to 1\n  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting\n  // holes. (Kernel dilation is sometimes referred to by its use in the\n  // algorithme à trous from Holschneider et al. 1987.)\n  repeated uint32 dilation = 18; // The dilation; defaults to 1\n\n  // For 2D convolution only, the *_h and *_w versions may also be used to\n  // specify both spatial dimensions.\n  optional uint32 pad_h = 9 [default = 0]; // The padding height (2D only)\n  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)\n  optional uint32 kernel_h = 11; // The kernel height (2D only)\n  optional uint32 kernel_w = 12; // The kernel width (2D only)\n  optional uint32 stride_h = 13; // The stride height (2D only)\n  optional uint32 stride_w = 14; // The stride width (2D only)\n\n  optional uint32 group = 5 [default = 1]; // The group size for group conv\n\n  optional FillerParameter weight_filler = 7; // The filler for the weight\n  optional FillerParameter bias_filler = 8; // The filler for the bias\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 15 [default = DEFAULT];\n\n  // The axis to interpret as \"channels\" when performing convolution.\n  // Preceding dimensions are treated as independent inputs;\n  // succeeding dimensions are treated as \"spatial\".\n  
// With (N, C, H, W) inputs, and axis == 1 (the default), we perform\n  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for\n  // groups g>1) filters across the spatial axes (H, W) of the input.\n  // With (N, C, D, H, W) inputs, and axis == 1, we perform\n  // N independent 3D convolutions, sliding (C/g)-channels\n  // filters across the spatial axes (D, H, W) of the input.\n  optional int32 axis = 16 [default = 1];\n\n  // Whether to force use of the general ND convolution, even if a specific\n  // implementation for blobs of the appropriate number of spatial dimensions\n  // is available. (Currently, there is only a 2D-specific convolution\n  // implementation; for input blobs with num_axes != 2, this option is\n  // ignored and the ND implementation will be used.)\n  optional bool force_nd_im2col = 17 [default = false];\n}\n\nmessage CropParameter {\n  // To crop, elements of the first bottom are selected to fit the dimensions\n  // of the second, reference bottom. 
The crop is configured by\n  // - the crop `axis` to pick the dimensions for cropping\n  // - the crop `offset` to set the shift for all/each dimension\n  // to align the cropped bottom with the reference bottom.\n  // All dimensions up to but excluding `axis` are preserved, while\n  // the dimensions including and trailing `axis` are cropped.\n  // If only one `offset` is set, then all dimensions are offset by this amount.\n  // Otherwise, the number of offsets must equal the number of cropped axes to\n  // shift the crop in each dimension accordingly.\n  // Note: standard dimensions are N,C,H,W so the default is a spatial crop,\n  // and `axis` may be negative to index from the end (e.g., -1 for the last\n  // axis).\n  optional int32 axis = 1 [default = 2];\n  repeated uint32 offset = 2;\n}\n\nmessage DataParameter {\n  enum DB {\n    LEVELDB = 0;\n    LMDB = 1;\n  }\n  // Specify the data source.\n  optional string source = 1;\n  // Specify the batch size.\n  optional uint32 batch_size = 4;\n  // The rand_skip variable is for the data layer to skip a few data points\n  // to prevent all asynchronous sgd clients from starting at the same point.\n  // The skip point would be set as rand_skip * rand(0,1). Note that rand_skip\n  // should not be larger than the number of keys in the database.\n  // DEPRECATED. Each solver accesses a different subset of the database.\n  optional uint32 rand_skip = 7 [default = 0];\n  optional DB backend = 8 [default = LEVELDB];\n  // DEPRECATED. See TransformationParameter. For data pre-processing, we can do\n  // simple scaling and subtracting the data mean, if provided. Note that the\n  // mean subtraction is always carried out before scaling.\n  optional float scale = 2 [default = 1];\n  optional string mean_file = 3;\n  // DEPRECATED. See TransformationParameter. Specify if we would like to randomly\n  // crop an image.\n  optional uint32 crop_size = 5 [default = 0];\n  // DEPRECATED. See TransformationParameter. 
Specify if we want to randomly mirror\n  // data.\n  optional bool mirror = 6 [default = false];\n  // Force the encoded image to have 3 color channels\n  optional bool force_encoded_color = 9 [default = false];\n  // Prefetch queue (Increase if data feeding bandwidth varies, within the\n  // limit of device memory for GPU training)\n  optional uint32 prefetch = 10 [default = 4];\n}\n\nmessage DropoutParameter {\n  optional float dropout_ratio = 1 [default = 0.5]; // dropout ratio\n}\n\n// DummyDataLayer fills any number of arbitrarily shaped blobs with random\n// (or constant) data generated by \"Fillers\" (see \"message FillerParameter\").\nmessage DummyDataParameter {\n  // This layer produces N >= 1 top blobs.  DummyDataParameter must specify 1 or N\n  // shape fields, and 0, 1 or N data_fillers.\n  //\n  // If 0 data_fillers are specified, ConstantFiller with a value of 0 is used.\n  // If 1 data_filler is specified, it is applied to all top blobs.  If N are\n  // specified, the ith is applied to the ith top blob.\n  repeated FillerParameter data_filler = 1;\n  repeated BlobShape shape = 6;\n\n  // 4D dimensions -- deprecated.  Use \"shape\" instead.\n  repeated uint32 num = 2;\n  repeated uint32 channels = 3;\n  repeated uint32 height = 4;\n  repeated uint32 width = 5;\n}\n\nmessage EltwiseParameter {\n  enum EltwiseOp {\n    PROD = 0;\n    SUM = 1;\n    MAX = 2;\n  }\n  optional EltwiseOp operation = 1 [default = SUM]; // element-wise operation\n  repeated float coeff = 2; // blob-wise coefficient for SUM operation\n\n  // Whether to use an asymptotically slower (for >2 inputs) but stabler method\n  // of computing the gradient for the PROD operation. (No effect for SUM op.)\n  optional bool stable_prod_grad = 3 [default = true];\n}\n\n// Message that stores parameters used by ELULayer\nmessage ELUParameter {\n  // Described in:\n  // Clevert, D.-A., Unterthiner, T., & Hochreiter, S. (2015). 
Fast and Accurate\n  // Deep Network Learning by Exponential Linear Units (ELUs). arXiv\n  optional float alpha = 1 [default = 1];\n}\n\n// Message that stores parameters used by EmbedLayer\nmessage EmbedParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  // The input is given as integers to be interpreted as one-hot\n  // vector indices with dimension num_input.  Hence num_input should be\n  // 1 greater than the maximum possible input value.\n  optional uint32 input_dim = 2;\n\n  optional bool bias_term = 3 [default = true]; // Whether to use a bias term\n  optional FillerParameter weight_filler = 4; // The filler for the weight\n  optional FillerParameter bias_filler = 5; // The filler for the bias\n\n}\n\n// Message that stores parameters used by ExpLayer\nmessage ExpParameter {\n  // ExpLayer computes outputs y = base ^ (shift + scale * x), for base > 0.\n  // Or if base is set to the default (-1), base is set to e,\n  // so y = exp(shift + scale * x).\n  optional float base = 1 [default = -1.0];\n  optional float scale = 2 [default = 1.0];\n  optional float shift = 3 [default = 0.0];\n}\n\n/// Message that stores parameters used by FlattenLayer\nmessage FlattenParameter {\n  // The first axis to flatten: all preceding axes are retained in the output.\n  // May be negative to index from the end (e.g., -1 for the last axis).\n  optional int32 axis = 1 [default = 1];\n\n  // The last axis to flatten: all following axes are retained in the output.\n  // May be negative to index from the end (e.g., the default -1 for the last\n  // axis).\n  optional int32 end_axis = 2 [default = -1];\n}\n\n// Message that stores parameters used by HDF5DataLayer\nmessage HDF5DataParameter {\n  // Specify the data source.\n  optional string source = 1;\n  // Specify the batch size.\n  optional uint32 batch_size = 2;\n\n  // Specify whether to shuffle the data.\n  // If shuffle == true, the ordering of the HDF5 files is shuffled,\n  // and the 
ordering of data within any given HDF5 file is shuffled,\n  // but data between different files are not interleaved; all of a file's\n  // data are output (in a random order) before moving onto another file.\n  optional bool shuffle = 3 [default = false];\n}\n\nmessage HDF5OutputParameter {\n  optional string file_name = 1;\n}\n\nmessage HingeLossParameter {\n  enum Norm {\n    L1 = 1;\n    L2 = 2;\n  }\n  // Specify the Norm to use: L1 or L2\n  optional Norm norm = 1 [default = L1];\n}\n\nmessage ImageDataParameter {\n  // Specify the data source.\n  optional string source = 1;\n  // Specify the batch size.\n  optional uint32 batch_size = 4 [default = 1];\n  // The rand_skip variable is for the data layer to skip a few data points\n  // to prevent all asynchronous sgd clients from starting at the same point.\n  // The skip point would be set as rand_skip * rand(0,1). Note that rand_skip\n  // should not be larger than the number of keys in the database.\n  optional uint32 rand_skip = 7 [default = 0];\n  // Whether or not ImageLayer should shuffle the list of files at every epoch.\n  optional bool shuffle = 8 [default = false];\n  // It will also resize images if new_height or new_width are not zero.\n  optional uint32 new_height = 9 [default = 0];\n  optional uint32 new_width = 10 [default = 0];\n  // Specify if the images are color or gray\n  optional bool is_color = 11 [default = true];\n  // DEPRECATED. See TransformationParameter. For data pre-processing, we can do\n  // simple scaling and subtracting the data mean, if provided. Note that the\n  // mean subtraction is always carried out before scaling.\n  optional float scale = 2 [default = 1];\n  optional string mean_file = 3;\n  // DEPRECATED. See TransformationParameter. Specify if we would like to randomly\n  // crop an image.\n  optional uint32 crop_size = 5 [default = 0];\n  // DEPRECATED. See TransformationParameter. 
Specify if we want to randomly mirror\n  // data.\n  optional bool mirror = 6 [default = false];\n  optional string root_folder = 12 [default = \"\"];\n}\n\nmessage InfogainLossParameter {\n  // Specify the infogain matrix source.\n  optional string source = 1;\n  optional int32 axis = 2 [default = 1]; // axis of prob\n}\n\nmessage InnerProductParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  optional bool bias_term = 2 [default = true]; // whether to have bias terms\n  optional FillerParameter weight_filler = 3; // The filler for the weight\n  optional FillerParameter bias_filler = 4; // The filler for the bias\n\n  // The first axis to be lumped into a single inner product computation;\n  // all preceding axes are retained in the output.\n  // May be negative to index from the end (e.g., -1 for the last axis).\n  optional int32 axis = 5 [default = 1];\n  // Specify whether to transpose the weight matrix or not.\n  // If transpose == true, any operations will be performed on the transpose\n  // of the weight matrix. 
The weight matrix itself is not going to be transposed\n  // but rather the transfer flag of operations will be toggled accordingly.\n  optional bool transpose = 6 [default = false];\n}\n\nmessage InputParameter {\n  // This layer produces N >= 1 top blob(s) to be assigned manually.\n  // Define N shapes to set a shape for each top.\n  // Define 1 shape to set the same shape for every top.\n  // Define no shape to defer to reshaping manually.\n  repeated BlobShape shape = 1;\n}\n\n// Message that stores parameters used by LogLayer\nmessage LogParameter {\n  // LogLayer computes outputs y = log_base(shift + scale * x), for base > 0.\n  // Or if base is set to the default (-1), base is set to e,\n  // so y = ln(shift + scale * x) = log_e(shift + scale * x)\n  optional float base = 1 [default = -1.0];\n  optional float scale = 2 [default = 1.0];\n  optional float shift = 3 [default = 0.0];\n}\n\n// Message that stores parameters used by LRNLayer\nmessage LRNParameter {\n  optional uint32 local_size = 1 [default = 5];\n  optional float alpha = 2 [default = 1.];\n  optional float beta = 3 [default = 0.75];\n  enum NormRegion {\n    ACROSS_CHANNELS = 0;\n    WITHIN_CHANNEL = 1;\n  }\n  optional NormRegion norm_region = 4 [default = ACROSS_CHANNELS];\n  optional float k = 5 [default = 1.];\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 6 [default = DEFAULT];\n}\n\nmessage MemoryDataParameter {\n  optional uint32 batch_size = 1;\n  optional uint32 channels = 2;\n  optional uint32 height = 3;\n  optional uint32 width = 4;\n}\n\nmessage MVNParameter {\n  // This parameter can be set to false to normalize mean only\n  optional bool normalize_variance = 1 [default = true];\n\n  // This parameter can be set to true to perform DNN-like MVN\n  optional bool across_channels = 2 [default = false];\n\n  // Epsilon for not dividing by zero while normalizing variance\n  optional float eps = 3 [default = 1e-9];\n}\n\nmessage 
ParameterParameter {\n  optional BlobShape shape = 1;\n}\n\nmessage PoolingParameter {\n  enum PoolMethod {\n    MAX = 0;\n    AVE = 1;\n    STOCHASTIC = 2;\n  }\n  optional PoolMethod pool = 1 [default = MAX]; // The pooling method\n  // Pad, kernel size, and stride are all given as a single value for equal\n  // dimensions in height and width or as Y, X pairs.\n  optional uint32 pad = 4 [default = 0]; // The padding size (equal in Y, X)\n  optional uint32 pad_h = 9 [default = 0]; // The padding height\n  optional uint32 pad_w = 10 [default = 0]; // The padding width\n  optional uint32 kernel_size = 2; // The kernel size (square)\n  optional uint32 kernel_h = 5; // The kernel height\n  optional uint32 kernel_w = 6; // The kernel width\n  optional uint32 stride = 3 [default = 1]; // The stride (equal in Y, X)\n  optional uint32 stride_h = 7; // The stride height\n  optional uint32 stride_w = 8; // The stride width\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 11 [default = DEFAULT];\n  // If global_pooling then it will pool over the size of the bottom by doing\n  // kernel_h = bottom->height and kernel_w = bottom->width\n  optional bool global_pooling = 12 [default = false];\n  // How to calculate the output size - using ceil (default) or floor rounding.\n  enum RoundMode {\n    CEIL = 0;\n    FLOOR = 1;\n  }\n  optional RoundMode round_mode = 13 [default = CEIL];\n}\n\nmessage PowerParameter {\n  // PowerLayer computes outputs y = (shift + scale * x) ^ power.\n  optional float power = 1 [default = 1.0];\n  optional float scale = 2 [default = 1.0];\n  optional float shift = 3 [default = 0.0];\n}\n\nmessage PythonParameter {\n  optional string module = 1;\n  optional string layer = 2;\n  // This value is set to the attribute `param_str` of the `PythonLayer` object\n  // in Python before calling the `setup()` method. This could be a number,\n  // string, dictionary in Python dict format, JSON, etc. 
You may parse this\n  // string in `setup` method and use it in `forward` and `backward`.\n  optional string param_str = 3 [default = ''];\n  // DEPRECATED\n  optional bool share_in_parallel = 4 [default = false];\n}\n\n// Message that stores parameters used by RecurrentLayer\nmessage RecurrentParameter {\n  // The dimension of the output (and usually hidden state) representation --\n  // must be explicitly set to non-zero.\n  optional uint32 num_output = 1 [default = 0];\n\n  optional FillerParameter weight_filler = 2; // The filler for the weight\n  optional FillerParameter bias_filler = 3; // The filler for the bias\n\n  // Whether to enable displaying debug_info in the unrolled recurrent net.\n  optional bool debug_info = 4 [default = false];\n\n  // Whether to add as additional inputs (bottoms) the initial hidden state\n  // blobs, and add as additional outputs (tops) the final timestep hidden state\n  // blobs.  The number of additional bottom/top blobs required depends on the\n  // recurrent architecture -- e.g., 1 for RNNs, 2 for LSTMs.\n  optional bool expose_hidden = 5 [default = false];\n}\n\n// Message that stores parameters used by ReductionLayer\nmessage ReductionParameter {\n  enum ReductionOp {\n    SUM = 1;\n    ASUM = 2;\n    SUMSQ = 3;\n    MEAN = 4;\n  }\n\n  optional ReductionOp operation = 1 [default = SUM]; // reduction operation\n\n  // The first axis to reduce to a scalar -- may be negative to index from the\n  // end (e.g., -1 for the last axis).\n  // (Currently, only reduction along ALL \"tail\" axes is supported; reduction\n  // of axis M through N, where N < num_axes - 1, is unsupported.)\n  // Suppose we have an n-axis bottom Blob with shape:\n  //     (d0, d1, d2, ..., d(m-1), dm, d(m+1), ..., d(n-1)).\n  // If axis == m, the output Blob will have shape\n  //     (d0, d1, d2, ..., d(m-1)),\n  // and the ReductionOp operation is performed (d0 * d1 * d2 * ... * d(m-1))\n  // times, each including (dm * d(m+1) * ... 
* d(n-1)) individual data.\n  // If axis == 0 (the default), the output Blob always has the empty shape\n  // (count 1), performing reduction across the entire input --\n  // often useful for creating new loss functions.\n  optional int32 axis = 2 [default = 0];\n\n  optional float coeff = 3 [default = 1.0]; // coefficient for output\n}\n\n// Message that stores parameters used by ReLULayer\nmessage ReLUParameter {\n  // Allow non-zero slope for negative inputs to speed up optimization\n  // Described in:\n  // Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013). Rectifier nonlinearities\n  // improve neural network acoustic models. In ICML Workshop on Deep Learning\n  // for Audio, Speech, and Language Processing.\n  optional float negative_slope = 1 [default = 0];\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 2 [default = DEFAULT];\n}\n\nmessage ReshapeParameter {\n  // Specify the output dimensions. If some of the dimensions are set to 0,\n  // the corresponding dimension from the bottom layer is used (unchanged).\n  // Exactly one dimension may be set to -1, in which case its value is\n  // inferred from the count of the bottom blob and the remaining dimensions.\n  // For example, suppose we want to reshape a 2D blob \"input\" with shape 2 x 8:\n  //\n  //   layer {\n  //     type: \"Reshape\" bottom: \"input\" top: \"output\"\n  //     reshape_param { ... 
}\n  //   }\n  //\n  // If \"input\" is 2D with shape 2 x 8, then the following reshape_param\n  // specifications are all equivalent, producing a 3D blob \"output\" with shape\n  // 2 x 2 x 4:\n  //\n  //   reshape_param { shape { dim:  2  dim: 2  dim:  4 } }\n  //   reshape_param { shape { dim:  0  dim: 2  dim:  4 } }\n  //   reshape_param { shape { dim:  0  dim: 2  dim: -1 } }\n  //   reshape_param { shape { dim:  0  dim:-1  dim:  4 } }\n  //\n  optional BlobShape shape = 1;\n\n  // axis and num_axes control the portion of the bottom blob's shape that is\n  // replaced by (included in) the reshape. By default (axis == 0 and\n  // num_axes == -1), the entire bottom blob shape is included in the reshape,\n  // and hence the shape field must specify the entire output shape.\n  //\n  // axis may be non-zero to retain some portion of the beginning of the input\n  // shape (and may be negative to index from the end; e.g., -1 to begin the\n  // reshape after the last axis, including nothing in the reshape,\n  // -2 to include only the last axis, etc.).\n  //\n  // For example, suppose \"input\" is a 2D blob with shape 2 x 8.\n  // Then the following ReshapeLayer specifications are all equivalent,\n  // producing a blob \"output\" with shape 2 x 2 x 4:\n  //\n  //   reshape_param { shape { dim: 2  dim: 2  dim: 4 } }\n  //   reshape_param { shape { dim: 2  dim: 4 } axis:  1 }\n  //   reshape_param { shape { dim: 2  dim: 4 } axis: -3 }\n  //\n  // num_axes specifies the extent of the reshape.\n  // If num_axes >= 0 (and axis >= 0), the reshape will be performed only on\n  // input axes in the range [axis, axis+num_axes].\n  // num_axes may also be -1, the default, to include all remaining axes\n  // (starting from axis).\n  //\n  // For example, suppose \"input\" is a 2D blob with shape 2 x 8.\n  // Then the following ReshapeLayer specifications are equivalent,\n  // producing a blob \"output\" with shape 1 x 2 x 8.\n  //\n  //   reshape_param { shape { dim:  1  dim: 2  
dim:  8 } }\n  //   reshape_param { shape { dim:  1  dim: 2  }  num_axes: 1 }\n  //   reshape_param { shape { dim:  1  }  num_axes: 0 }\n  //\n  // On the other hand, these would produce output blob shape 2 x 1 x 8:\n  //\n  //   reshape_param { shape { dim: 2  dim: 1  dim: 8  }  }\n  //   reshape_param { shape { dim: 1 }  axis: 1  num_axes: 0 }\n  //\n  optional int32 axis = 2 [default = 0];\n  optional int32 num_axes = 3 [default = -1];\n}\n\nmessage ScaleParameter {\n  // The first axis of bottom[0] (the first input Blob) along which to apply\n  // bottom[1] (the second input Blob).  May be negative to index from the end\n  // (e.g., -1 for the last axis).\n  //\n  // For example, if bottom[0] is 4D with shape 100x3x40x60, the output\n  // top[0] will have the same shape, and bottom[1] may have any of the\n  // following shapes (for the given value of axis):\n  //    (axis == 0 == -4) 100; 100x3; 100x3x40; 100x3x40x60\n  //    (axis == 1 == -3)          3;     3x40;     3x40x60\n  //    (axis == 2 == -2)                   40;       40x60\n  //    (axis == 3 == -1)                                60\n  // Furthermore, bottom[1] may have the empty shape (regardless of the value of\n  // \"axis\") -- a scalar multiplier.\n  optional int32 axis = 1 [default = 1];\n\n  // (num_axes is ignored unless just one bottom is given and the scale is\n  // a learned parameter of the layer.  
Otherwise, num_axes is determined by the\n  // number of axes of the second bottom.)\n  // The number of axes of the input (bottom[0]) covered by the scale\n  // parameter, or -1 to cover all axes of bottom[0] starting from `axis`.\n  // Set num_axes := 0 to multiply with a zero-axis Blob: a scalar.\n  optional int32 num_axes = 2 [default = 1];\n\n  // (filler is ignored unless just one bottom is given and the scale is\n  // a learned parameter of the layer.)\n  // The initialization for the learned scale parameter.\n  // Default is the unit (1) initialization, resulting in the ScaleLayer\n  // initially performing the identity operation.\n  optional FillerParameter filler = 3;\n\n  // Whether to also learn a bias (equivalent to a ScaleLayer+BiasLayer, but\n  // may be more efficient).  Initialized with bias_filler (defaults to 0).\n  optional bool bias_term = 4 [default = false];\n  optional FillerParameter bias_filler = 5;\n}\n\nmessage SigmoidParameter {\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 1 [default = DEFAULT];\n}\n\nmessage SliceParameter {\n  // The axis along which to slice -- may be negative to index from the end\n  // (e.g., -1 for the last axis).\n  // By default, SliceLayer slices blobs along the \"channels\" axis (1).\n  optional int32 axis = 3 [default = 1];\n  repeated uint32 slice_point = 2;\n\n  // DEPRECATED: alias for \"axis\" -- does not support negative indexing.\n  optional uint32 slice_dim = 1 [default = 1];\n}\n\n// Message that stores parameters used by SoftmaxLayer, SoftmaxWithLossLayer\nmessage SoftmaxParameter {\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 1 [default = DEFAULT];\n\n  // The axis along which to perform the softmax -- may be negative to index\n  // from the end (e.g., -1 for the last axis).\n  // Any other axes will be evaluated as independent softmaxes.\n  optional int32 axis = 2 [default = 
1];\n}\n\n// Message that stores parameters used by SwishLayer\nmessage SwishParameter {\n  // Beta parameter for the Swish activation function\n  // Described in:\n  // Prajit Ramachandran, Barret Zoph, Quoc V. Le. (2017). Searching for\n  // Activation Functions. https://arxiv.org/abs/1710.05941v2\n  optional float beta = 1 [default = 1];\n}\n\nmessage TanHParameter {\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 1 [default = DEFAULT];\n}\n\n// Message that stores parameters used by TileLayer\nmessage TileParameter {\n  // The index of the axis to tile.\n  optional int32 axis = 1 [default = 1];\n\n  // The number of copies (tiles) of the blob to output.\n  optional int32 tiles = 2;\n}\n\n// Message that stores parameters used by ThresholdLayer\nmessage ThresholdParameter {\n  optional float threshold = 1 [default = 0]; // Strictly positive values\n}\n\nmessage WindowDataParameter {\n  // Specify the data source.\n  optional string source = 1;\n  // For data pre-processing, we can do simple scaling and subtracting the\n  // data mean, if provided. 
Note that the mean subtraction is always carried\n  // out before scaling.\n  optional float scale = 2 [default = 1];\n  optional string mean_file = 3;\n  // Specify the batch size.\n  optional uint32 batch_size = 4;\n  // Specify if we would like to randomly crop an image.\n  optional uint32 crop_size = 5 [default = 0];\n  // Specify if we want to randomly mirror data.\n  optional bool mirror = 6 [default = false];\n  // Foreground (object) overlap threshold\n  optional float fg_threshold = 7 [default = 0.5];\n  // Background (non-object) overlap threshold\n  optional float bg_threshold = 8 [default = 0.5];\n  // Fraction of batch that should be foreground objects\n  optional float fg_fraction = 9 [default = 0.25];\n  // Amount of contextual padding to add around a window\n  // (used only by the window_data_layer)\n  optional uint32 context_pad = 10 [default = 0];\n  // Mode for cropping out a detection window\n  // warp: cropped window is warped to a fixed size and aspect ratio\n  // square: the tightest square around the window is cropped\n  optional string crop_mode = 11 [default = \"warp\"];\n  // cache_images: will load all images in memory for faster access\n  optional bool cache_images = 12 [default = false];\n  // append root_folder to locate images\n  optional string root_folder = 13 [default = \"\"];\n}\n\nmessage SPPParameter {\n  enum PoolMethod {\n    MAX = 0;\n    AVE = 1;\n    STOCHASTIC = 2;\n  }\n  optional uint32 pyramid_height = 1;\n  optional PoolMethod pool = 2 [default = MAX]; // The pooling method\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 6 [default = DEFAULT];\n}\n\n// DEPRECATED: use LayerParameter.\nmessage V1LayerParameter {\n  repeated string bottom = 2;\n  repeated string top = 3;\n  optional string name = 4;\n  repeated NetStateRule include = 32;\n  repeated NetStateRule exclude = 33;\n  enum LayerType {\n    NONE = 0;\n    ABSVAL = 35;\n    ACCURACY = 1;\n    ARGMAX = 30;\n    
BNLL = 2;\n    CONCAT = 3;\n    CONTRASTIVE_LOSS = 37;\n    CONVOLUTION = 4;\n    DATA = 5;\n    DECONVOLUTION = 39;\n    DROPOUT = 6;\n    DUMMY_DATA = 32;\n    EUCLIDEAN_LOSS = 7;\n    ELTWISE = 25;\n    EXP = 38;\n    FLATTEN = 8;\n    HDF5_DATA = 9;\n    HDF5_OUTPUT = 10;\n    HINGE_LOSS = 28;\n    IM2COL = 11;\n    IMAGE_DATA = 12;\n    INFOGAIN_LOSS = 13;\n    INNER_PRODUCT = 14;\n    LRN = 15;\n    MEMORY_DATA = 29;\n    MULTINOMIAL_LOGISTIC_LOSS = 16;\n    MVN = 34;\n    POOLING = 17;\n    POWER = 26;\n    RELU = 18;\n    SIGMOID = 19;\n    SIGMOID_CROSS_ENTROPY_LOSS = 27;\n    SILENCE = 36;\n    SOFTMAX = 20;\n    SOFTMAX_LOSS = 21;\n    SPLIT = 22;\n    SLICE = 33;\n    TANH = 23;\n    WINDOW_DATA = 24;\n    THRESHOLD = 31;\n  }\n  optional LayerType type = 5;\n  repeated BlobProto blobs = 6;\n  repeated string param = 1001;\n  repeated DimCheckMode blob_share_mode = 1002;\n  enum DimCheckMode {\n    STRICT = 0;\n    PERMISSIVE = 1;\n  }\n  repeated float blobs_lr = 7;\n  repeated float weight_decay = 8;\n  repeated float loss_weight = 35;\n  optional AccuracyParameter accuracy_param = 27;\n  optional ArgMaxParameter argmax_param = 23;\n  optional ConcatParameter concat_param = 9;\n  optional ContrastiveLossParameter contrastive_loss_param = 40;\n  optional ConvolutionParameter convolution_param = 10;\n  optional DataParameter data_param = 11;\n  optional DropoutParameter dropout_param = 12;\n  optional DummyDataParameter dummy_data_param = 26;\n  optional EltwiseParameter eltwise_param = 24;\n  optional ExpParameter exp_param = 41;\n  optional HDF5DataParameter hdf5_data_param = 13;\n  optional HDF5OutputParameter hdf5_output_param = 14;\n  optional HingeLossParameter hinge_loss_param = 29;\n  optional ImageDataParameter image_data_param = 15;\n  optional InfogainLossParameter infogain_loss_param = 16;\n  optional InnerProductParameter inner_product_param = 17;\n  optional LRNParameter lrn_param = 18;\n  optional MemoryDataParameter memory_data_param = 
22;\n  optional MVNParameter mvn_param = 34;\n  optional PoolingParameter pooling_param = 19;\n  optional PowerParameter power_param = 21;\n  optional ReLUParameter relu_param = 30;\n  optional SigmoidParameter sigmoid_param = 38;\n  optional SoftmaxParameter softmax_param = 39;\n  optional SliceParameter slice_param = 31;\n  optional TanHParameter tanh_param = 37;\n  optional ThresholdParameter threshold_param = 25;\n  optional WindowDataParameter window_data_param = 20;\n  optional TransformationParameter transform_param = 36;\n  optional LossParameter loss_param = 42;\n  optional V0LayerParameter layer = 1;\n}\n\n// DEPRECATED: V0LayerParameter is the old way of specifying layer parameters\n// in Caffe.  We keep this message type around for legacy support.\nmessage V0LayerParameter {\n  optional string name = 1; // the layer name\n  optional string type = 2; // the string to specify the layer type\n\n  // Parameters to specify layers with inner products.\n  optional uint32 num_output = 3; // The number of outputs for the layer\n  optional bool biasterm = 4 [default = true]; // whether to have bias terms\n  optional FillerParameter weight_filler = 5; // The filler for the weight\n  optional FillerParameter bias_filler = 6; // The filler for the bias\n\n  optional uint32 pad = 7 [default = 0]; // The padding size\n  optional uint32 kernelsize = 8; // The kernel size\n  optional uint32 group = 9 [default = 1]; // The group size for group conv\n  optional uint32 stride = 10 [default = 1]; // The stride\n  enum PoolMethod {\n    MAX = 0;\n    AVE = 1;\n    STOCHASTIC = 2;\n  }\n  optional PoolMethod pool = 11 [default = MAX]; // The pooling method\n  optional float dropout_ratio = 12 [default = 0.5]; // dropout ratio\n\n  optional uint32 local_size = 13 [default = 5]; // for local response norm\n  optional float alpha = 14 [default = 1.]; // for local response norm\n  optional float beta = 15 [default = 0.75]; // for local response norm\n  optional float k = 22 
[default = 1.];\n\n  // For data layers, specify the data source\n  optional string source = 16;\n  // For data pre-processing, we can do simple scaling and subtracting the\n  // data mean, if provided. Note that the mean subtraction is always carried\n  // out before scaling.\n  optional float scale = 17 [default = 1];\n  optional string meanfile = 18;\n  // For data layers, specify the batch size.\n  optional uint32 batchsize = 19;\n  // For data layers, specify if we would like to randomly crop an image.\n  optional uint32 cropsize = 20 [default = 0];\n  // For data layers, specify if we want to randomly mirror data.\n  optional bool mirror = 21 [default = false];\n\n  // The blobs containing the numeric parameters of the layer\n  repeated BlobProto blobs = 50;\n  // The ratio that is multiplied on the global learning rate. If you want to\n  // set the learning ratio for one blob, you need to set it for all blobs.\n  repeated float blobs_lr = 51;\n  // The weight decay that is multiplied on the global weight decay.\n  repeated float weight_decay = 52;\n\n  // The rand_skip variable is for the data layer to skip a few data points\n  // to avoid all asynchronous sgd clients to start at the same point. The skip\n  // point would be set as rand_skip * rand(0,1). 
Note that rand_skip should not\n  // be larger than the number of keys in the database.\n  optional uint32 rand_skip = 53 [default = 0];\n\n  // Fields related to detection (det_*)\n  // foreground (object) overlap threshold\n  optional float det_fg_threshold = 54 [default = 0.5];\n  // background (non-object) overlap threshold\n  optional float det_bg_threshold = 55 [default = 0.5];\n  // Fraction of batch that should be foreground objects\n  optional float det_fg_fraction = 56 [default = 0.25];\n\n  // optional bool OBSOLETE_can_clobber = 57 [default = true];\n\n  // Amount of contextual padding to add around a window\n  // (used only by the window_data_layer)\n  optional uint32 det_context_pad = 58 [default = 0];\n\n  // Mode for cropping out a detection window\n  // warp: cropped window is warped to a fixed size and aspect ratio\n  // square: the tightest square around the window is cropped\n  optional string det_crop_mode = 59 [default = \"warp\"];\n\n  // For ReshapeLayer, one needs to specify the new dimensions.\n  optional int32 new_num = 60 [default = 0];\n  optional int32 new_channels = 61 [default = 0];\n  optional int32 new_height = 62 [default = 0];\n  optional int32 new_width = 63 [default = 0];\n\n  // Whether or not ImageLayer should shuffle the list of files at every epoch.\n  // It will also resize images if new_height or new_width are not zero.\n  optional bool shuffle_images = 64 [default = false];\n\n  // For ConcatLayer, one needs to specify the dimension for concatenation, and\n  // the other dimensions must be the same for all the bottom blobs.\n  // By default it will concatenate blobs along the channels dimension.\n  optional uint32 concat_dim = 65 [default = 1];\n\n  optional HDF5OutputParameter hdf5_output_param = 1001;\n}\n\nmessage PReLUParameter {\n  // Parametric ReLU described in K. He et al, Delving Deep into Rectifiers:\n  // Surpassing Human-Level Performance on ImageNet Classification, 2015.\n\n  // Initial value of a_i. 
Default is a_i=0.25 for all i.\n  optional FillerParameter filler = 1;\n  // Whether or not slope parameters are shared across channels.\n  optional bool channel_shared = 2 [default = false];\n}"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/caffe/proto/caffe_pb2.py",
    "content": "# -*- coding: utf-8 -*-\n# Generated by the protocol buffer compiler.  DO NOT EDIT!\n# source: caffe.proto\n\nfrom google.protobuf.internal import enum_type_wrapper\nfrom google.protobuf import descriptor as _descriptor\nfrom google.protobuf import message as _message\nfrom google.protobuf import reflection as _reflection\nfrom google.protobuf import symbol_database as _symbol_database\n# @@protoc_insertion_point(imports)\n\n_sym_db = _symbol_database.Default()\n\n\n\n\nDESCRIPTOR = _descriptor.FileDescriptor(\n  name='caffe.proto',\n  package='caffe',\n  syntax='proto2',\n  serialized_options=None,\n  create_key=_descriptor._internal_create_key,\n  serialized_pb=b'\\n\\x0b\\x63\\x61\\x66\\x66\\x65.proto\\x12\\x05\\x63\\x61\\x66\\x66\\x65\\\"\\x1c\\n\\tBlobShape\\x12\\x0f\\n\\x03\\x64im\\x18\\x01 \\x03(\\x03\\x42\\x02\\x10\\x01\\\"\\xcc\\x01\\n\\tBlobProto\\x12\\x1f\\n\\x05shape\\x18\\x07 \\x01(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x10\\n\\x04\\x64\\x61ta\\x18\\x05 \\x03(\\x02\\x42\\x02\\x10\\x01\\x12\\x10\\n\\x04\\x64iff\\x18\\x06 \\x03(\\x02\\x42\\x02\\x10\\x01\\x12\\x17\\n\\x0b\\x64ouble_data\\x18\\x08 \\x03(\\x01\\x42\\x02\\x10\\x01\\x12\\x17\\n\\x0b\\x64ouble_diff\\x18\\t \\x03(\\x01\\x42\\x02\\x10\\x01\\x12\\x0e\\n\\x03num\\x18\\x01 \\x01(\\x05:\\x01\\x30\\x12\\x13\\n\\x08\\x63hannels\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\x11\\n\\x06height\\x18\\x03 \\x01(\\x05:\\x01\\x30\\x12\\x10\\n\\x05width\\x18\\x04 \\x01(\\x05:\\x01\\x30\\\"2\\n\\x0f\\x42lobProtoVector\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x01 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\\"\\x81\\x01\\n\\x05\\x44\\x61tum\\x12\\x10\\n\\x08\\x63hannels\\x18\\x01 \\x01(\\x05\\x12\\x0e\\n\\x06height\\x18\\x02 \\x01(\\x05\\x12\\r\\n\\x05width\\x18\\x03 \\x01(\\x05\\x12\\x0c\\n\\x04\\x64\\x61ta\\x18\\x04 \\x01(\\x0c\\x12\\r\\n\\x05label\\x18\\x05 \\x01(\\x05\\x12\\x12\\n\\nfloat_data\\x18\\x06 \\x03(\\x02\\x12\\x16\\n\\x07\\x65ncoded\\x18\\x07 
\\x01(\\x08:\\x05\\x66\\x61lse\\\"\\x8a\\x02\\n\\x0f\\x46illerParameter\\x12\\x16\\n\\x04type\\x18\\x01 \\x01(\\t:\\x08\\x63onstant\\x12\\x10\\n\\x05value\\x18\\x02 \\x01(\\x02:\\x01\\x30\\x12\\x0e\\n\\x03min\\x18\\x03 \\x01(\\x02:\\x01\\x30\\x12\\x0e\\n\\x03max\\x18\\x04 \\x01(\\x02:\\x01\\x31\\x12\\x0f\\n\\x04mean\\x18\\x05 \\x01(\\x02:\\x01\\x30\\x12\\x0e\\n\\x03std\\x18\\x06 \\x01(\\x02:\\x01\\x31\\x12\\x12\\n\\x06sparse\\x18\\x07 \\x01(\\x05:\\x02-1\\x12\\x42\\n\\rvariance_norm\\x18\\x08 \\x01(\\x0e\\x32#.caffe.FillerParameter.VarianceNorm:\\x06\\x46\\x41N_IN\\\"4\\n\\x0cVarianceNorm\\x12\\n\\n\\x06\\x46\\x41N_IN\\x10\\x00\\x12\\x0b\\n\\x07\\x46\\x41N_OUT\\x10\\x01\\x12\\x0b\\n\\x07\\x41VERAGE\\x10\\x02\\\"\\x8e\\x02\\n\\x0cNetParameter\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\r\\n\\x05input\\x18\\x03 \\x03(\\t\\x12%\\n\\x0binput_shape\\x18\\x08 \\x03(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x11\\n\\tinput_dim\\x18\\x04 \\x03(\\x05\\x12\\x1d\\n\\x0e\\x66orce_backward\\x18\\x05 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x1e\\n\\x05state\\x18\\x06 \\x01(\\x0b\\x32\\x0f.caffe.NetState\\x12\\x19\\n\\ndebug_info\\x18\\x07 \\x01(\\x08:\\x05\\x66\\x61lse\\x12$\\n\\x05layer\\x18\\x64 \\x03(\\x0b\\x32\\x15.caffe.LayerParameter\\x12\\'\\n\\x06layers\\x18\\x02 \\x03(\\x0b\\x32\\x17.caffe.V1LayerParameter\\\"\\xd4\\n\\n\\x0fSolverParameter\\x12\\x0b\\n\\x03net\\x18\\x18 \\x01(\\t\\x12&\\n\\tnet_param\\x18\\x19 \\x01(\\x0b\\x32\\x13.caffe.NetParameter\\x12\\x11\\n\\ttrain_net\\x18\\x01 \\x01(\\t\\x12\\x10\\n\\x08test_net\\x18\\x02 \\x03(\\t\\x12,\\n\\x0ftrain_net_param\\x18\\x15 \\x01(\\x0b\\x32\\x13.caffe.NetParameter\\x12+\\n\\x0etest_net_param\\x18\\x16 \\x03(\\x0b\\x32\\x13.caffe.NetParameter\\x12$\\n\\x0btrain_state\\x18\\x1a \\x01(\\x0b\\x32\\x0f.caffe.NetState\\x12#\\n\\ntest_state\\x18\\x1b \\x03(\\x0b\\x32\\x0f.caffe.NetState\\x12\\x11\\n\\ttest_iter\\x18\\x03 \\x03(\\x05\\x12\\x18\\n\\rtest_interval\\x18\\x04 \\x01(\\x05:\\x01\\x30\\x12 
\\n\\x11test_compute_loss\\x18\\x13 \\x01(\\x08:\\x05\\x66\\x61lse\\x12!\\n\\x13test_initialization\\x18  \\x01(\\x08:\\x04true\\x12\\x0f\\n\\x07\\x62\\x61se_lr\\x18\\x05 \\x01(\\x02\\x12\\x0f\\n\\x07\\x64isplay\\x18\\x06 \\x01(\\x05\\x12\\x17\\n\\x0c\\x61verage_loss\\x18! \\x01(\\x05:\\x01\\x31\\x12\\x10\\n\\x08max_iter\\x18\\x07 \\x01(\\x05\\x12\\x14\\n\\titer_size\\x18$ \\x01(\\x05:\\x01\\x31\\x12\\x11\\n\\tlr_policy\\x18\\x08 \\x01(\\t\\x12\\r\\n\\x05gamma\\x18\\t \\x01(\\x02\\x12\\r\\n\\x05power\\x18\\n \\x01(\\x02\\x12\\x10\\n\\x08momentum\\x18\\x0b \\x01(\\x02\\x12\\x14\\n\\x0cweight_decay\\x18\\x0c \\x01(\\x02\\x12\\x1f\\n\\x13regularization_type\\x18\\x1d \\x01(\\t:\\x02L2\\x12\\x10\\n\\x08stepsize\\x18\\r \\x01(\\x05\\x12\\x11\\n\\tstepvalue\\x18\\\" \\x03(\\x05\\x12\\x1a\\n\\x0e\\x63lip_gradients\\x18# \\x01(\\x02:\\x02-1\\x12\\x13\\n\\x08snapshot\\x18\\x0e \\x01(\\x05:\\x01\\x30\\x12\\x17\\n\\x0fsnapshot_prefix\\x18\\x0f \\x01(\\t\\x12\\x1c\\n\\rsnapshot_diff\\x18\\x10 \\x01(\\x08:\\x05\\x66\\x61lse\\x12K\\n\\x0fsnapshot_format\\x18% \\x01(\\x0e\\x32%.caffe.SolverParameter.SnapshotFormat:\\x0b\\x42INARYPROTO\\x12;\\n\\x0bsolver_mode\\x18\\x11 \\x01(\\x0e\\x32!.caffe.SolverParameter.SolverMode:\\x03GPU\\x12\\x14\\n\\tdevice_id\\x18\\x12 \\x01(\\x05:\\x01\\x30\\x12\\x17\\n\\x0brandom_seed\\x18\\x14 \\x01(\\x03:\\x02-1\\x12\\x11\\n\\x04type\\x18( \\x01(\\t:\\x03SGD\\x12\\x14\\n\\x05\\x64\\x65lta\\x18\\x1f \\x01(\\x02:\\x05\\x31\\x65-08\\x12\\x18\\n\\tmomentum2\\x18\\' \\x01(\\x02:\\x05\\x30.999\\x12\\x17\\n\\trms_decay\\x18& \\x01(\\x02:\\x04\\x30.99\\x12\\x19\\n\\ndebug_info\\x18\\x17 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\\"\\n\\x14snapshot_after_train\\x18\\x1c \\x01(\\x08:\\x04true\\x12;\\n\\x0bsolver_type\\x18\\x1e \\x01(\\x0e\\x32!.caffe.SolverParameter.SolverType:\\x03SGD\\x12\\x1f\\n\\x11layer_wise_reduce\\x18) \\x01(\\x08:\\x04true\\x12\\x0f\\n\\x07weights\\x18* 
\\x03(\\t\\\"+\\n\\x0eSnapshotFormat\\x12\\x08\\n\\x04HDF5\\x10\\x00\\x12\\x0f\\n\\x0b\\x42INARYPROTO\\x10\\x01\\\"\\x1e\\n\\nSolverMode\\x12\\x07\\n\\x03\\x43PU\\x10\\x00\\x12\\x07\\n\\x03GPU\\x10\\x01\\\"U\\n\\nSolverType\\x12\\x07\\n\\x03SGD\\x10\\x00\\x12\\x0c\\n\\x08NESTEROV\\x10\\x01\\x12\\x0b\\n\\x07\\x41\\x44\\x41GRAD\\x10\\x02\\x12\\x0b\\n\\x07RMSPROP\\x10\\x03\\x12\\x0c\\n\\x08\\x41\\x44\\x41\\x44\\x45LTA\\x10\\x04\\x12\\x08\\n\\x04\\x41\\x44\\x41M\\x10\\x05\\\"l\\n\\x0bSolverState\\x12\\x0c\\n\\x04iter\\x18\\x01 \\x01(\\x05\\x12\\x13\\n\\x0blearned_net\\x18\\x02 \\x01(\\t\\x12!\\n\\x07history\\x18\\x03 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x17\\n\\x0c\\x63urrent_step\\x18\\x04 \\x01(\\x05:\\x01\\x30\\\"N\\n\\x08NetState\\x12!\\n\\x05phase\\x18\\x01 \\x01(\\x0e\\x32\\x0c.caffe.Phase:\\x04TEST\\x12\\x10\\n\\x05level\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\r\\n\\x05stage\\x18\\x03 \\x03(\\t\\\"s\\n\\x0cNetStateRule\\x12\\x1b\\n\\x05phase\\x18\\x01 \\x01(\\x0e\\x32\\x0c.caffe.Phase\\x12\\x11\\n\\tmin_level\\x18\\x02 \\x01(\\x05\\x12\\x11\\n\\tmax_level\\x18\\x03 \\x01(\\x05\\x12\\r\\n\\x05stage\\x18\\x04 \\x03(\\t\\x12\\x11\\n\\tnot_stage\\x18\\x05 \\x03(\\t\\\"\\xa3\\x01\\n\\tParamSpec\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\x31\\n\\nshare_mode\\x18\\x02 \\x01(\\x0e\\x32\\x1d.caffe.ParamSpec.DimCheckMode\\x12\\x12\\n\\x07lr_mult\\x18\\x03 \\x01(\\x02:\\x01\\x31\\x12\\x15\\n\\ndecay_mult\\x18\\x04 \\x01(\\x02:\\x01\\x31\\\"*\\n\\x0c\\x44imCheckMode\\x12\\n\\n\\x06STRICT\\x10\\x00\\x12\\x0e\\n\\nPERMISSIVE\\x10\\x01\\\"\\xda\\x14\\n\\x0eLayerParameter\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\x0c\\n\\x04type\\x18\\x02 \\x01(\\t\\x12\\x0e\\n\\x06\\x62ottom\\x18\\x03 \\x03(\\t\\x12\\x0b\\n\\x03top\\x18\\x04 \\x03(\\t\\x12\\x1b\\n\\x05phase\\x18\\n \\x01(\\x0e\\x32\\x0c.caffe.Phase\\x12\\x13\\n\\x0bloss_weight\\x18\\x05 \\x03(\\x02\\x12\\x1f\\n\\x05param\\x18\\x06 \\x03(\\x0b\\x32\\x10.caffe.ParamSpec\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x07 
\\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x16\\n\\x0epropagate_down\\x18\\x0b \\x03(\\x08\\x12$\\n\\x07include\\x18\\x08 \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12$\\n\\x07\\x65xclude\\x18\\t \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12\\x37\\n\\x0ftransform_param\\x18\\x64 \\x01(\\x0b\\x32\\x1e.caffe.TransformationParameter\\x12(\\n\\nloss_param\\x18\\x65 \\x01(\\x0b\\x32\\x14.caffe.LossParameter\\x12\\x30\\n\\x0e\\x61\\x63\\x63uracy_param\\x18\\x66 \\x01(\\x0b\\x32\\x18.caffe.AccuracyParameter\\x12,\\n\\x0c\\x61rgmax_param\\x18g \\x01(\\x0b\\x32\\x16.caffe.ArgMaxParameter\\x12\\x34\\n\\x10\\x62\\x61tch_norm_param\\x18\\x8b\\x01 \\x01(\\x0b\\x32\\x19.caffe.BatchNormParameter\\x12)\\n\\nbias_param\\x18\\x8d\\x01 \\x01(\\x0b\\x32\\x14.caffe.BiasParameter\\x12)\\n\\nclip_param\\x18\\x94\\x01 \\x01(\\x0b\\x32\\x14.caffe.ClipParameter\\x12,\\n\\x0c\\x63oncat_param\\x18h \\x01(\\x0b\\x32\\x16.caffe.ConcatParameter\\x12?\\n\\x16\\x63ontrastive_loss_param\\x18i \\x01(\\x0b\\x32\\x1f.caffe.ContrastiveLossParameter\\x12\\x36\\n\\x11\\x63onvolution_param\\x18j \\x01(\\x0b\\x32\\x1b.caffe.ConvolutionParameter\\x12)\\n\\ncrop_param\\x18\\x90\\x01 \\x01(\\x0b\\x32\\x14.caffe.CropParameter\\x12(\\n\\ndata_param\\x18k \\x01(\\x0b\\x32\\x14.caffe.DataParameter\\x12.\\n\\rdropout_param\\x18l \\x01(\\x0b\\x32\\x17.caffe.DropoutParameter\\x12\\x33\\n\\x10\\x64ummy_data_param\\x18m \\x01(\\x0b\\x32\\x19.caffe.DummyDataParameter\\x12.\\n\\reltwise_param\\x18n \\x01(\\x0b\\x32\\x17.caffe.EltwiseParameter\\x12\\'\\n\\telu_param\\x18\\x8c\\x01 \\x01(\\x0b\\x32\\x13.caffe.ELUParameter\\x12+\\n\\x0b\\x65mbed_param\\x18\\x89\\x01 \\x01(\\x0b\\x32\\x15.caffe.EmbedParameter\\x12&\\n\\texp_param\\x18o \\x01(\\x0b\\x32\\x13.caffe.ExpParameter\\x12/\\n\\rflatten_param\\x18\\x87\\x01 \\x01(\\x0b\\x32\\x17.caffe.FlattenParameter\\x12\\x31\\n\\x0fhdf5_data_param\\x18p \\x01(\\x0b\\x32\\x18.caffe.HDF5DataParameter\\x12\\x35\\n\\x11hdf5_output_param\\x18q 
\\x01(\\x0b\\x32\\x1a.caffe.HDF5OutputParameter\\x12\\x33\\n\\x10hinge_loss_param\\x18r \\x01(\\x0b\\x32\\x19.caffe.HingeLossParameter\\x12\\x33\\n\\x10image_data_param\\x18s \\x01(\\x0b\\x32\\x19.caffe.ImageDataParameter\\x12\\x39\\n\\x13infogain_loss_param\\x18t \\x01(\\x0b\\x32\\x1c.caffe.InfogainLossParameter\\x12\\x39\\n\\x13inner_product_param\\x18u \\x01(\\x0b\\x32\\x1c.caffe.InnerProductParameter\\x12+\\n\\x0binput_param\\x18\\x8f\\x01 \\x01(\\x0b\\x32\\x15.caffe.InputParameter\\x12\\'\\n\\tlog_param\\x18\\x86\\x01 \\x01(\\x0b\\x32\\x13.caffe.LogParameter\\x12&\\n\\tlrn_param\\x18v \\x01(\\x0b\\x32\\x13.caffe.LRNParameter\\x12\\x35\\n\\x11memory_data_param\\x18w \\x01(\\x0b\\x32\\x1a.caffe.MemoryDataParameter\\x12&\\n\\tmvn_param\\x18x \\x01(\\x0b\\x32\\x13.caffe.MVNParameter\\x12\\x33\\n\\x0fparameter_param\\x18\\x91\\x01 \\x01(\\x0b\\x32\\x19.caffe.ParameterParameter\\x12.\\n\\rpooling_param\\x18y \\x01(\\x0b\\x32\\x17.caffe.PoolingParameter\\x12*\\n\\x0bpower_param\\x18z \\x01(\\x0b\\x32\\x15.caffe.PowerParameter\\x12+\\n\\x0bprelu_param\\x18\\x83\\x01 \\x01(\\x0b\\x32\\x15.caffe.PReLUParameter\\x12-\\n\\x0cpython_param\\x18\\x82\\x01 \\x01(\\x0b\\x32\\x16.caffe.PythonParameter\\x12\\x33\\n\\x0frecurrent_param\\x18\\x92\\x01 \\x01(\\x0b\\x32\\x19.caffe.RecurrentParameter\\x12\\x33\\n\\x0freduction_param\\x18\\x88\\x01 \\x01(\\x0b\\x32\\x19.caffe.ReductionParameter\\x12(\\n\\nrelu_param\\x18{ \\x01(\\x0b\\x32\\x14.caffe.ReLUParameter\\x12/\\n\\rreshape_param\\x18\\x85\\x01 \\x01(\\x0b\\x32\\x17.caffe.ReshapeParameter\\x12+\\n\\x0bscale_param\\x18\\x8e\\x01 \\x01(\\x0b\\x32\\x15.caffe.ScaleParameter\\x12.\\n\\rsigmoid_param\\x18| \\x01(\\x0b\\x32\\x17.caffe.SigmoidParameter\\x12.\\n\\rsoftmax_param\\x18} \\x01(\\x0b\\x32\\x17.caffe.SoftmaxParameter\\x12\\'\\n\\tspp_param\\x18\\x84\\x01 \\x01(\\x0b\\x32\\x13.caffe.SPPParameter\\x12*\\n\\x0bslice_param\\x18~ \\x01(\\x0b\\x32\\x15.caffe.SliceParameter\\x12+\\n\\x0bswish_param\\x18\\x93\\x01 
\\x01(\\x0b\\x32\\x15.caffe.SwishParameter\\x12(\\n\\ntanh_param\\x18\\x7f \\x01(\\x0b\\x32\\x14.caffe.TanHParameter\\x12\\x33\\n\\x0fthreshold_param\\x18\\x80\\x01 \\x01(\\x0b\\x32\\x19.caffe.ThresholdParameter\\x12)\\n\\ntile_param\\x18\\x8a\\x01 \\x01(\\x0b\\x32\\x14.caffe.TileParameter\\x12\\x36\\n\\x11window_data_param\\x18\\x81\\x01 \\x01(\\x0b\\x32\\x1a.caffe.WindowDataParameter\\\"\\xb6\\x01\\n\\x17TransformationParameter\\x12\\x10\\n\\x05scale\\x18\\x01 \\x01(\\x02:\\x01\\x31\\x12\\x15\\n\\x06mirror\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x14\\n\\tcrop_size\\x18\\x03 \\x01(\\r:\\x01\\x30\\x12\\x11\\n\\tmean_file\\x18\\x04 \\x01(\\t\\x12\\x12\\n\\nmean_value\\x18\\x05 \\x03(\\x02\\x12\\x1a\\n\\x0b\\x66orce_color\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x19\\n\\nforce_gray\\x18\\x07 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xc2\\x01\\n\\rLossParameter\\x12\\x14\\n\\x0cignore_label\\x18\\x01 \\x01(\\x05\\x12\\x44\\n\\rnormalization\\x18\\x03 \\x01(\\x0e\\x32&.caffe.LossParameter.NormalizationMode:\\x05VALID\\x12\\x11\\n\\tnormalize\\x18\\x02 \\x01(\\x08\\\"B\\n\\x11NormalizationMode\\x12\\x08\\n\\x04\\x46ULL\\x10\\x00\\x12\\t\\n\\x05VALID\\x10\\x01\\x12\\x0e\\n\\nBATCH_SIZE\\x10\\x02\\x12\\x08\\n\\x04NONE\\x10\\x03\\\"L\\n\\x11\\x41\\x63\\x63uracyParameter\\x12\\x10\\n\\x05top_k\\x18\\x01 \\x01(\\r:\\x01\\x31\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12\\x14\\n\\x0cignore_label\\x18\\x03 \\x01(\\x05\\\"M\\n\\x0f\\x41rgMaxParameter\\x12\\x1a\\n\\x0bout_max_val\\x18\\x01 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x10\\n\\x05top_k\\x18\\x02 \\x01(\\r:\\x01\\x31\\x12\\x0c\\n\\x04\\x61xis\\x18\\x03 \\x01(\\x05\\\")\\n\\rClipParameter\\x12\\x0b\\n\\x03min\\x18\\x01 \\x02(\\x02\\x12\\x0b\\n\\x03max\\x18\\x02 \\x02(\\x02\\\"9\\n\\x0f\\x43oncatParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12\\x15\\n\\nconcat_dim\\x18\\x01 \\x01(\\r:\\x01\\x31\\\"j\\n\\x12\\x42\\x61tchNormParameter\\x12\\x18\\n\\x10use_global_stats\\x18\\x01 
\\x01(\\x08\\x12&\\n\\x17moving_average_fraction\\x18\\x02 \\x01(\\x02:\\x05\\x30.999\\x12\\x12\\n\\x03\\x65ps\\x18\\x03 \\x01(\\x02:\\x05\\x31\\x65-05\\\"]\\n\\rBiasParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\x13\\n\\x08num_axes\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12&\\n\\x06\\x66iller\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\\"L\\n\\x18\\x43ontrastiveLossParameter\\x12\\x11\\n\\x06margin\\x18\\x01 \\x01(\\x02:\\x01\\x31\\x12\\x1d\\n\\x0elegacy_version\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xfc\\x03\\n\\x14\\x43onvolutionParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x02 \\x01(\\x08:\\x04true\\x12\\x0b\\n\\x03pad\\x18\\x03 \\x03(\\r\\x12\\x13\\n\\x0bkernel_size\\x18\\x04 \\x03(\\r\\x12\\x0e\\n\\x06stride\\x18\\x06 \\x03(\\r\\x12\\x10\\n\\x08\\x64ilation\\x18\\x12 \\x03(\\r\\x12\\x10\\n\\x05pad_h\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_w\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x08kernel_h\\x18\\x0b \\x01(\\r\\x12\\x10\\n\\x08kernel_w\\x18\\x0c \\x01(\\r\\x12\\x10\\n\\x08stride_h\\x18\\r \\x01(\\r\\x12\\x10\\n\\x08stride_w\\x18\\x0e \\x01(\\r\\x12\\x10\\n\\x05group\\x18\\x05 \\x01(\\r:\\x01\\x31\\x12-\\n\\rweight_filler\\x18\\x07 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x08 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12;\\n\\x06\\x65ngine\\x18\\x0f \\x01(\\x0e\\x32\\\".caffe.ConvolutionParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x0f\\n\\x04\\x61xis\\x18\\x10 \\x01(\\x05:\\x01\\x31\\x12\\x1e\\n\\x0f\\x66orce_nd_im2col\\x18\\x11 \\x01(\\x08:\\x05\\x66\\x61lse\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"0\\n\\rCropParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x32\\x12\\x0e\\n\\x06offset\\x18\\x02 \\x03(\\r\\\"\\xa4\\x02\\n\\rDataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 
\\x01(\\t\\x12\\x12\\n\\nbatch_size\\x18\\x04 \\x01(\\r\\x12\\x14\\n\\trand_skip\\x18\\x07 \\x01(\\r:\\x01\\x30\\x12\\x31\\n\\x07\\x62\\x61\\x63kend\\x18\\x08 \\x01(\\x0e\\x32\\x17.caffe.DataParameter.DB:\\x07LEVELDB\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x11\\n\\tmean_file\\x18\\x03 \\x01(\\t\\x12\\x14\\n\\tcrop_size\\x18\\x05 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\\"\\n\\x13\\x66orce_encoded_color\\x18\\t \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x13\\n\\x08prefetch\\x18\\n \\x01(\\r:\\x01\\x34\\\"\\x1b\\n\\x02\\x44\\x42\\x12\\x0b\\n\\x07LEVELDB\\x10\\x00\\x12\\x08\\n\\x04LMDB\\x10\\x01\\\".\\n\\x10\\x44ropoutParameter\\x12\\x1a\\n\\rdropout_ratio\\x18\\x01 \\x01(\\x02:\\x03\\x30.5\\\"\\xa0\\x01\\n\\x12\\x44ummyDataParameter\\x12+\\n\\x0b\\x64\\x61ta_filler\\x18\\x01 \\x03(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x1f\\n\\x05shape\\x18\\x06 \\x03(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x0b\\n\\x03num\\x18\\x02 \\x03(\\r\\x12\\x10\\n\\x08\\x63hannels\\x18\\x03 \\x03(\\r\\x12\\x0e\\n\\x06height\\x18\\x04 \\x03(\\r\\x12\\r\\n\\x05width\\x18\\x05 \\x03(\\r\\\"\\xa5\\x01\\n\\x10\\x45ltwiseParameter\\x12\\x39\\n\\toperation\\x18\\x01 \\x01(\\x0e\\x32!.caffe.EltwiseParameter.EltwiseOp:\\x03SUM\\x12\\r\\n\\x05\\x63oeff\\x18\\x02 \\x03(\\x02\\x12\\x1e\\n\\x10stable_prod_grad\\x18\\x03 \\x01(\\x08:\\x04true\\\"\\'\\n\\tEltwiseOp\\x12\\x08\\n\\x04PROD\\x10\\x00\\x12\\x07\\n\\x03SUM\\x10\\x01\\x12\\x07\\n\\x03MAX\\x10\\x02\\\" \\n\\x0c\\x45LUParameter\\x12\\x10\\n\\x05\\x61lpha\\x18\\x01 \\x01(\\x02:\\x01\\x31\\\"\\xac\\x01\\n\\x0e\\x45mbedParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x11\\n\\tinput_dim\\x18\\x02 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x03 \\x01(\\x08:\\x04true\\x12-\\n\\rweight_filler\\x18\\x04 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x05 
\\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\\"D\\n\\x0c\\x45xpParameter\\x12\\x10\\n\\x04\\x62\\x61se\\x18\\x01 \\x01(\\x02:\\x02-1\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05shift\\x18\\x03 \\x01(\\x02:\\x01\\x30\\\"9\\n\\x10\\x46lattenParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\x14\\n\\x08\\x65nd_axis\\x18\\x02 \\x01(\\x05:\\x02-1\\\"O\\n\\x11HDF5DataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x12\\n\\nbatch_size\\x18\\x02 \\x01(\\r\\x12\\x16\\n\\x07shuffle\\x18\\x03 \\x01(\\x08:\\x05\\x66\\x61lse\\\"(\\n\\x13HDF5OutputParameter\\x12\\x11\\n\\tfile_name\\x18\\x01 \\x01(\\t\\\"^\\n\\x12HingeLossParameter\\x12\\x30\\n\\x04norm\\x18\\x01 \\x01(\\x0e\\x32\\x1e.caffe.HingeLossParameter.Norm:\\x02L1\\\"\\x16\\n\\x04Norm\\x12\\x06\\n\\x02L1\\x10\\x01\\x12\\x06\\n\\x02L2\\x10\\x02\\\"\\x97\\x02\\n\\x12ImageDataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x15\\n\\nbatch_size\\x18\\x04 \\x01(\\r:\\x01\\x31\\x12\\x14\\n\\trand_skip\\x18\\x07 \\x01(\\r:\\x01\\x30\\x12\\x16\\n\\x07shuffle\\x18\\x08 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\nnew_height\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x14\\n\\tnew_width\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x16\\n\\x08is_color\\x18\\x0b \\x01(\\x08:\\x04true\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x11\\n\\tmean_file\\x18\\x03 \\x01(\\t\\x12\\x14\\n\\tcrop_size\\x18\\x05 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\x0broot_folder\\x18\\x0c \\x01(\\t:\\x00\\\"8\\n\\x15InfogainLossParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x31\\\"\\xcb\\x01\\n\\x15InnerProductParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x02 \\x01(\\x08:\\x04true\\x12-\\n\\rweight_filler\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x04 
\\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x05 \\x01(\\x05:\\x01\\x31\\x12\\x18\\n\\ttranspose\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\\"1\\n\\x0eInputParameter\\x12\\x1f\\n\\x05shape\\x18\\x01 \\x03(\\x0b\\x32\\x10.caffe.BlobShape\\\"D\\n\\x0cLogParameter\\x12\\x10\\n\\x04\\x62\\x61se\\x18\\x01 \\x01(\\x02:\\x02-1\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05shift\\x18\\x03 \\x01(\\x02:\\x01\\x30\\\"\\xb8\\x02\\n\\x0cLRNParameter\\x12\\x15\\n\\nlocal_size\\x18\\x01 \\x01(\\r:\\x01\\x35\\x12\\x10\\n\\x05\\x61lpha\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x12\\n\\x04\\x62\\x65ta\\x18\\x03 \\x01(\\x02:\\x04\\x30.75\\x12\\x44\\n\\x0bnorm_region\\x18\\x04 \\x01(\\x0e\\x32\\x1e.caffe.LRNParameter.NormRegion:\\x0f\\x41\\x43ROSS_CHANNELS\\x12\\x0c\\n\\x01k\\x18\\x05 \\x01(\\x02:\\x01\\x31\\x12\\x33\\n\\x06\\x65ngine\\x18\\x06 \\x01(\\x0e\\x32\\x1a.caffe.LRNParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"5\\n\\nNormRegion\\x12\\x13\\n\\x0f\\x41\\x43ROSS_CHANNELS\\x10\\x00\\x12\\x12\\n\\x0eWITHIN_CHANNEL\\x10\\x01\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"Z\\n\\x13MemoryDataParameter\\x12\\x12\\n\\nbatch_size\\x18\\x01 \\x01(\\r\\x12\\x10\\n\\x08\\x63hannels\\x18\\x02 \\x01(\\r\\x12\\x0e\\n\\x06height\\x18\\x03 \\x01(\\r\\x12\\r\\n\\x05width\\x18\\x04 \\x01(\\r\\\"d\\n\\x0cMVNParameter\\x12 \\n\\x12normalize_variance\\x18\\x01 \\x01(\\x08:\\x04true\\x12\\x1e\\n\\x0f\\x61\\x63ross_channels\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x12\\n\\x03\\x65ps\\x18\\x03 \\x01(\\x02:\\x05\\x31\\x65-09\\\"5\\n\\x12ParameterParameter\\x12\\x1f\\n\\x05shape\\x18\\x01 \\x01(\\x0b\\x32\\x10.caffe.BlobShape\\\"\\x81\\x04\\n\\x10PoolingParameter\\x12\\x35\\n\\x04pool\\x18\\x01 \\x01(\\x0e\\x32\\\".caffe.PoolingParameter.PoolMethod:\\x03MAX\\x12\\x0e\\n\\x03pad\\x18\\x04 
\\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_h\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_w\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x13\\n\\x0bkernel_size\\x18\\x02 \\x01(\\r\\x12\\x10\\n\\x08kernel_h\\x18\\x05 \\x01(\\r\\x12\\x10\\n\\x08kernel_w\\x18\\x06 \\x01(\\r\\x12\\x11\\n\\x06stride\\x18\\x03 \\x01(\\r:\\x01\\x31\\x12\\x10\\n\\x08stride_h\\x18\\x07 \\x01(\\r\\x12\\x10\\n\\x08stride_w\\x18\\x08 \\x01(\\r\\x12\\x37\\n\\x06\\x65ngine\\x18\\x0b \\x01(\\x0e\\x32\\x1e.caffe.PoolingParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x1d\\n\\x0eglobal_pooling\\x18\\x0c \\x01(\\x08:\\x05\\x66\\x61lse\\x12;\\n\\nround_mode\\x18\\r \\x01(\\x0e\\x32!.caffe.PoolingParameter.RoundMode:\\x04\\x43\\x45IL\\\".\\n\\nPoolMethod\\x12\\x07\\n\\x03MAX\\x10\\x00\\x12\\x07\\n\\x03\\x41VE\\x10\\x01\\x12\\x0e\\n\\nSTOCHASTIC\\x10\\x02\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\" \\n\\tRoundMode\\x12\\x08\\n\\x04\\x43\\x45IL\\x10\\x00\\x12\\t\\n\\x05\\x46LOOR\\x10\\x01\\\"F\\n\\x0ePowerParameter\\x12\\x10\\n\\x05power\\x18\\x01 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05shift\\x18\\x03 \\x01(\\x02:\\x01\\x30\\\"g\\n\\x0fPythonParameter\\x12\\x0e\\n\\x06module\\x18\\x01 \\x01(\\t\\x12\\r\\n\\x05layer\\x18\\x02 \\x01(\\t\\x12\\x13\\n\\tparam_str\\x18\\x03 \\x01(\\t:\\x00\\x12 \\n\\x11share_in_parallel\\x18\\x04 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xc0\\x01\\n\\x12RecurrentParameter\\x12\\x15\\n\\nnum_output\\x18\\x01 \\x01(\\r:\\x01\\x30\\x12-\\n\\rweight_filler\\x18\\x02 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x19\\n\\ndebug_info\\x18\\x04 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x1c\\n\\rexpose_hidden\\x18\\x05 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xad\\x01\\n\\x12ReductionParameter\\x12=\\n\\toperation\\x18\\x01 
\\x01(\\x0e\\x32%.caffe.ReductionParameter.ReductionOp:\\x03SUM\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\x10\\n\\x05\\x63oeff\\x18\\x03 \\x01(\\x02:\\x01\\x31\\\"5\\n\\x0bReductionOp\\x12\\x07\\n\\x03SUM\\x10\\x01\\x12\\x08\\n\\x04\\x41SUM\\x10\\x02\\x12\\t\\n\\x05SUMSQ\\x10\\x03\\x12\\x08\\n\\x04MEAN\\x10\\x04\\\"\\x8d\\x01\\n\\rReLUParameter\\x12\\x19\\n\\x0enegative_slope\\x18\\x01 \\x01(\\x02:\\x01\\x30\\x12\\x34\\n\\x06\\x65ngine\\x18\\x02 \\x01(\\x0e\\x32\\x1b.caffe.ReLUParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"Z\\n\\x10ReshapeParameter\\x12\\x1f\\n\\x05shape\\x18\\x01 \\x01(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\x14\\n\\x08num_axes\\x18\\x03 \\x01(\\x05:\\x02-1\\\"\\xa5\\x01\\n\\x0eScaleParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\x13\\n\\x08num_axes\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12&\\n\\x06\\x66iller\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x18\\n\\tbias_term\\x18\\x04 \\x01(\\x08:\\x05\\x66\\x61lse\\x12+\\n\\x0b\\x62ias_filler\\x18\\x05 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\\"x\\n\\x10SigmoidParameter\\x12\\x37\\n\\x06\\x65ngine\\x18\\x01 \\x01(\\x0e\\x32\\x1e.caffe.SigmoidParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"L\\n\\x0eSliceParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x03 \\x01(\\x05:\\x01\\x31\\x12\\x13\\n\\x0bslice_point\\x18\\x02 \\x03(\\r\\x12\\x14\\n\\tslice_dim\\x18\\x01 \\x01(\\r:\\x01\\x31\\\"\\x89\\x01\\n\\x10SoftmaxParameter\\x12\\x37\\n\\x06\\x65ngine\\x18\\x01 \\x01(\\x0e\\x32\\x1e.caffe.SoftmaxParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 
\\x01(\\x05:\\x01\\x31\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"!\\n\\x0eSwishParameter\\x12\\x0f\\n\\x04\\x62\\x65ta\\x18\\x01 \\x01(\\x02:\\x01\\x31\\\"r\\n\\rTanHParameter\\x12\\x34\\n\\x06\\x65ngine\\x18\\x01 \\x01(\\x0e\\x32\\x1b.caffe.TanHParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"/\\n\\rTileParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\r\\n\\x05tiles\\x18\\x02 \\x01(\\x05\\\"*\\n\\x12ThresholdParameter\\x12\\x14\\n\\tthreshold\\x18\\x01 \\x01(\\x02:\\x01\\x30\\\"\\xc1\\x02\\n\\x13WindowDataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x11\\n\\tmean_file\\x18\\x03 \\x01(\\t\\x12\\x12\\n\\nbatch_size\\x18\\x04 \\x01(\\r\\x12\\x14\\n\\tcrop_size\\x18\\x05 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x19\\n\\x0c\\x66g_threshold\\x18\\x07 \\x01(\\x02:\\x03\\x30.5\\x12\\x19\\n\\x0c\\x62g_threshold\\x18\\x08 \\x01(\\x02:\\x03\\x30.5\\x12\\x19\\n\\x0b\\x66g_fraction\\x18\\t \\x01(\\x02:\\x04\\x30.25\\x12\\x16\\n\\x0b\\x63ontext_pad\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x17\\n\\tcrop_mode\\x18\\x0b \\x01(\\t:\\x04warp\\x12\\x1b\\n\\x0c\\x63\\x61\\x63he_images\\x18\\x0c \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\x0broot_folder\\x18\\r \\x01(\\t:\\x00\\\"\\xeb\\x01\\n\\x0cSPPParameter\\x12\\x16\\n\\x0epyramid_height\\x18\\x01 \\x01(\\r\\x12\\x31\\n\\x04pool\\x18\\x02 \\x01(\\x0e\\x32\\x1e.caffe.SPPParameter.PoolMethod:\\x03MAX\\x12\\x33\\n\\x06\\x65ngine\\x18\\x06 
\\x01(\\x0e\\x32\\x1a.caffe.SPPParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\".\\n\\nPoolMethod\\x12\\x07\\n\\x03MAX\\x10\\x00\\x12\\x07\\n\\x03\\x41VE\\x10\\x01\\x12\\x0e\\n\\nSTOCHASTIC\\x10\\x02\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"\\xe0\\x13\\n\\x10V1LayerParameter\\x12\\x0e\\n\\x06\\x62ottom\\x18\\x02 \\x03(\\t\\x12\\x0b\\n\\x03top\\x18\\x03 \\x03(\\t\\x12\\x0c\\n\\x04name\\x18\\x04 \\x01(\\t\\x12$\\n\\x07include\\x18  \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12$\\n\\x07\\x65xclude\\x18! \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12/\\n\\x04type\\x18\\x05 \\x01(\\x0e\\x32!.caffe.V1LayerParameter.LayerType\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x06 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x0e\\n\\x05param\\x18\\xe9\\x07 \\x03(\\t\\x12>\\n\\x0f\\x62lob_share_mode\\x18\\xea\\x07 \\x03(\\x0e\\x32$.caffe.V1LayerParameter.DimCheckMode\\x12\\x10\\n\\x08\\x62lobs_lr\\x18\\x07 \\x03(\\x02\\x12\\x14\\n\\x0cweight_decay\\x18\\x08 \\x03(\\x02\\x12\\x13\\n\\x0bloss_weight\\x18# \\x03(\\x02\\x12\\x30\\n\\x0e\\x61\\x63\\x63uracy_param\\x18\\x1b \\x01(\\x0b\\x32\\x18.caffe.AccuracyParameter\\x12,\\n\\x0c\\x61rgmax_param\\x18\\x17 \\x01(\\x0b\\x32\\x16.caffe.ArgMaxParameter\\x12,\\n\\x0c\\x63oncat_param\\x18\\t \\x01(\\x0b\\x32\\x16.caffe.ConcatParameter\\x12?\\n\\x16\\x63ontrastive_loss_param\\x18( \\x01(\\x0b\\x32\\x1f.caffe.ContrastiveLossParameter\\x12\\x36\\n\\x11\\x63onvolution_param\\x18\\n \\x01(\\x0b\\x32\\x1b.caffe.ConvolutionParameter\\x12(\\n\\ndata_param\\x18\\x0b \\x01(\\x0b\\x32\\x14.caffe.DataParameter\\x12.\\n\\rdropout_param\\x18\\x0c \\x01(\\x0b\\x32\\x17.caffe.DropoutParameter\\x12\\x33\\n\\x10\\x64ummy_data_param\\x18\\x1a \\x01(\\x0b\\x32\\x19.caffe.DummyDataParameter\\x12.\\n\\reltwise_param\\x18\\x18 \\x01(\\x0b\\x32\\x17.caffe.EltwiseParameter\\x12&\\n\\texp_param\\x18) 
\\x01(\\x0b\\x32\\x13.caffe.ExpParameter\\x12\\x31\\n\\x0fhdf5_data_param\\x18\\r \\x01(\\x0b\\x32\\x18.caffe.HDF5DataParameter\\x12\\x35\\n\\x11hdf5_output_param\\x18\\x0e \\x01(\\x0b\\x32\\x1a.caffe.HDF5OutputParameter\\x12\\x33\\n\\x10hinge_loss_param\\x18\\x1d \\x01(\\x0b\\x32\\x19.caffe.HingeLossParameter\\x12\\x33\\n\\x10image_data_param\\x18\\x0f \\x01(\\x0b\\x32\\x19.caffe.ImageDataParameter\\x12\\x39\\n\\x13infogain_loss_param\\x18\\x10 \\x01(\\x0b\\x32\\x1c.caffe.InfogainLossParameter\\x12\\x39\\n\\x13inner_product_param\\x18\\x11 \\x01(\\x0b\\x32\\x1c.caffe.InnerProductParameter\\x12&\\n\\tlrn_param\\x18\\x12 \\x01(\\x0b\\x32\\x13.caffe.LRNParameter\\x12\\x35\\n\\x11memory_data_param\\x18\\x16 \\x01(\\x0b\\x32\\x1a.caffe.MemoryDataParameter\\x12&\\n\\tmvn_param\\x18\\\" \\x01(\\x0b\\x32\\x13.caffe.MVNParameter\\x12.\\n\\rpooling_param\\x18\\x13 \\x01(\\x0b\\x32\\x17.caffe.PoolingParameter\\x12*\\n\\x0bpower_param\\x18\\x15 \\x01(\\x0b\\x32\\x15.caffe.PowerParameter\\x12(\\n\\nrelu_param\\x18\\x1e \\x01(\\x0b\\x32\\x14.caffe.ReLUParameter\\x12.\\n\\rsigmoid_param\\x18& \\x01(\\x0b\\x32\\x17.caffe.SigmoidParameter\\x12.\\n\\rsoftmax_param\\x18\\' \\x01(\\x0b\\x32\\x17.caffe.SoftmaxParameter\\x12*\\n\\x0bslice_param\\x18\\x1f \\x01(\\x0b\\x32\\x15.caffe.SliceParameter\\x12(\\n\\ntanh_param\\x18% \\x01(\\x0b\\x32\\x14.caffe.TanHParameter\\x12\\x32\\n\\x0fthreshold_param\\x18\\x19 \\x01(\\x0b\\x32\\x19.caffe.ThresholdParameter\\x12\\x35\\n\\x11window_data_param\\x18\\x14 \\x01(\\x0b\\x32\\x1a.caffe.WindowDataParameter\\x12\\x37\\n\\x0ftransform_param\\x18$ \\x01(\\x0b\\x32\\x1e.caffe.TransformationParameter\\x12(\\n\\nloss_param\\x18* \\x01(\\x0b\\x32\\x14.caffe.LossParameter\\x12&\\n\\x05layer\\x18\\x01 
\\x01(\\x0b\\x32\\x17.caffe.V0LayerParameter\\\"\\xd8\\x04\\n\\tLayerType\\x12\\x08\\n\\x04NONE\\x10\\x00\\x12\\n\\n\\x06\\x41\\x42SVAL\\x10#\\x12\\x0c\\n\\x08\\x41\\x43\\x43URACY\\x10\\x01\\x12\\n\\n\\x06\\x41RGMAX\\x10\\x1e\\x12\\x08\\n\\x04\\x42NLL\\x10\\x02\\x12\\n\\n\\x06\\x43ONCAT\\x10\\x03\\x12\\x14\\n\\x10\\x43ONTRASTIVE_LOSS\\x10%\\x12\\x0f\\n\\x0b\\x43ONVOLUTION\\x10\\x04\\x12\\x08\\n\\x04\\x44\\x41TA\\x10\\x05\\x12\\x11\\n\\rDECONVOLUTION\\x10\\'\\x12\\x0b\\n\\x07\\x44ROPOUT\\x10\\x06\\x12\\x0e\\n\\nDUMMY_DATA\\x10 \\x12\\x12\\n\\x0e\\x45UCLIDEAN_LOSS\\x10\\x07\\x12\\x0b\\n\\x07\\x45LTWISE\\x10\\x19\\x12\\x07\\n\\x03\\x45XP\\x10&\\x12\\x0b\\n\\x07\\x46LATTEN\\x10\\x08\\x12\\r\\n\\tHDF5_DATA\\x10\\t\\x12\\x0f\\n\\x0bHDF5_OUTPUT\\x10\\n\\x12\\x0e\\n\\nHINGE_LOSS\\x10\\x1c\\x12\\n\\n\\x06IM2COL\\x10\\x0b\\x12\\x0e\\n\\nIMAGE_DATA\\x10\\x0c\\x12\\x11\\n\\rINFOGAIN_LOSS\\x10\\r\\x12\\x11\\n\\rINNER_PRODUCT\\x10\\x0e\\x12\\x07\\n\\x03LRN\\x10\\x0f\\x12\\x0f\\n\\x0bMEMORY_DATA\\x10\\x1d\\x12\\x1d\\n\\x19MULTINOMIAL_LOGISTIC_LOSS\\x10\\x10\\x12\\x07\\n\\x03MVN\\x10\\\"\\x12\\x0b\\n\\x07POOLING\\x10\\x11\\x12\\t\\n\\x05POWER\\x10\\x1a\\x12\\x08\\n\\x04RELU\\x10\\x12\\x12\\x0b\\n\\x07SIGMOID\\x10\\x13\\x12\\x1e\\n\\x1aSIGMOID_CROSS_ENTROPY_LOSS\\x10\\x1b\\x12\\x0b\\n\\x07SILENCE\\x10$\\x12\\x0b\\n\\x07SOFTMAX\\x10\\x14\\x12\\x10\\n\\x0cSOFTMAX_LOSS\\x10\\x15\\x12\\t\\n\\x05SPLIT\\x10\\x16\\x12\\t\\n\\x05SLICE\\x10!\\x12\\x08\\n\\x04TANH\\x10\\x17\\x12\\x0f\\n\\x0bWINDOW_DATA\\x10\\x18\\x12\\r\\n\\tTHRESHOLD\\x10\\x1f\\\"*\\n\\x0c\\x44imCheckMode\\x12\\n\\n\\x06STRICT\\x10\\x00\\x12\\x0e\\n\\nPERMISSIVE\\x10\\x01\\\"\\xfd\\x07\\n\\x10V0LayerParameter\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\x0c\\n\\x04type\\x18\\x02 \\x01(\\t\\x12\\x12\\n\\nnum_output\\x18\\x03 \\x01(\\r\\x12\\x16\\n\\x08\\x62iasterm\\x18\\x04 \\x01(\\x08:\\x04true\\x12-\\n\\rweight_filler\\x18\\x05 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x06 
\\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x0e\\n\\x03pad\\x18\\x07 \\x01(\\r:\\x01\\x30\\x12\\x12\\n\\nkernelsize\\x18\\x08 \\x01(\\r\\x12\\x10\\n\\x05group\\x18\\t \\x01(\\r:\\x01\\x31\\x12\\x11\\n\\x06stride\\x18\\n \\x01(\\r:\\x01\\x31\\x12\\x35\\n\\x04pool\\x18\\x0b \\x01(\\x0e\\x32\\\".caffe.V0LayerParameter.PoolMethod:\\x03MAX\\x12\\x1a\\n\\rdropout_ratio\\x18\\x0c \\x01(\\x02:\\x03\\x30.5\\x12\\x15\\n\\nlocal_size\\x18\\r \\x01(\\r:\\x01\\x35\\x12\\x10\\n\\x05\\x61lpha\\x18\\x0e \\x01(\\x02:\\x01\\x31\\x12\\x12\\n\\x04\\x62\\x65ta\\x18\\x0f \\x01(\\x02:\\x04\\x30.75\\x12\\x0c\\n\\x01k\\x18\\x16 \\x01(\\x02:\\x01\\x31\\x12\\x0e\\n\\x06source\\x18\\x10 \\x01(\\t\\x12\\x10\\n\\x05scale\\x18\\x11 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x08meanfile\\x18\\x12 \\x01(\\t\\x12\\x11\\n\\tbatchsize\\x18\\x13 \\x01(\\r\\x12\\x13\\n\\x08\\x63ropsize\\x18\\x14 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x15 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x32 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x10\\n\\x08\\x62lobs_lr\\x18\\x33 \\x03(\\x02\\x12\\x14\\n\\x0cweight_decay\\x18\\x34 \\x03(\\x02\\x12\\x14\\n\\trand_skip\\x18\\x35 \\x01(\\r:\\x01\\x30\\x12\\x1d\\n\\x10\\x64\\x65t_fg_threshold\\x18\\x36 \\x01(\\x02:\\x03\\x30.5\\x12\\x1d\\n\\x10\\x64\\x65t_bg_threshold\\x18\\x37 \\x01(\\x02:\\x03\\x30.5\\x12\\x1d\\n\\x0f\\x64\\x65t_fg_fraction\\x18\\x38 \\x01(\\x02:\\x04\\x30.25\\x12\\x1a\\n\\x0f\\x64\\x65t_context_pad\\x18: \\x01(\\r:\\x01\\x30\\x12\\x1b\\n\\rdet_crop_mode\\x18; \\x01(\\t:\\x04warp\\x12\\x12\\n\\x07new_num\\x18< \\x01(\\x05:\\x01\\x30\\x12\\x17\\n\\x0cnew_channels\\x18= \\x01(\\x05:\\x01\\x30\\x12\\x15\\n\\nnew_height\\x18> \\x01(\\x05:\\x01\\x30\\x12\\x14\\n\\tnew_width\\x18? 
\\x01(\\x05:\\x01\\x30\\x12\\x1d\\n\\x0eshuffle_images\\x18@ \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\nconcat_dim\\x18\\x41 \\x01(\\r:\\x01\\x31\\x12\\x36\\n\\x11hdf5_output_param\\x18\\xe9\\x07 \\x01(\\x0b\\x32\\x1a.caffe.HDF5OutputParameter\\\".\\n\\nPoolMethod\\x12\\x07\\n\\x03MAX\\x10\\x00\\x12\\x07\\n\\x03\\x41VE\\x10\\x01\\x12\\x0e\\n\\nSTOCHASTIC\\x10\\x02\\\"W\\n\\x0ePReLUParameter\\x12&\\n\\x06\\x66iller\\x18\\x01 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x1d\\n\\x0e\\x63hannel_shared\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse*\\x1c\\n\\x05Phase\\x12\\t\\n\\x05TRAIN\\x10\\x00\\x12\\x08\\n\\x04TEST\\x10\\x01'\n)\n\n_PHASE = _descriptor.EnumDescriptor(\n  name='Phase',\n  full_name='caffe.Phase',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='TRAIN', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='TEST', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=15681,\n  serialized_end=15709,\n)\n_sym_db.RegisterEnumDescriptor(_PHASE)\n\nPhase = enum_type_wrapper.EnumTypeWrapper(_PHASE)\nTRAIN = 0\nTEST = 1\n\n\n_FILLERPARAMETER_VARIANCENORM = _descriptor.EnumDescriptor(\n  name='VarianceNorm',\n  full_name='caffe.FillerParameter.VarianceNorm',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='FAN_IN', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='FAN_OUT', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      
create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='AVERAGE', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=658,\n  serialized_end=710,\n)\n_sym_db.RegisterEnumDescriptor(_FILLERPARAMETER_VARIANCENORM)\n\n_SOLVERPARAMETER_SNAPSHOTFORMAT = _descriptor.EnumDescriptor(\n  name='SnapshotFormat',\n  full_name='caffe.SolverParameter.SnapshotFormat',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='HDF5', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='BINARYPROTO', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=2188,\n  serialized_end=2231,\n)\n_sym_db.RegisterEnumDescriptor(_SOLVERPARAMETER_SNAPSHOTFORMAT)\n\n_SOLVERPARAMETER_SOLVERMODE = _descriptor.EnumDescriptor(\n  name='SolverMode',\n  full_name='caffe.SolverParameter.SolverMode',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='CPU', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='GPU', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=2233,\n  serialized_end=2263,\n)\n_sym_db.RegisterEnumDescriptor(_SOLVERPARAMETER_SOLVERMODE)\n\n_SOLVERPARAMETER_SOLVERTYPE = 
_descriptor.EnumDescriptor(\n  name='SolverType',\n  full_name='caffe.SolverParameter.SolverType',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='SGD', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='NESTEROV', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ADAGRAD', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='RMSPROP', index=3, number=3,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ADADELTA', index=4, number=4,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ADAM', index=5, number=5,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=2265,\n  serialized_end=2350,\n)\n_sym_db.RegisterEnumDescriptor(_SOLVERPARAMETER_SOLVERTYPE)\n\n_PARAMSPEC_DIMCHECKMODE = _descriptor.EnumDescriptor(\n  name='DimCheckMode',\n  full_name='caffe.ParamSpec.DimCheckMode',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='STRICT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='PERMISSIVE', index=1, number=1,\n      serialized_options=None,\n      
type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=2781,\n  serialized_end=2823,\n)\n_sym_db.RegisterEnumDescriptor(_PARAMSPEC_DIMCHECKMODE)\n\n_LOSSPARAMETER_NORMALIZATIONMODE = _descriptor.EnumDescriptor(\n  name='NormalizationMode',\n  full_name='caffe.LossParameter.NormalizationMode',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='FULL', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='VALID', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='BATCH_SIZE', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='NONE', index=3, number=3,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=5792,\n  serialized_end=5858,\n)\n_sym_db.RegisterEnumDescriptor(_LOSSPARAMETER_NORMALIZATIONMODE)\n\n_CONVOLUTIONPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.ConvolutionParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    
_descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_CONVOLUTIONPARAMETER_ENGINE)\n\n_DATAPARAMETER_DB = _descriptor.EnumDescriptor(\n  name='DB',\n  full_name='caffe.DataParameter.DB',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='LEVELDB', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='LMDB', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=7227,\n  serialized_end=7254,\n)\n_sym_db.RegisterEnumDescriptor(_DATAPARAMETER_DB)\n\n_ELTWISEPARAMETER_ELTWISEOP = _descriptor.EnumDescriptor(\n  name='EltwiseOp',\n  full_name='caffe.EltwiseParameter.EltwiseOp',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='PROD', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SUM', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=7594,\n  
serialized_end=7633,\n)\n_sym_db.RegisterEnumDescriptor(_ELTWISEPARAMETER_ELTWISEOP)\n\n_HINGELOSSPARAMETER_NORM = _descriptor.EnumDescriptor(\n  name='Norm',\n  full_name='caffe.HingeLossParameter.Norm',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='L1', index=0, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='L2', index=1, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=8168,\n  serialized_end=8190,\n)\n_sym_db.RegisterEnumDescriptor(_HINGELOSSPARAMETER_NORM)\n\n_LRNPARAMETER_NORMREGION = _descriptor.EnumDescriptor(\n  name='NormRegion',\n  full_name='caffe.LRNParameter.NormRegion',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='ACROSS_CHANNELS', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='WITHIN_CHANNEL', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=9074,\n  serialized_end=9127,\n)\n_sym_db.RegisterEnumDescriptor(_LRNPARAMETER_NORMREGION)\n\n_LRNPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.LRNParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      
create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_LRNPARAMETER_ENGINE)\n\n_POOLINGPARAMETER_POOLMETHOD = _descriptor.EnumDescriptor(\n  name='PoolMethod',\n  full_name='caffe.PoolingParameter.PoolMethod',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='AVE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='STOCHASTIC', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=9812,\n  serialized_end=9858,\n)\n_sym_db.RegisterEnumDescriptor(_POOLINGPARAMETER_POOLMETHOD)\n\n_POOLINGPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.PoolingParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, 
number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_POOLINGPARAMETER_ENGINE)\n\n_POOLINGPARAMETER_ROUNDMODE = _descriptor.EnumDescriptor(\n  name='RoundMode',\n  full_name='caffe.PoolingParameter.RoundMode',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='CEIL', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='FLOOR', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=9905,\n  serialized_end=9937,\n)\n_sym_db.RegisterEnumDescriptor(_POOLINGPARAMETER_ROUNDMODE)\n\n_REDUCTIONPARAMETER_REDUCTIONOP = _descriptor.EnumDescriptor(\n  name='ReductionOp',\n  full_name='caffe.ReductionParameter.ReductionOp',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='SUM', index=0, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ASUM', index=1, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SUMSQ', index=2, number=3,\n      serialized_options=None,\n      type=None,\n      
create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='MEAN', index=3, number=4,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=10432,\n  serialized_end=10485,\n)\n_sym_db.RegisterEnumDescriptor(_REDUCTIONPARAMETER_REDUCTIONOP)\n\n_RELUPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.ReLUParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_RELUPARAMETER_ENGINE)\n\n_SIGMOIDPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.SigmoidParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      
serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_SIGMOIDPARAMETER_ENGINE)\n\n_SOFTMAXPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.SoftmaxParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_SOFTMAXPARAMETER_ENGINE)\n\n_TANHPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.TanHParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  
serialized_options=None,\n  serialized_start=6866,\n  serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_TANHPARAMETER_ENGINE)\n\n_SPPPARAMETER_POOLMETHOD = _descriptor.EnumDescriptor(\n  name='PoolMethod',\n  full_name='caffe.SPPParameter.PoolMethod',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='AVE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='STOCHASTIC', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=9812,\n  serialized_end=9858,\n)\n_sym_db.RegisterEnumDescriptor(_SPPPARAMETER_POOLMETHOD)\n\n_SPPPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.SPPParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=6866,\n  
serialized_end=6909,\n)\n_sym_db.RegisterEnumDescriptor(_SPPPARAMETER_ENGINE)\n\n_V1LAYERPARAMETER_LAYERTYPE = _descriptor.EnumDescriptor(\n  name='LayerType',\n  full_name='caffe.V1LayerParameter.LayerType',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='NONE', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ABSVAL', index=1, number=35,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ACCURACY', index=2, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ARGMAX', index=3, number=30,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='BNLL', index=4, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CONCAT', index=5, number=3,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CONTRASTIVE_LOSS', index=6, number=37,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='CONVOLUTION', index=7, number=4,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='DATA', index=8, number=5,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n  
    name='DECONVOLUTION', index=9, number=39,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='DROPOUT', index=10, number=6,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='DUMMY_DATA', index=11, number=32,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='EUCLIDEAN_LOSS', index=12, number=7,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='ELTWISE', index=13, number=25,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='EXP', index=14, number=38,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='FLATTEN', index=15, number=8,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='HDF5_DATA', index=16, number=9,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='HDF5_OUTPUT', index=17, number=10,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='HINGE_LOSS', index=18, number=28,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='IM2COL', index=19, number=11,\n      serialized_options=None,\n      type=None,\n      
create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='IMAGE_DATA', index=20, number=12,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='INFOGAIN_LOSS', index=21, number=13,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='INNER_PRODUCT', index=22, number=14,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='LRN', index=23, number=15,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='MEMORY_DATA', index=24, number=29,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='MULTINOMIAL_LOGISTIC_LOSS', index=25, number=16,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='MVN', index=26, number=34,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='POOLING', index=27, number=17,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='POWER', index=28, number=26,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='RELU', index=29, number=18,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SIGMOID', index=30, 
number=19,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SIGMOID_CROSS_ENTROPY_LOSS', index=31, number=27,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SILENCE', index=32, number=36,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SOFTMAX', index=33, number=20,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SOFTMAX_LOSS', index=34, number=21,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SPLIT', index=35, number=22,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='SLICE', index=36, number=33,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='TANH', index=37, number=23,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='WINDOW_DATA', index=38, number=24,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='THRESHOLD', index=39, number=31,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=13922,\n  
serialized_end=14522,\n)\n_sym_db.RegisterEnumDescriptor(_V1LAYERPARAMETER_LAYERTYPE)\n\n_V1LAYERPARAMETER_DIMCHECKMODE = _descriptor.EnumDescriptor(\n  name='DimCheckMode',\n  full_name='caffe.V1LayerParameter.DimCheckMode',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='STRICT', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='PERMISSIVE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=2781,\n  serialized_end=2823,\n)\n_sym_db.RegisterEnumDescriptor(_V1LAYERPARAMETER_DIMCHECKMODE)\n\n_V0LAYERPARAMETER_POOLMETHOD = _descriptor.EnumDescriptor(\n  name='PoolMethod',\n  full_name='caffe.V0LayerParameter.PoolMethod',\n  filename=None,\n  file=DESCRIPTOR,\n  create_key=_descriptor._internal_create_key,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=0, number=0,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='AVE', index=1, number=1,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n    _descriptor.EnumValueDescriptor(\n      name='STOCHASTIC', index=2, number=2,\n      serialized_options=None,\n      type=None,\n      create_key=_descriptor._internal_create_key),\n  ],\n  containing_type=None,\n  serialized_options=None,\n  serialized_start=9812,\n  serialized_end=9858,\n)\n_sym_db.RegisterEnumDescriptor(_V0LAYERPARAMETER_POOLMETHOD)\n\n\n_BLOBSHAPE = _descriptor.Descriptor(\n  name='BlobShape',\n  full_name='caffe.BlobShape',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  
create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='dim', full_name='caffe.BlobShape.dim', index=0,\n      number=1, type=3, cpp_type=2, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=b'\\020\\001', file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=22,\n  serialized_end=50,\n)\n\n\n_BLOBPROTO = _descriptor.Descriptor(\n  name='BlobProto',\n  full_name='caffe.BlobProto',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.BlobProto.shape', index=0,\n      number=7, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='data', full_name='caffe.BlobProto.data', index=1,\n      number=5, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=b'\\020\\001', file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='diff', full_name='caffe.BlobProto.diff', index=2,\n      number=6, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      serialized_options=b'\\020\\001', file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='double_data', full_name='caffe.BlobProto.double_data', index=3,\n      number=8, type=1, cpp_type=5, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=b'\\020\\001', file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='double_diff', full_name='caffe.BlobProto.double_diff', index=4,\n      number=9, type=1, cpp_type=5, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=b'\\020\\001', file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='num', full_name='caffe.BlobProto.num', index=5,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.BlobProto.channels', index=6,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.BlobProto.height', index=7,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n 
     message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.BlobProto.width', index=8,\n      number=4, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=53,\n  serialized_end=257,\n)\n\n\n_BLOBPROTOVECTOR = _descriptor.Descriptor(\n  name='BlobProtoVector',\n  full_name='caffe.BlobProtoVector',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.BlobProtoVector.blobs', index=0,\n      number=1, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=259,\n  serialized_end=309,\n)\n\n\n_DATUM = _descriptor.Descriptor(\n  name='Datum',\n  full_name='caffe.Datum',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='channels', 
full_name='caffe.Datum.channels', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.Datum.height', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.Datum.width', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='data', full_name='caffe.Datum.data', index=3,\n      number=4, type=12, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\",\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='label', full_name='caffe.Datum.label', index=4,\n      number=5, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      
name='float_data', full_name='caffe.Datum.float_data', index=5,\n      number=6, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='encoded', full_name='caffe.Datum.encoded', index=6,\n      number=7, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=312,\n  serialized_end=441,\n)\n\n\n_FILLERPARAMETER = _descriptor.Descriptor(\n  name='FillerParameter',\n  full_name='caffe.FillerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.FillerParameter.type', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"constant\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='value', full_name='caffe.FillerParameter.value', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='min', full_name='caffe.FillerParameter.min', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='max', full_name='caffe.FillerParameter.max', index=3,\n      number=4, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mean', full_name='caffe.FillerParameter.mean', index=4,\n      number=5, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='std', full_name='caffe.FillerParameter.std', index=5,\n      number=6, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='sparse', full_name='caffe.FillerParameter.sparse', index=6,\n      number=7, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, 
enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='variance_norm', full_name='caffe.FillerParameter.variance_norm', index=7,\n      number=8, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _FILLERPARAMETER_VARIANCENORM,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=444,\n  serialized_end=710,\n)\n\n\n_NETPARAMETER = _descriptor.Descriptor(\n  name='NetParameter',\n  full_name='caffe.NetParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.NetParameter.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='input', full_name='caffe.NetParameter.input', index=1,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='input_shape', 
full_name='caffe.NetParameter.input_shape', index=2,\n      number=8, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='input_dim', full_name='caffe.NetParameter.input_dim', index=3,\n      number=4, type=5, cpp_type=1, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='force_backward', full_name='caffe.NetParameter.force_backward', index=4,\n      number=5, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='state', full_name='caffe.NetParameter.state', index=5,\n      number=6, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='debug_info', full_name='caffe.NetParameter.debug_info', index=6,\n      number=7, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  
create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='layer', full_name='caffe.NetParameter.layer', index=7,\n      number=100, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='layers', full_name='caffe.NetParameter.layers', index=8,\n      number=2, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=713,\n  serialized_end=983,\n)\n\n\n_SOLVERPARAMETER = _descriptor.Descriptor(\n  name='SolverParameter',\n  full_name='caffe.SolverParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='net', full_name='caffe.SolverParameter.net', index=0,\n      number=24, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='net_param', full_name='caffe.SolverParameter.net_param', index=1,\n      number=25, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='train_net', full_name='caffe.SolverParameter.train_net', index=2,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_net', full_name='caffe.SolverParameter.test_net', index=3,\n      number=2, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='train_net_param', full_name='caffe.SolverParameter.train_net_param', index=4,\n      number=21, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_net_param', full_name='caffe.SolverParameter.test_net_param', index=5,\n      number=22, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='train_state', 
full_name='caffe.SolverParameter.train_state', index=6,\n      number=26, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_state', full_name='caffe.SolverParameter.test_state', index=7,\n      number=27, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_iter', full_name='caffe.SolverParameter.test_iter', index=8,\n      number=3, type=5, cpp_type=1, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_interval', full_name='caffe.SolverParameter.test_interval', index=9,\n      number=4, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_compute_loss', full_name='caffe.SolverParameter.test_compute_loss', index=10,\n      number=19, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='test_initialization', full_name='caffe.SolverParameter.test_initialization', index=11,\n      number=32, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='base_lr', full_name='caffe.SolverParameter.base_lr', index=12,\n      number=5, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='display', full_name='caffe.SolverParameter.display', index=13,\n      number=6, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='average_loss', full_name='caffe.SolverParameter.average_loss', index=14,\n      number=33, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='max_iter', full_name='caffe.SolverParameter.max_iter', index=15,\n      number=7, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n   
   message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='iter_size', full_name='caffe.SolverParameter.iter_size', index=16,\n      number=36, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='lr_policy', full_name='caffe.SolverParameter.lr_policy', index=17,\n      number=8, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='gamma', full_name='caffe.SolverParameter.gamma', index=18,\n      number=9, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='power', full_name='caffe.SolverParameter.power', index=19,\n      number=10, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='momentum', full_name='caffe.SolverParameter.momentum', 
index=20,\n      number=11, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_decay', full_name='caffe.SolverParameter.weight_decay', index=21,\n      number=12, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='regularization_type', full_name='caffe.SolverParameter.regularization_type', index=22,\n      number=29, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"L2\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stepsize', full_name='caffe.SolverParameter.stepsize', index=23,\n      number=13, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stepvalue', full_name='caffe.SolverParameter.stepvalue', index=24,\n      number=34, type=5, cpp_type=1, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, 
file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='clip_gradients', full_name='caffe.SolverParameter.clip_gradients', index=25,\n      number=35, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(-1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='snapshot', full_name='caffe.SolverParameter.snapshot', index=26,\n      number=14, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='snapshot_prefix', full_name='caffe.SolverParameter.snapshot_prefix', index=27,\n      number=15, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='snapshot_diff', full_name='caffe.SolverParameter.snapshot_diff', index=28,\n      number=16, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='snapshot_format', full_name='caffe.SolverParameter.snapshot_format', index=29,\n      number=37, type=14, cpp_type=8, label=1,\n      
has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='solver_mode', full_name='caffe.SolverParameter.solver_mode', index=30,\n      number=17, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='device_id', full_name='caffe.SolverParameter.device_id', index=31,\n      number=18, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='random_seed', full_name='caffe.SolverParameter.random_seed', index=32,\n      number=20, type=3, cpp_type=2, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.SolverParameter.type', index=33,\n      number=40, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"SGD\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='delta', 
full_name='caffe.SolverParameter.delta', index=34,\n      number=31, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1e-08),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='momentum2', full_name='caffe.SolverParameter.momentum2', index=35,\n      number=39, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.999),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='rms_decay', full_name='caffe.SolverParameter.rms_decay', index=36,\n      number=38, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.99),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='debug_info', full_name='caffe.SolverParameter.debug_info', index=37,\n      number=23, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='snapshot_after_train', full_name='caffe.SolverParameter.snapshot_after_train', index=38,\n      number=28, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n     
 serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='solver_type', full_name='caffe.SolverParameter.solver_type', index=39,\n      number=30, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='layer_wise_reduce', full_name='caffe.SolverParameter.layer_wise_reduce', index=40,\n      number=41, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weights', full_name='caffe.SolverParameter.weights', index=41,\n      number=42, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SOLVERPARAMETER_SNAPSHOTFORMAT,\n    _SOLVERPARAMETER_SOLVERMODE,\n    _SOLVERPARAMETER_SOLVERTYPE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=986,\n  serialized_end=2350,\n)\n\n\n_SOLVERSTATE = _descriptor.Descriptor(\n  name='SolverState',\n  full_name='caffe.SolverState',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='iter', 
full_name='caffe.SolverState.iter', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='learned_net', full_name='caffe.SolverState.learned_net', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='history', full_name='caffe.SolverState.history', index=2,\n      number=3, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='current_step', full_name='caffe.SolverState.current_step', index=3,\n      number=4, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2352,\n  serialized_end=2460,\n)\n\n\n_NETSTATE = _descriptor.Descriptor(\n  name='NetState',\n  full_name='caffe.NetState',\n  filename=None,\n  file=DESCRIPTOR,\n  
containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='phase', full_name='caffe.NetState.phase', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='level', full_name='caffe.NetState.level', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stage', full_name='caffe.NetState.stage', index=2,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2462,\n  serialized_end=2540,\n)\n\n\n_NETSTATERULE = _descriptor.Descriptor(\n  name='NetStateRule',\n  full_name='caffe.NetStateRule',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='phase', full_name='caffe.NetStateRule.phase', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, 
containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='min_level', full_name='caffe.NetStateRule.min_level', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='max_level', full_name='caffe.NetStateRule.max_level', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stage', full_name='caffe.NetStateRule.stage', index=3,\n      number=4, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='not_stage', full_name='caffe.NetStateRule.not_stage', index=4,\n      number=5, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  
],\n  serialized_start=2542,\n  serialized_end=2657,\n)\n\n\n_PARAMSPEC = _descriptor.Descriptor(\n  name='ParamSpec',\n  full_name='caffe.ParamSpec',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.ParamSpec.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='share_mode', full_name='caffe.ParamSpec.share_mode', index=1,\n      number=2, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='lr_mult', full_name='caffe.ParamSpec.lr_mult', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='decay_mult', full_name='caffe.ParamSpec.decay_mult', index=3,\n      number=4, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  
enum_types=[\n    _PARAMSPEC_DIMCHECKMODE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2660,\n  serialized_end=2823,\n)\n\n\n_LAYERPARAMETER = _descriptor.Descriptor(\n  name='LayerParameter',\n  full_name='caffe.LayerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.LayerParameter.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.LayerParameter.type', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bottom', full_name='caffe.LayerParameter.bottom', index=2,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='top', full_name='caffe.LayerParameter.top', index=3,\n      number=4, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='phase', full_name='caffe.LayerParameter.phase', index=4,\n      number=10, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='loss_weight', full_name='caffe.LayerParameter.loss_weight', index=5,\n      number=5, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='param', full_name='caffe.LayerParameter.param', index=6,\n      number=6, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.LayerParameter.blobs', index=7,\n      number=7, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='propagate_down', full_name='caffe.LayerParameter.propagate_down', index=8,\n      number=11, type=8, cpp_type=7, label=3,\n      has_default_value=False, default_value=[],\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='include', full_name='caffe.LayerParameter.include', index=9,\n      number=8, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='exclude', full_name='caffe.LayerParameter.exclude', index=10,\n      number=9, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='transform_param', full_name='caffe.LayerParameter.transform_param', index=11,\n      number=100, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='loss_param', full_name='caffe.LayerParameter.loss_param', index=12,\n      number=101, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='accuracy_param', 
full_name='caffe.LayerParameter.accuracy_param', index=13,\n      number=102, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='argmax_param', full_name='caffe.LayerParameter.argmax_param', index=14,\n      number=103, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='batch_norm_param', full_name='caffe.LayerParameter.batch_norm_param', index=15,\n      number=139, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_param', full_name='caffe.LayerParameter.bias_param', index=16,\n      number=141, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='clip_param', full_name='caffe.LayerParameter.clip_param', index=17,\n      number=148, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n     
 serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='concat_param', full_name='caffe.LayerParameter.concat_param', index=18,\n      number=104, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='contrastive_loss_param', full_name='caffe.LayerParameter.contrastive_loss_param', index=19,\n      number=105, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='convolution_param', full_name='caffe.LayerParameter.convolution_param', index=20,\n      number=106, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='crop_param', full_name='caffe.LayerParameter.crop_param', index=21,\n      number=144, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='data_param', full_name='caffe.LayerParameter.data_param', index=22,\n      number=107, type=11, cpp_type=10, 
label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='dropout_param', full_name='caffe.LayerParameter.dropout_param', index=23,\n      number=108, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='dummy_data_param', full_name='caffe.LayerParameter.dummy_data_param', index=24,\n      number=109, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='eltwise_param', full_name='caffe.LayerParameter.eltwise_param', index=25,\n      number=110, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='elu_param', full_name='caffe.LayerParameter.elu_param', index=26,\n      number=140, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n   
 _descriptor.FieldDescriptor(\n      name='embed_param', full_name='caffe.LayerParameter.embed_param', index=27,\n      number=137, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='exp_param', full_name='caffe.LayerParameter.exp_param', index=28,\n      number=111, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='flatten_param', full_name='caffe.LayerParameter.flatten_param', index=29,\n      number=135, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hdf5_data_param', full_name='caffe.LayerParameter.hdf5_data_param', index=30,\n      number=112, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hdf5_output_param', full_name='caffe.LayerParameter.hdf5_output_param', index=31,\n      number=113, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, 
containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hinge_loss_param', full_name='caffe.LayerParameter.hinge_loss_param', index=32,\n      number=114, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='image_data_param', full_name='caffe.LayerParameter.image_data_param', index=33,\n      number=115, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='infogain_loss_param', full_name='caffe.LayerParameter.infogain_loss_param', index=34,\n      number=116, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='inner_product_param', full_name='caffe.LayerParameter.inner_product_param', index=35,\n      number=117, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='input_param', 
full_name='caffe.LayerParameter.input_param', index=36,\n      number=143, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='log_param', full_name='caffe.LayerParameter.log_param', index=37,\n      number=134, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='lrn_param', full_name='caffe.LayerParameter.lrn_param', index=38,\n      number=118, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='memory_data_param', full_name='caffe.LayerParameter.memory_data_param', index=39,\n      number=119, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mvn_param', full_name='caffe.LayerParameter.mvn_param', index=40,\n      number=120, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='parameter_param', full_name='caffe.LayerParameter.parameter_param', index=41,\n      number=145, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pooling_param', full_name='caffe.LayerParameter.pooling_param', index=42,\n      number=121, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='power_param', full_name='caffe.LayerParameter.power_param', index=43,\n      number=122, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='prelu_param', full_name='caffe.LayerParameter.prelu_param', index=44,\n      number=131, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='python_param', full_name='caffe.LayerParameter.python_param', index=45,\n      number=130, type=11, cpp_type=10, label=1,\n      
has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='recurrent_param', full_name='caffe.LayerParameter.recurrent_param', index=46,\n      number=146, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='reduction_param', full_name='caffe.LayerParameter.reduction_param', index=47,\n      number=136, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='relu_param', full_name='caffe.LayerParameter.relu_param', index=48,\n      number=123, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='reshape_param', full_name='caffe.LayerParameter.reshape_param', index=49,\n      number=133, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='scale_param', full_name='caffe.LayerParameter.scale_param', index=50,\n      number=142, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='sigmoid_param', full_name='caffe.LayerParameter.sigmoid_param', index=51,\n      number=124, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='softmax_param', full_name='caffe.LayerParameter.softmax_param', index=52,\n      number=125, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='spp_param', full_name='caffe.LayerParameter.spp_param', index=53,\n      number=132, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='slice_param', full_name='caffe.LayerParameter.slice_param', index=54,\n      number=126, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n     
 is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='swish_param', full_name='caffe.LayerParameter.swish_param', index=55,\n      number=147, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='tanh_param', full_name='caffe.LayerParameter.tanh_param', index=56,\n      number=127, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='threshold_param', full_name='caffe.LayerParameter.threshold_param', index=57,\n      number=128, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='tile_param', full_name='caffe.LayerParameter.tile_param', index=58,\n      number=138, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='window_data_param', full_name='caffe.LayerParameter.window_data_param', index=59,\n      
number=129, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2826,\n  serialized_end=5476,\n)\n\n\n_TRANSFORMATIONPARAMETER = _descriptor.Descriptor(\n  name='TransformationParameter',\n  full_name='caffe.TransformationParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.TransformationParameter.scale', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.TransformationParameter.mirror', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.TransformationParameter.crop_size', index=2,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.TransformationParameter.mean_file', index=3,\n      number=4, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mean_value', full_name='caffe.TransformationParameter.mean_value', index=4,\n      number=5, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='force_color', full_name='caffe.TransformationParameter.force_color', index=5,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='force_gray', full_name='caffe.TransformationParameter.force_gray', index=6,\n      number=7, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  
oneofs=[\n  ],\n  serialized_start=5479,\n  serialized_end=5661,\n)\n\n\n_LOSSPARAMETER = _descriptor.Descriptor(\n  name='LossParameter',\n  full_name='caffe.LossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='ignore_label', full_name='caffe.LossParameter.ignore_label', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='normalization', full_name='caffe.LossParameter.normalization', index=1,\n      number=3, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='normalize', full_name='caffe.LossParameter.normalize', index=2,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=False, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _LOSSPARAMETER_NORMALIZATIONMODE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5664,\n  serialized_end=5858,\n)\n\n\n_ACCURACYPARAMETER = _descriptor.Descriptor(\n  name='AccuracyParameter',\n  full_name='caffe.AccuracyParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  
containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='top_k', full_name='caffe.AccuracyParameter.top_k', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.AccuracyParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='ignore_label', full_name='caffe.AccuracyParameter.ignore_label', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5860,\n  serialized_end=5936,\n)\n\n\n_ARGMAXPARAMETER = _descriptor.Descriptor(\n  name='ArgMaxParameter',\n  full_name='caffe.ArgMaxParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='out_max_val', full_name='caffe.ArgMaxParameter.out_max_val', index=0,\n      number=1, type=8, cpp_type=7, label=1,\n      has_default_value=True, 
default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='top_k', full_name='caffe.ArgMaxParameter.top_k', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ArgMaxParameter.axis', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5938,\n  serialized_end=6015,\n)\n\n\n_CLIPPARAMETER = _descriptor.Descriptor(\n  name='ClipParameter',\n  full_name='caffe.ClipParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='min', full_name='caffe.ClipParameter.min', index=0,\n      number=1, type=2, cpp_type=6, label=2,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='max', 
full_name='caffe.ClipParameter.max', index=1,\n      number=2, type=2, cpp_type=6, label=2,\n      has_default_value=False, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6017,\n  serialized_end=6058,\n)\n\n\n_CONCATPARAMETER = _descriptor.Descriptor(\n  name='ConcatParameter',\n  full_name='caffe.ConcatParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ConcatParameter.axis', index=0,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='concat_dim', full_name='caffe.ConcatParameter.concat_dim', index=1,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6060,\n  serialized_end=6117,\n)\n\n\n_BATCHNORMPARAMETER = _descriptor.Descriptor(\n  name='BatchNormParameter',\n  
full_name='caffe.BatchNormParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='use_global_stats', full_name='caffe.BatchNormParameter.use_global_stats', index=0,\n      number=1, type=8, cpp_type=7, label=1,\n      has_default_value=False, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='moving_average_fraction', full_name='caffe.BatchNormParameter.moving_average_fraction', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.999),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='eps', full_name='caffe.BatchNormParameter.eps', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1e-05),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6119,\n  serialized_end=6225,\n)\n\n\n_BIASPARAMETER = _descriptor.Descriptor(\n  name='BiasParameter',\n  full_name='caffe.BiasParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', 
full_name='caffe.BiasParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='num_axes', full_name='caffe.BiasParameter.num_axes', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='filler', full_name='caffe.BiasParameter.filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6227,\n  serialized_end=6320,\n)\n\n\n_CONTRASTIVELOSSPARAMETER = _descriptor.Descriptor(\n  name='ContrastiveLossParameter',\n  full_name='caffe.ContrastiveLossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='margin', full_name='caffe.ContrastiveLossParameter.margin', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='legacy_version', full_name='caffe.ContrastiveLossParameter.legacy_version', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6322,\n  serialized_end=6398,\n)\n\n\n_CONVOLUTIONPARAMETER = _descriptor.Descriptor(\n  name='ConvolutionParameter',\n  full_name='caffe.ConvolutionParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.ConvolutionParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.ConvolutionParameter.bias_term', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.ConvolutionParameter.pad', 
index=2,\n      number=3, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernel_size', full_name='caffe.ConvolutionParameter.kernel_size', index=3,\n      number=4, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.ConvolutionParameter.stride', index=4,\n      number=6, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='dilation', full_name='caffe.ConvolutionParameter.dilation', index=5,\n      number=18, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pad_h', full_name='caffe.ConvolutionParameter.pad_h', index=6,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n 
   _descriptor.FieldDescriptor(\n      name='pad_w', full_name='caffe.ConvolutionParameter.pad_w', index=7,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernel_h', full_name='caffe.ConvolutionParameter.kernel_h', index=8,\n      number=11, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernel_w', full_name='caffe.ConvolutionParameter.kernel_w', index=9,\n      number=12, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride_h', full_name='caffe.ConvolutionParameter.stride_h', index=10,\n      number=13, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride_w', full_name='caffe.ConvolutionParameter.stride_w', index=11,\n      number=14, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='group', full_name='caffe.ConvolutionParameter.group', index=12,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.ConvolutionParameter.weight_filler', index=13,\n      number=7, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.ConvolutionParameter.bias_filler', index=14,\n      number=8, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.ConvolutionParameter.engine', index=15,\n      number=15, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ConvolutionParameter.axis', index=16,\n      number=16, type=5, cpp_type=1, label=1,\n      
has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='force_nd_im2col', full_name='caffe.ConvolutionParameter.force_nd_im2col', index=17,\n      number=17, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _CONVOLUTIONPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6401,\n  serialized_end=6909,\n)\n\n\n_CROPPARAMETER = _descriptor.Descriptor(\n  name='CropParameter',\n  full_name='caffe.CropParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.CropParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=2,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='offset', full_name='caffe.CropParameter.offset', index=1,\n      number=2, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  
create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6911,\n  serialized_end=6959,\n)\n\n\n_DATAPARAMETER = _descriptor.Descriptor(\n  name='DataParameter',\n  full_name='caffe.DataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.DataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.DataParameter.batch_size', index=1,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='rand_skip', full_name='caffe.DataParameter.rand_skip', index=2,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='backend', full_name='caffe.DataParameter.backend', index=3,\n      number=8, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.DataParameter.scale', index=4,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.DataParameter.mean_file', index=5,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.DataParameter.crop_size', index=6,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.DataParameter.mirror', index=7,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='force_encoded_color', full_name='caffe.DataParameter.force_encoded_color', 
index=8,\n      number=9, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='prefetch', full_name='caffe.DataParameter.prefetch', index=9,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=4,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _DATAPARAMETER_DB,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6962,\n  serialized_end=7254,\n)\n\n\n_DROPOUTPARAMETER = _descriptor.Descriptor(\n  name='DropoutParameter',\n  full_name='caffe.DropoutParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='dropout_ratio', full_name='caffe.DropoutParameter.dropout_ratio', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.5),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7256,\n  serialized_end=7302,\n)\n\n\n_DUMMYDATAPARAMETER = _descriptor.Descriptor(\n  name='DummyDataParameter',\n  
full_name='caffe.DummyDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='data_filler', full_name='caffe.DummyDataParameter.data_filler', index=0,\n      number=1, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.DummyDataParameter.shape', index=1,\n      number=6, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='num', full_name='caffe.DummyDataParameter.num', index=2,\n      number=2, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.DummyDataParameter.channels', index=3,\n      number=3, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.DummyDataParameter.height', index=4,\n      number=4, type=13, cpp_type=3, label=3,\n      
has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.DummyDataParameter.width', index=5,\n      number=5, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7305,\n  serialized_end=7465,\n)\n\n\n_ELTWISEPARAMETER = _descriptor.Descriptor(\n  name='EltwiseParameter',\n  full_name='caffe.EltwiseParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='operation', full_name='caffe.EltwiseParameter.operation', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='coeff', full_name='caffe.EltwiseParameter.coeff', index=1,\n      number=2, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='stable_prod_grad', full_name='caffe.EltwiseParameter.stable_prod_grad', index=2,\n      number=3, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _ELTWISEPARAMETER_ELTWISEOP,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7468,\n  serialized_end=7633,\n)\n\n\n_ELUPARAMETER = _descriptor.Descriptor(\n  name='ELUParameter',\n  full_name='caffe.ELUParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='alpha', full_name='caffe.ELUParameter.alpha', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7635,\n  serialized_end=7667,\n)\n\n\n_EMBEDPARAMETER = _descriptor.Descriptor(\n  name='EmbedParameter',\n  full_name='caffe.EmbedParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.EmbedParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      
has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='input_dim', full_name='caffe.EmbedParameter.input_dim', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.EmbedParameter.bias_term', index=2,\n      number=3, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.EmbedParameter.weight_filler', index=3,\n      number=4, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.EmbedParameter.bias_filler', index=4,\n      number=5, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  
enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7670,\n  serialized_end=7842,\n)\n\n\n_EXPPARAMETER = _descriptor.Descriptor(\n  name='ExpParameter',\n  full_name='caffe.ExpParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='base', full_name='caffe.ExpParameter.base', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(-1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.ExpParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shift', full_name='caffe.ExpParameter.shift', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7844,\n  serialized_end=7912,\n)\n\n\n_FLATTENPARAMETER = _descriptor.Descriptor(\n  name='FlattenParameter',\n  
full_name='caffe.FlattenParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.FlattenParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='end_axis', full_name='caffe.FlattenParameter.end_axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7914,\n  serialized_end=7971,\n)\n\n\n_HDF5DATAPARAMETER = _descriptor.Descriptor(\n  name='HDF5DataParameter',\n  full_name='caffe.HDF5DataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.HDF5DataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='batch_size', 
full_name='caffe.HDF5DataParameter.batch_size', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shuffle', full_name='caffe.HDF5DataParameter.shuffle', index=2,\n      number=3, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7973,\n  serialized_end=8052,\n)\n\n\n_HDF5OUTPUTPARAMETER = _descriptor.Descriptor(\n  name='HDF5OutputParameter',\n  full_name='caffe.HDF5OutputParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='file_name', full_name='caffe.HDF5OutputParameter.file_name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8054,\n  serialized_end=8094,\n)\n\n\n_HINGELOSSPARAMETER = _descriptor.Descriptor(\n  
name='HingeLossParameter',\n  full_name='caffe.HingeLossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='norm', full_name='caffe.HingeLossParameter.norm', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _HINGELOSSPARAMETER_NORM,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8096,\n  serialized_end=8190,\n)\n\n\n_IMAGEDATAPARAMETER = _descriptor.Descriptor(\n  name='ImageDataParameter',\n  full_name='caffe.ImageDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.ImageDataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.ImageDataParameter.batch_size', index=1,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='rand_skip', full_name='caffe.ImageDataParameter.rand_skip', index=2,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shuffle', full_name='caffe.ImageDataParameter.shuffle', index=3,\n      number=8, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='new_height', full_name='caffe.ImageDataParameter.new_height', index=4,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='new_width', full_name='caffe.ImageDataParameter.new_width', index=5,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='is_color', full_name='caffe.ImageDataParameter.is_color', index=6,\n      number=11, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.ImageDataParameter.scale', index=7,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.ImageDataParameter.mean_file', index=8,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.ImageDataParameter.crop_size', index=9,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.ImageDataParameter.mirror', index=10,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='root_folder', full_name='caffe.ImageDataParameter.root_folder', index=11,\n      number=12, type=9, cpp_type=9, label=1,\n      
has_default_value=True, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8193,\n  serialized_end=8472,\n)\n\n\n_INFOGAINLOSSPARAMETER = _descriptor.Descriptor(\n  name='InfogainLossParameter',\n  full_name='caffe.InfogainLossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.InfogainLossParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.InfogainLossParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8474,\n  serialized_end=8530,\n)\n\n\n_INNERPRODUCTPARAMETER = _descriptor.Descriptor(\n  name='InnerProductParameter',\n  full_name='caffe.InnerProductParameter',\n  filename=None,\n  
file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.InnerProductParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.InnerProductParameter.bias_term', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.InnerProductParameter.weight_filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.InnerProductParameter.bias_filler', index=3,\n      number=4, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.InnerProductParameter.axis', index=4,\n      number=5, type=5, cpp_type=1, label=1,\n      
has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='transpose', full_name='caffe.InnerProductParameter.transpose', index=5,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8533,\n  serialized_end=8736,\n)\n\n\n_INPUTPARAMETER = _descriptor.Descriptor(\n  name='InputParameter',\n  full_name='caffe.InputParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.InputParameter.shape', index=0,\n      number=1, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8738,\n  serialized_end=8787,\n)\n\n\n_LOGPARAMETER = _descriptor.Descriptor(\n  name='LogParameter',\n  full_name='caffe.LogParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  
create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='base', full_name='caffe.LogParameter.base', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(-1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.LogParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shift', full_name='caffe.LogParameter.shift', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8789,\n  serialized_end=8857,\n)\n\n\n_LRNPARAMETER = _descriptor.Descriptor(\n  name='LRNParameter',\n  full_name='caffe.LRNParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='local_size', full_name='caffe.LRNParameter.local_size', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=5,\n      message_type=None, 
enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='alpha', full_name='caffe.LRNParameter.alpha', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='beta', full_name='caffe.LRNParameter.beta', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.75),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='norm_region', full_name='caffe.LRNParameter.norm_region', index=3,\n      number=4, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='k', full_name='caffe.LRNParameter.k', index=4,\n      number=5, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.LRNParameter.engine', index=5,\n      number=6, type=14, cpp_type=8, label=1,\n      
has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _LRNPARAMETER_NORMREGION,\n    _LRNPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8860,\n  serialized_end=9172,\n)\n\n\n_MEMORYDATAPARAMETER = _descriptor.Descriptor(\n  name='MemoryDataParameter',\n  full_name='caffe.MemoryDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.MemoryDataParameter.batch_size', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.MemoryDataParameter.channels', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.MemoryDataParameter.height', index=2,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, 
file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.MemoryDataParameter.width', index=3,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9174,\n  serialized_end=9264,\n)\n\n\n_MVNPARAMETER = _descriptor.Descriptor(\n  name='MVNParameter',\n  full_name='caffe.MVNParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='normalize_variance', full_name='caffe.MVNParameter.normalize_variance', index=0,\n      number=1, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='across_channels', full_name='caffe.MVNParameter.across_channels', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='eps', full_name='caffe.MVNParameter.eps', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, 
default_value=float(1e-09),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9266,\n  serialized_end=9366,\n)\n\n\n_PARAMETERPARAMETER = _descriptor.Descriptor(\n  name='ParameterParameter',\n  full_name='caffe.ParameterParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.ParameterParameter.shape', index=0,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9368,\n  serialized_end=9421,\n)\n\n\n_POOLINGPARAMETER = _descriptor.Descriptor(\n  name='PoolingParameter',\n  full_name='caffe.PoolingParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='pool', full_name='caffe.PoolingParameter.pool', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  
create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.PoolingParameter.pad', index=1,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pad_h', full_name='caffe.PoolingParameter.pad_h', index=2,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pad_w', full_name='caffe.PoolingParameter.pad_w', index=3,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernel_size', full_name='caffe.PoolingParameter.kernel_size', index=4,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernel_h', full_name='caffe.PoolingParameter.kernel_h', index=5,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernel_w', full_name='caffe.PoolingParameter.kernel_w', index=6,\n      number=6, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.PoolingParameter.stride', index=7,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride_h', full_name='caffe.PoolingParameter.stride_h', index=8,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride_w', full_name='caffe.PoolingParameter.stride_w', index=9,\n      number=8, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.PoolingParameter.engine', index=10,\n      number=11, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='global_pooling', full_name='caffe.PoolingParameter.global_pooling', index=11,\n      number=12, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='round_mode', full_name='caffe.PoolingParameter.round_mode', index=12,\n      number=13, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _POOLINGPARAMETER_POOLMETHOD,\n    _POOLINGPARAMETER_ENGINE,\n    _POOLINGPARAMETER_ROUNDMODE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9424,\n  serialized_end=9937,\n)\n\n\n_POWERPARAMETER = _descriptor.Descriptor(\n  name='PowerParameter',\n  full_name='caffe.PowerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='power', full_name='caffe.PowerParameter.power', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  
create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.PowerParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shift', full_name='caffe.PowerParameter.shift', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9939,\n  serialized_end=10009,\n)\n\n\n_PYTHONPARAMETER = _descriptor.Descriptor(\n  name='PythonParameter',\n  full_name='caffe.PythonParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='module', full_name='caffe.PythonParameter.module', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='layer', full_name='caffe.PythonParameter.layer', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, 
default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='param_str', full_name='caffe.PythonParameter.param_str', index=2,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='share_in_parallel', full_name='caffe.PythonParameter.share_in_parallel', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10011,\n  serialized_end=10114,\n)\n\n\n_RECURRENTPARAMETER = _descriptor.Descriptor(\n  name='RecurrentParameter',\n  full_name='caffe.RecurrentParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.RecurrentParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  
create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.RecurrentParameter.weight_filler', index=1,\n      number=2, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.RecurrentParameter.bias_filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='debug_info', full_name='caffe.RecurrentParameter.debug_info', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='expose_hidden', full_name='caffe.RecurrentParameter.expose_hidden', index=4,\n      number=5, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10117,\n  
serialized_end=10309,\n)\n\n\n_REDUCTIONPARAMETER = _descriptor.Descriptor(\n  name='ReductionParameter',\n  full_name='caffe.ReductionParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='operation', full_name='caffe.ReductionParameter.operation', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ReductionParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='coeff', full_name='caffe.ReductionParameter.coeff', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _REDUCTIONPARAMETER_REDUCTIONOP,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10312,\n  serialized_end=10485,\n)\n\n\n_RELUPARAMETER = _descriptor.Descriptor(\n  name='ReLUParameter',\n  full_name='caffe.ReLUParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  
create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='negative_slope', full_name='caffe.ReLUParameter.negative_slope', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.ReLUParameter.engine', index=1,\n      number=2, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _RELUPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10488,\n  serialized_end=10629,\n)\n\n\n_RESHAPEPARAMETER = _descriptor.Descriptor(\n  name='ReshapeParameter',\n  full_name='caffe.ReshapeParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.ReshapeParameter.shape', index=0,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ReshapeParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, 
default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='num_axes', full_name='caffe.ReshapeParameter.num_axes', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10631,\n  serialized_end=10721,\n)\n\n\n_SCALEPARAMETER = _descriptor.Descriptor(\n  name='ScaleParameter',\n  full_name='caffe.ScaleParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ScaleParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='num_axes', full_name='caffe.ScaleParameter.num_axes', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='filler', 
full_name='caffe.ScaleParameter.filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.ScaleParameter.bias_term', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.ScaleParameter.bias_filler', index=4,\n      number=5, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10724,\n  serialized_end=10889,\n)\n\n\n_SIGMOIDPARAMETER = _descriptor.Descriptor(\n  name='SigmoidParameter',\n  full_name='caffe.SigmoidParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.SigmoidParameter.engine', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n  
    serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SIGMOIDPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10891,\n  serialized_end=11011,\n)\n\n\n_SLICEPARAMETER = _descriptor.Descriptor(\n  name='SliceParameter',\n  full_name='caffe.SliceParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.SliceParameter.axis', index=0,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='slice_point', full_name='caffe.SliceParameter.slice_point', index=1,\n      number=2, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='slice_dim', full_name='caffe.SliceParameter.slice_dim', index=2,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  
extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11013,\n  serialized_end=11089,\n)\n\n\n_SOFTMAXPARAMETER = _descriptor.Descriptor(\n  name='SoftmaxParameter',\n  full_name='caffe.SoftmaxParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.SoftmaxParameter.engine', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.SoftmaxParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SOFTMAXPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11092,\n  serialized_end=11229,\n)\n\n\n_SWISHPARAMETER = _descriptor.Descriptor(\n  name='SwishParameter',\n  full_name='caffe.SwishParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='beta', full_name='caffe.SwishParameter.beta', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11231,\n  serialized_end=11264,\n)\n\n\n_TANHPARAMETER = _descriptor.Descriptor(\n  name='TanHParameter',\n  full_name='caffe.TanHParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.TanHParameter.engine', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _TANHPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11266,\n  serialized_end=11380,\n)\n\n\n_TILEPARAMETER = _descriptor.Descriptor(\n  name='TileParameter',\n  full_name='caffe.TileParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.TileParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='tiles', full_name='caffe.TileParameter.tiles', index=1,\n      number=2, type=5, 
cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11382,\n  serialized_end=11429,\n)\n\n\n_THRESHOLDPARAMETER = _descriptor.Descriptor(\n  name='ThresholdParameter',\n  full_name='caffe.ThresholdParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='threshold', full_name='caffe.ThresholdParameter.threshold', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11431,\n  serialized_end=11473,\n)\n\n\n_WINDOWDATAPARAMETER = _descriptor.Descriptor(\n  name='WindowDataParameter',\n  full_name='caffe.WindowDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.WindowDataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.WindowDataParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.WindowDataParameter.mean_file', index=2,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.WindowDataParameter.batch_size', index=3,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.WindowDataParameter.crop_size', index=4,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.WindowDataParameter.mirror', index=5,\n      number=6, type=8, cpp_type=7, label=1,\n      
has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='fg_threshold', full_name='caffe.WindowDataParameter.fg_threshold', index=6,\n      number=7, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.5),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bg_threshold', full_name='caffe.WindowDataParameter.bg_threshold', index=7,\n      number=8, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.5),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='fg_fraction', full_name='caffe.WindowDataParameter.fg_fraction', index=8,\n      number=9, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.25),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='context_pad', full_name='caffe.WindowDataParameter.context_pad', index=9,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='crop_mode', full_name='caffe.WindowDataParameter.crop_mode', index=10,\n      number=11, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"warp\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='cache_images', full_name='caffe.WindowDataParameter.cache_images', index=11,\n      number=12, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='root_folder', full_name='caffe.WindowDataParameter.root_folder', index=12,\n      number=13, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11476,\n  serialized_end=11797,\n)\n\n\n_SPPPARAMETER = _descriptor.Descriptor(\n  name='SPPParameter',\n  full_name='caffe.SPPParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='pyramid_height', full_name='caffe.SPPParameter.pyramid_height', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, 
default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pool', full_name='caffe.SPPParameter.pool', index=1,\n      number=2, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.SPPParameter.engine', index=2,\n      number=6, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SPPPARAMETER_POOLMETHOD,\n    _SPPPARAMETER_ENGINE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11800,\n  serialized_end=12035,\n)\n\n\n_V1LAYERPARAMETER = _descriptor.Descriptor(\n  name='V1LayerParameter',\n  full_name='caffe.V1LayerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='bottom', full_name='caffe.V1LayerParameter.bottom', index=0,\n      number=2, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='top', full_name='caffe.V1LayerParameter.top', index=1,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.V1LayerParameter.name', index=2,\n      number=4, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='include', full_name='caffe.V1LayerParameter.include', index=3,\n      number=32, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='exclude', full_name='caffe.V1LayerParameter.exclude', index=4,\n      number=33, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.V1LayerParameter.type', index=5,\n      number=5, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.V1LayerParameter.blobs', index=6,\n      number=6, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='param', full_name='caffe.V1LayerParameter.param', index=7,\n      number=1001, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='blob_share_mode', full_name='caffe.V1LayerParameter.blob_share_mode', index=8,\n      number=1002, type=14, cpp_type=8, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='blobs_lr', full_name='caffe.V1LayerParameter.blobs_lr', index=9,\n      number=7, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_decay', full_name='caffe.V1LayerParameter.weight_decay', index=10,\n      number=8, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='loss_weight', full_name='caffe.V1LayerParameter.loss_weight', index=11,\n      number=35, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='accuracy_param', full_name='caffe.V1LayerParameter.accuracy_param', index=12,\n      number=27, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='argmax_param', full_name='caffe.V1LayerParameter.argmax_param', index=13,\n      number=23, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='concat_param', full_name='caffe.V1LayerParameter.concat_param', index=14,\n      number=9, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='contrastive_loss_param', 
full_name='caffe.V1LayerParameter.contrastive_loss_param', index=15,\n      number=40, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='convolution_param', full_name='caffe.V1LayerParameter.convolution_param', index=16,\n      number=10, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='data_param', full_name='caffe.V1LayerParameter.data_param', index=17,\n      number=11, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='dropout_param', full_name='caffe.V1LayerParameter.dropout_param', index=18,\n      number=12, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='dummy_data_param', full_name='caffe.V1LayerParameter.dummy_data_param', index=19,\n      number=26, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='eltwise_param', full_name='caffe.V1LayerParameter.eltwise_param', index=20,\n      number=24, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='exp_param', full_name='caffe.V1LayerParameter.exp_param', index=21,\n      number=41, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hdf5_data_param', full_name='caffe.V1LayerParameter.hdf5_data_param', index=22,\n      number=13, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hdf5_output_param', full_name='caffe.V1LayerParameter.hdf5_output_param', index=23,\n      number=14, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hinge_loss_param', full_name='caffe.V1LayerParameter.hinge_loss_param', 
index=24,\n      number=29, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='image_data_param', full_name='caffe.V1LayerParameter.image_data_param', index=25,\n      number=15, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='infogain_loss_param', full_name='caffe.V1LayerParameter.infogain_loss_param', index=26,\n      number=16, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='inner_product_param', full_name='caffe.V1LayerParameter.inner_product_param', index=27,\n      number=17, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='lrn_param', full_name='caffe.V1LayerParameter.lrn_param', index=28,\n      number=18, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='memory_data_param', full_name='caffe.V1LayerParameter.memory_data_param', index=29,\n      number=22, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mvn_param', full_name='caffe.V1LayerParameter.mvn_param', index=30,\n      number=34, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pooling_param', full_name='caffe.V1LayerParameter.pooling_param', index=31,\n      number=19, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='power_param', full_name='caffe.V1LayerParameter.power_param', index=32,\n      number=21, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='relu_param', full_name='caffe.V1LayerParameter.relu_param', index=33,\n      number=30, type=11, cpp_type=10, label=1,\n      
has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='sigmoid_param', full_name='caffe.V1LayerParameter.sigmoid_param', index=34,\n      number=38, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='softmax_param', full_name='caffe.V1LayerParameter.softmax_param', index=35,\n      number=39, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='slice_param', full_name='caffe.V1LayerParameter.slice_param', index=36,\n      number=31, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='tanh_param', full_name='caffe.V1LayerParameter.tanh_param', index=37,\n      number=37, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='threshold_param', full_name='caffe.V1LayerParameter.threshold_param', index=38,\n      number=25, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='window_data_param', full_name='caffe.V1LayerParameter.window_data_param', index=39,\n      number=20, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='transform_param', full_name='caffe.V1LayerParameter.transform_param', index=40,\n      number=36, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='loss_param', full_name='caffe.V1LayerParameter.loss_param', index=41,\n      number=42, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='layer', full_name='caffe.V1LayerParameter.layer', index=42,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, 
containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _V1LAYERPARAMETER_LAYERTYPE,\n    _V1LAYERPARAMETER_DIMCHECKMODE,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=12038,\n  serialized_end=14566,\n)\n\n\n_V0LAYERPARAMETER = _descriptor.Descriptor(\n  name='V0LayerParameter',\n  full_name='caffe.V0LayerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.V0LayerParameter.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.V0LayerParameter.type', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.V0LayerParameter.num_output', index=2,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='biasterm', full_name='caffe.V0LayerParameter.biasterm', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.V0LayerParameter.weight_filler', index=4,\n      number=5, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.V0LayerParameter.bias_filler', index=5,\n      number=6, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.V0LayerParameter.pad', index=6,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='kernelsize', full_name='caffe.V0LayerParameter.kernelsize', index=7,\n      number=8, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='group', full_name='caffe.V0LayerParameter.group', index=8,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.V0LayerParameter.stride', index=9,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='pool', full_name='caffe.V0LayerParameter.pool', index=10,\n      number=11, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='dropout_ratio', full_name='caffe.V0LayerParameter.dropout_ratio', index=11,\n      number=12, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.5),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='local_size', full_name='caffe.V0LayerParameter.local_size', index=12,\n      number=13, type=13, cpp_type=3, label=1,\n      has_default_value=True, 
default_value=5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='alpha', full_name='caffe.V0LayerParameter.alpha', index=13,\n      number=14, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='beta', full_name='caffe.V0LayerParameter.beta', index=14,\n      number=15, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.75),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='k', full_name='caffe.V0LayerParameter.k', index=15,\n      number=22, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.V0LayerParameter.source', index=16,\n      number=16, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='scale', 
full_name='caffe.V0LayerParameter.scale', index=17,\n      number=17, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(1),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='meanfile', full_name='caffe.V0LayerParameter.meanfile', index=18,\n      number=18, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=b\"\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='batchsize', full_name='caffe.V0LayerParameter.batchsize', index=19,\n      number=19, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='cropsize', full_name='caffe.V0LayerParameter.cropsize', index=20,\n      number=20, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.V0LayerParameter.mirror', index=21,\n      number=21, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, 
file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.V0LayerParameter.blobs', index=22,\n      number=50, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='blobs_lr', full_name='caffe.V0LayerParameter.blobs_lr', index=23,\n      number=51, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='weight_decay', full_name='caffe.V0LayerParameter.weight_decay', index=24,\n      number=52, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='rand_skip', full_name='caffe.V0LayerParameter.rand_skip', index=25,\n      number=53, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='det_fg_threshold', full_name='caffe.V0LayerParameter.det_fg_threshold', index=26,\n      number=54, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.5),\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='det_bg_threshold', full_name='caffe.V0LayerParameter.det_bg_threshold', index=27,\n      number=55, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.5),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='det_fg_fraction', full_name='caffe.V0LayerParameter.det_fg_fraction', index=28,\n      number=56, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=float(0.25),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='det_context_pad', full_name='caffe.V0LayerParameter.det_context_pad', index=29,\n      number=58, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='det_crop_mode', full_name='caffe.V0LayerParameter.det_crop_mode', index=30,\n      number=59, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=b\"warp\".decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    
_descriptor.FieldDescriptor(\n      name='new_num', full_name='caffe.V0LayerParameter.new_num', index=31,\n      number=60, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='new_channels', full_name='caffe.V0LayerParameter.new_channels', index=32,\n      number=61, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='new_height', full_name='caffe.V0LayerParameter.new_height', index=33,\n      number=62, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='new_width', full_name='caffe.V0LayerParameter.new_width', index=34,\n      number=63, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='shuffle_images', full_name='caffe.V0LayerParameter.shuffle_images', index=35,\n      number=64, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='concat_dim', full_name='caffe.V0LayerParameter.concat_dim', index=36,\n      number=65, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='hdf5_output_param', full_name='caffe.V0LayerParameter.hdf5_output_param', index=37,\n      number=1001, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _V0LAYERPARAMETER_POOLMETHOD,\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=14569,\n  serialized_end=15590,\n)\n\n\n_PRELUPARAMETER = _descriptor.Descriptor(\n  name='PReLUParameter',\n  full_name='caffe.PReLUParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  create_key=_descriptor._internal_create_key,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='filler', full_name='caffe.PReLUParameter.filler', index=0,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n    _descriptor.FieldDescriptor(\n      name='channel_shared', 
full_name='caffe.PReLUParameter.channel_shared', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  serialized_options=None,\n  is_extendable=False,\n  syntax='proto2',\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=15592,\n  serialized_end=15679,\n)\n\n_BLOBPROTO.fields_by_name['shape'].message_type = _BLOBSHAPE\n_BLOBPROTOVECTOR.fields_by_name['blobs'].message_type = _BLOBPROTO\n_FILLERPARAMETER.fields_by_name['variance_norm'].enum_type = _FILLERPARAMETER_VARIANCENORM\n_FILLERPARAMETER_VARIANCENORM.containing_type = _FILLERPARAMETER\n_NETPARAMETER.fields_by_name['input_shape'].message_type = _BLOBSHAPE\n_NETPARAMETER.fields_by_name['state'].message_type = _NETSTATE\n_NETPARAMETER.fields_by_name['layer'].message_type = _LAYERPARAMETER\n_NETPARAMETER.fields_by_name['layers'].message_type = _V1LAYERPARAMETER\n_SOLVERPARAMETER.fields_by_name['net_param'].message_type = _NETPARAMETER\n_SOLVERPARAMETER.fields_by_name['train_net_param'].message_type = _NETPARAMETER\n_SOLVERPARAMETER.fields_by_name['test_net_param'].message_type = _NETPARAMETER\n_SOLVERPARAMETER.fields_by_name['train_state'].message_type = _NETSTATE\n_SOLVERPARAMETER.fields_by_name['test_state'].message_type = _NETSTATE\n_SOLVERPARAMETER.fields_by_name['snapshot_format'].enum_type = _SOLVERPARAMETER_SNAPSHOTFORMAT\n_SOLVERPARAMETER.fields_by_name['solver_mode'].enum_type = _SOLVERPARAMETER_SOLVERMODE\n_SOLVERPARAMETER.fields_by_name['solver_type'].enum_type = _SOLVERPARAMETER_SOLVERTYPE\n_SOLVERPARAMETER_SNAPSHOTFORMAT.containing_type = _SOLVERPARAMETER\n_SOLVERPARAMETER_SOLVERMODE.containing_type = 
_SOLVERPARAMETER\n_SOLVERPARAMETER_SOLVERTYPE.containing_type = _SOLVERPARAMETER\n_SOLVERSTATE.fields_by_name['history'].message_type = _BLOBPROTO\n_NETSTATE.fields_by_name['phase'].enum_type = _PHASE\n_NETSTATERULE.fields_by_name['phase'].enum_type = _PHASE\n_PARAMSPEC.fields_by_name['share_mode'].enum_type = _PARAMSPEC_DIMCHECKMODE\n_PARAMSPEC_DIMCHECKMODE.containing_type = _PARAMSPEC\n_LAYERPARAMETER.fields_by_name['phase'].enum_type = _PHASE\n_LAYERPARAMETER.fields_by_name['param'].message_type = _PARAMSPEC\n_LAYERPARAMETER.fields_by_name['blobs'].message_type = _BLOBPROTO\n_LAYERPARAMETER.fields_by_name['include'].message_type = _NETSTATERULE\n_LAYERPARAMETER.fields_by_name['exclude'].message_type = _NETSTATERULE\n_LAYERPARAMETER.fields_by_name['transform_param'].message_type = _TRANSFORMATIONPARAMETER\n_LAYERPARAMETER.fields_by_name['loss_param'].message_type = _LOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['accuracy_param'].message_type = _ACCURACYPARAMETER\n_LAYERPARAMETER.fields_by_name['argmax_param'].message_type = _ARGMAXPARAMETER\n_LAYERPARAMETER.fields_by_name['batch_norm_param'].message_type = _BATCHNORMPARAMETER\n_LAYERPARAMETER.fields_by_name['bias_param'].message_type = _BIASPARAMETER\n_LAYERPARAMETER.fields_by_name['clip_param'].message_type = _CLIPPARAMETER\n_LAYERPARAMETER.fields_by_name['concat_param'].message_type = _CONCATPARAMETER\n_LAYERPARAMETER.fields_by_name['contrastive_loss_param'].message_type = _CONTRASTIVELOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['convolution_param'].message_type = _CONVOLUTIONPARAMETER\n_LAYERPARAMETER.fields_by_name['crop_param'].message_type = _CROPPARAMETER\n_LAYERPARAMETER.fields_by_name['data_param'].message_type = _DATAPARAMETER\n_LAYERPARAMETER.fields_by_name['dropout_param'].message_type = _DROPOUTPARAMETER\n_LAYERPARAMETER.fields_by_name['dummy_data_param'].message_type = _DUMMYDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['eltwise_param'].message_type = 
_ELTWISEPARAMETER\n_LAYERPARAMETER.fields_by_name['elu_param'].message_type = _ELUPARAMETER\n_LAYERPARAMETER.fields_by_name['embed_param'].message_type = _EMBEDPARAMETER\n_LAYERPARAMETER.fields_by_name['exp_param'].message_type = _EXPPARAMETER\n_LAYERPARAMETER.fields_by_name['flatten_param'].message_type = _FLATTENPARAMETER\n_LAYERPARAMETER.fields_by_name['hdf5_data_param'].message_type = _HDF5DATAPARAMETER\n_LAYERPARAMETER.fields_by_name['hdf5_output_param'].message_type = _HDF5OUTPUTPARAMETER\n_LAYERPARAMETER.fields_by_name['hinge_loss_param'].message_type = _HINGELOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['image_data_param'].message_type = _IMAGEDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['infogain_loss_param'].message_type = _INFOGAINLOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['inner_product_param'].message_type = _INNERPRODUCTPARAMETER\n_LAYERPARAMETER.fields_by_name['input_param'].message_type = _INPUTPARAMETER\n_LAYERPARAMETER.fields_by_name['log_param'].message_type = _LOGPARAMETER\n_LAYERPARAMETER.fields_by_name['lrn_param'].message_type = _LRNPARAMETER\n_LAYERPARAMETER.fields_by_name['memory_data_param'].message_type = _MEMORYDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['mvn_param'].message_type = _MVNPARAMETER\n_LAYERPARAMETER.fields_by_name['parameter_param'].message_type = _PARAMETERPARAMETER\n_LAYERPARAMETER.fields_by_name['pooling_param'].message_type = _POOLINGPARAMETER\n_LAYERPARAMETER.fields_by_name['power_param'].message_type = _POWERPARAMETER\n_LAYERPARAMETER.fields_by_name['prelu_param'].message_type = _PRELUPARAMETER\n_LAYERPARAMETER.fields_by_name['python_param'].message_type = _PYTHONPARAMETER\n_LAYERPARAMETER.fields_by_name['recurrent_param'].message_type = _RECURRENTPARAMETER\n_LAYERPARAMETER.fields_by_name['reduction_param'].message_type = _REDUCTIONPARAMETER\n_LAYERPARAMETER.fields_by_name['relu_param'].message_type = _RELUPARAMETER\n_LAYERPARAMETER.fields_by_name['reshape_param'].message_type = 
_RESHAPEPARAMETER\n_LAYERPARAMETER.fields_by_name['scale_param'].message_type = _SCALEPARAMETER\n_LAYERPARAMETER.fields_by_name['sigmoid_param'].message_type = _SIGMOIDPARAMETER\n_LAYERPARAMETER.fields_by_name['softmax_param'].message_type = _SOFTMAXPARAMETER\n_LAYERPARAMETER.fields_by_name['spp_param'].message_type = _SPPPARAMETER\n_LAYERPARAMETER.fields_by_name['slice_param'].message_type = _SLICEPARAMETER\n_LAYERPARAMETER.fields_by_name['swish_param'].message_type = _SWISHPARAMETER\n_LAYERPARAMETER.fields_by_name['tanh_param'].message_type = _TANHPARAMETER\n_LAYERPARAMETER.fields_by_name['threshold_param'].message_type = _THRESHOLDPARAMETER\n_LAYERPARAMETER.fields_by_name['tile_param'].message_type = _TILEPARAMETER\n_LAYERPARAMETER.fields_by_name['window_data_param'].message_type = _WINDOWDATAPARAMETER\n_LOSSPARAMETER.fields_by_name['normalization'].enum_type = _LOSSPARAMETER_NORMALIZATIONMODE\n_LOSSPARAMETER_NORMALIZATIONMODE.containing_type = _LOSSPARAMETER\n_BIASPARAMETER.fields_by_name['filler'].message_type = _FILLERPARAMETER\n_CONVOLUTIONPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_CONVOLUTIONPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_CONVOLUTIONPARAMETER.fields_by_name['engine'].enum_type = _CONVOLUTIONPARAMETER_ENGINE\n_CONVOLUTIONPARAMETER_ENGINE.containing_type = _CONVOLUTIONPARAMETER\n_DATAPARAMETER.fields_by_name['backend'].enum_type = _DATAPARAMETER_DB\n_DATAPARAMETER_DB.containing_type = _DATAPARAMETER\n_DUMMYDATAPARAMETER.fields_by_name['data_filler'].message_type = _FILLERPARAMETER\n_DUMMYDATAPARAMETER.fields_by_name['shape'].message_type = _BLOBSHAPE\n_ELTWISEPARAMETER.fields_by_name['operation'].enum_type = _ELTWISEPARAMETER_ELTWISEOP\n_ELTWISEPARAMETER_ELTWISEOP.containing_type = _ELTWISEPARAMETER\n_EMBEDPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_EMBEDPARAMETER.fields_by_name['bias_filler'].message_type = 
_FILLERPARAMETER\n_HINGELOSSPARAMETER.fields_by_name['norm'].enum_type = _HINGELOSSPARAMETER_NORM\n_HINGELOSSPARAMETER_NORM.containing_type = _HINGELOSSPARAMETER\n_INNERPRODUCTPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_INNERPRODUCTPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_INPUTPARAMETER.fields_by_name['shape'].message_type = _BLOBSHAPE\n_LRNPARAMETER.fields_by_name['norm_region'].enum_type = _LRNPARAMETER_NORMREGION\n_LRNPARAMETER.fields_by_name['engine'].enum_type = _LRNPARAMETER_ENGINE\n_LRNPARAMETER_NORMREGION.containing_type = _LRNPARAMETER\n_LRNPARAMETER_ENGINE.containing_type = _LRNPARAMETER\n_PARAMETERPARAMETER.fields_by_name['shape'].message_type = _BLOBSHAPE\n_POOLINGPARAMETER.fields_by_name['pool'].enum_type = _POOLINGPARAMETER_POOLMETHOD\n_POOLINGPARAMETER.fields_by_name['engine'].enum_type = _POOLINGPARAMETER_ENGINE\n_POOLINGPARAMETER.fields_by_name['round_mode'].enum_type = _POOLINGPARAMETER_ROUNDMODE\n_POOLINGPARAMETER_POOLMETHOD.containing_type = _POOLINGPARAMETER\n_POOLINGPARAMETER_ENGINE.containing_type = _POOLINGPARAMETER\n_POOLINGPARAMETER_ROUNDMODE.containing_type = _POOLINGPARAMETER\n_RECURRENTPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_RECURRENTPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_REDUCTIONPARAMETER.fields_by_name['operation'].enum_type = _REDUCTIONPARAMETER_REDUCTIONOP\n_REDUCTIONPARAMETER_REDUCTIONOP.containing_type = _REDUCTIONPARAMETER\n_RELUPARAMETER.fields_by_name['engine'].enum_type = _RELUPARAMETER_ENGINE\n_RELUPARAMETER_ENGINE.containing_type = _RELUPARAMETER\n_RESHAPEPARAMETER.fields_by_name['shape'].message_type = _BLOBSHAPE\n_SCALEPARAMETER.fields_by_name['filler'].message_type = _FILLERPARAMETER\n_SCALEPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_SIGMOIDPARAMETER.fields_by_name['engine'].enum_type = _SIGMOIDPARAMETER_ENGINE\n_SIGMOIDPARAMETER_ENGINE.containing_type = 
_SIGMOIDPARAMETER\n_SOFTMAXPARAMETER.fields_by_name['engine'].enum_type = _SOFTMAXPARAMETER_ENGINE\n_SOFTMAXPARAMETER_ENGINE.containing_type = _SOFTMAXPARAMETER\n_TANHPARAMETER.fields_by_name['engine'].enum_type = _TANHPARAMETER_ENGINE\n_TANHPARAMETER_ENGINE.containing_type = _TANHPARAMETER\n_SPPPARAMETER.fields_by_name['pool'].enum_type = _SPPPARAMETER_POOLMETHOD\n_SPPPARAMETER.fields_by_name['engine'].enum_type = _SPPPARAMETER_ENGINE\n_SPPPARAMETER_POOLMETHOD.containing_type = _SPPPARAMETER\n_SPPPARAMETER_ENGINE.containing_type = _SPPPARAMETER\n_V1LAYERPARAMETER.fields_by_name['include'].message_type = _NETSTATERULE\n_V1LAYERPARAMETER.fields_by_name['exclude'].message_type = _NETSTATERULE\n_V1LAYERPARAMETER.fields_by_name['type'].enum_type = _V1LAYERPARAMETER_LAYERTYPE\n_V1LAYERPARAMETER.fields_by_name['blobs'].message_type = _BLOBPROTO\n_V1LAYERPARAMETER.fields_by_name['blob_share_mode'].enum_type = _V1LAYERPARAMETER_DIMCHECKMODE\n_V1LAYERPARAMETER.fields_by_name['accuracy_param'].message_type = _ACCURACYPARAMETER\n_V1LAYERPARAMETER.fields_by_name['argmax_param'].message_type = _ARGMAXPARAMETER\n_V1LAYERPARAMETER.fields_by_name['concat_param'].message_type = _CONCATPARAMETER\n_V1LAYERPARAMETER.fields_by_name['contrastive_loss_param'].message_type = _CONTRASTIVELOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['convolution_param'].message_type = _CONVOLUTIONPARAMETER\n_V1LAYERPARAMETER.fields_by_name['data_param'].message_type = _DATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['dropout_param'].message_type = _DROPOUTPARAMETER\n_V1LAYERPARAMETER.fields_by_name['dummy_data_param'].message_type = _DUMMYDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['eltwise_param'].message_type = _ELTWISEPARAMETER\n_V1LAYERPARAMETER.fields_by_name['exp_param'].message_type = _EXPPARAMETER\n_V1LAYERPARAMETER.fields_by_name['hdf5_data_param'].message_type = _HDF5DATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['hdf5_output_param'].message_type = 
_HDF5OUTPUTPARAMETER\n_V1LAYERPARAMETER.fields_by_name['hinge_loss_param'].message_type = _HINGELOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['image_data_param'].message_type = _IMAGEDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['infogain_loss_param'].message_type = _INFOGAINLOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['inner_product_param'].message_type = _INNERPRODUCTPARAMETER\n_V1LAYERPARAMETER.fields_by_name['lrn_param'].message_type = _LRNPARAMETER\n_V1LAYERPARAMETER.fields_by_name['memory_data_param'].message_type = _MEMORYDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['mvn_param'].message_type = _MVNPARAMETER\n_V1LAYERPARAMETER.fields_by_name['pooling_param'].message_type = _POOLINGPARAMETER\n_V1LAYERPARAMETER.fields_by_name['power_param'].message_type = _POWERPARAMETER\n_V1LAYERPARAMETER.fields_by_name['relu_param'].message_type = _RELUPARAMETER\n_V1LAYERPARAMETER.fields_by_name['sigmoid_param'].message_type = _SIGMOIDPARAMETER\n_V1LAYERPARAMETER.fields_by_name['softmax_param'].message_type = _SOFTMAXPARAMETER\n_V1LAYERPARAMETER.fields_by_name['slice_param'].message_type = _SLICEPARAMETER\n_V1LAYERPARAMETER.fields_by_name['tanh_param'].message_type = _TANHPARAMETER\n_V1LAYERPARAMETER.fields_by_name['threshold_param'].message_type = _THRESHOLDPARAMETER\n_V1LAYERPARAMETER.fields_by_name['window_data_param'].message_type = _WINDOWDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['transform_param'].message_type = _TRANSFORMATIONPARAMETER\n_V1LAYERPARAMETER.fields_by_name['loss_param'].message_type = _LOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['layer'].message_type = _V0LAYERPARAMETER\n_V1LAYERPARAMETER_LAYERTYPE.containing_type = _V1LAYERPARAMETER\n_V1LAYERPARAMETER_DIMCHECKMODE.containing_type = _V1LAYERPARAMETER\n_V0LAYERPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_V0LAYERPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_V0LAYERPARAMETER.fields_by_name['pool'].enum_type = 
_V0LAYERPARAMETER_POOLMETHOD\n_V0LAYERPARAMETER.fields_by_name['blobs'].message_type = _BLOBPROTO\n_V0LAYERPARAMETER.fields_by_name['hdf5_output_param'].message_type = _HDF5OUTPUTPARAMETER\n_V0LAYERPARAMETER_POOLMETHOD.containing_type = _V0LAYERPARAMETER\n_PRELUPARAMETER.fields_by_name['filler'].message_type = _FILLERPARAMETER\nDESCRIPTOR.message_types_by_name['BlobShape'] = _BLOBSHAPE\nDESCRIPTOR.message_types_by_name['BlobProto'] = _BLOBPROTO\nDESCRIPTOR.message_types_by_name['BlobProtoVector'] = _BLOBPROTOVECTOR\nDESCRIPTOR.message_types_by_name['Datum'] = _DATUM\nDESCRIPTOR.message_types_by_name['FillerParameter'] = _FILLERPARAMETER\nDESCRIPTOR.message_types_by_name['NetParameter'] = _NETPARAMETER\nDESCRIPTOR.message_types_by_name['SolverParameter'] = _SOLVERPARAMETER\nDESCRIPTOR.message_types_by_name['SolverState'] = _SOLVERSTATE\nDESCRIPTOR.message_types_by_name['NetState'] = _NETSTATE\nDESCRIPTOR.message_types_by_name['NetStateRule'] = _NETSTATERULE\nDESCRIPTOR.message_types_by_name['ParamSpec'] = _PARAMSPEC\nDESCRIPTOR.message_types_by_name['LayerParameter'] = _LAYERPARAMETER\nDESCRIPTOR.message_types_by_name['TransformationParameter'] = _TRANSFORMATIONPARAMETER\nDESCRIPTOR.message_types_by_name['LossParameter'] = _LOSSPARAMETER\nDESCRIPTOR.message_types_by_name['AccuracyParameter'] = _ACCURACYPARAMETER\nDESCRIPTOR.message_types_by_name['ArgMaxParameter'] = _ARGMAXPARAMETER\nDESCRIPTOR.message_types_by_name['ClipParameter'] = _CLIPPARAMETER\nDESCRIPTOR.message_types_by_name['ConcatParameter'] = _CONCATPARAMETER\nDESCRIPTOR.message_types_by_name['BatchNormParameter'] = _BATCHNORMPARAMETER\nDESCRIPTOR.message_types_by_name['BiasParameter'] = _BIASPARAMETER\nDESCRIPTOR.message_types_by_name['ContrastiveLossParameter'] = _CONTRASTIVELOSSPARAMETER\nDESCRIPTOR.message_types_by_name['ConvolutionParameter'] = _CONVOLUTIONPARAMETER\nDESCRIPTOR.message_types_by_name['CropParameter'] = _CROPPARAMETER\nDESCRIPTOR.message_types_by_name['DataParameter'] = 
_DATAPARAMETER\nDESCRIPTOR.message_types_by_name['DropoutParameter'] = _DROPOUTPARAMETER\nDESCRIPTOR.message_types_by_name['DummyDataParameter'] = _DUMMYDATAPARAMETER\nDESCRIPTOR.message_types_by_name['EltwiseParameter'] = _ELTWISEPARAMETER\nDESCRIPTOR.message_types_by_name['ELUParameter'] = _ELUPARAMETER\nDESCRIPTOR.message_types_by_name['EmbedParameter'] = _EMBEDPARAMETER\nDESCRIPTOR.message_types_by_name['ExpParameter'] = _EXPPARAMETER\nDESCRIPTOR.message_types_by_name['FlattenParameter'] = _FLATTENPARAMETER\nDESCRIPTOR.message_types_by_name['HDF5DataParameter'] = _HDF5DATAPARAMETER\nDESCRIPTOR.message_types_by_name['HDF5OutputParameter'] = _HDF5OUTPUTPARAMETER\nDESCRIPTOR.message_types_by_name['HingeLossParameter'] = _HINGELOSSPARAMETER\nDESCRIPTOR.message_types_by_name['ImageDataParameter'] = _IMAGEDATAPARAMETER\nDESCRIPTOR.message_types_by_name['InfogainLossParameter'] = _INFOGAINLOSSPARAMETER\nDESCRIPTOR.message_types_by_name['InnerProductParameter'] = _INNERPRODUCTPARAMETER\nDESCRIPTOR.message_types_by_name['InputParameter'] = _INPUTPARAMETER\nDESCRIPTOR.message_types_by_name['LogParameter'] = _LOGPARAMETER\nDESCRIPTOR.message_types_by_name['LRNParameter'] = _LRNPARAMETER\nDESCRIPTOR.message_types_by_name['MemoryDataParameter'] = _MEMORYDATAPARAMETER\nDESCRIPTOR.message_types_by_name['MVNParameter'] = _MVNPARAMETER\nDESCRIPTOR.message_types_by_name['ParameterParameter'] = _PARAMETERPARAMETER\nDESCRIPTOR.message_types_by_name['PoolingParameter'] = _POOLINGPARAMETER\nDESCRIPTOR.message_types_by_name['PowerParameter'] = _POWERPARAMETER\nDESCRIPTOR.message_types_by_name['PythonParameter'] = _PYTHONPARAMETER\nDESCRIPTOR.message_types_by_name['RecurrentParameter'] = _RECURRENTPARAMETER\nDESCRIPTOR.message_types_by_name['ReductionParameter'] = _REDUCTIONPARAMETER\nDESCRIPTOR.message_types_by_name['ReLUParameter'] = _RELUPARAMETER\nDESCRIPTOR.message_types_by_name['ReshapeParameter'] = _RESHAPEPARAMETER\nDESCRIPTOR.message_types_by_name['ScaleParameter'] = 
_SCALEPARAMETER\nDESCRIPTOR.message_types_by_name['SigmoidParameter'] = _SIGMOIDPARAMETER\nDESCRIPTOR.message_types_by_name['SliceParameter'] = _SLICEPARAMETER\nDESCRIPTOR.message_types_by_name['SoftmaxParameter'] = _SOFTMAXPARAMETER\nDESCRIPTOR.message_types_by_name['SwishParameter'] = _SWISHPARAMETER\nDESCRIPTOR.message_types_by_name['TanHParameter'] = _TANHPARAMETER\nDESCRIPTOR.message_types_by_name['TileParameter'] = _TILEPARAMETER\nDESCRIPTOR.message_types_by_name['ThresholdParameter'] = _THRESHOLDPARAMETER\nDESCRIPTOR.message_types_by_name['WindowDataParameter'] = _WINDOWDATAPARAMETER\nDESCRIPTOR.message_types_by_name['SPPParameter'] = _SPPPARAMETER\nDESCRIPTOR.message_types_by_name['V1LayerParameter'] = _V1LAYERPARAMETER\nDESCRIPTOR.message_types_by_name['V0LayerParameter'] = _V0LAYERPARAMETER\nDESCRIPTOR.message_types_by_name['PReLUParameter'] = _PRELUPARAMETER\nDESCRIPTOR.enum_types_by_name['Phase'] = _PHASE\n_sym_db.RegisterFileDescriptor(DESCRIPTOR)\n\nBlobShape = _reflection.GeneratedProtocolMessageType('BlobShape', (_message.Message,), {\n  'DESCRIPTOR' : _BLOBSHAPE,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BlobShape)\n  })\n_sym_db.RegisterMessage(BlobShape)\n\nBlobProto = _reflection.GeneratedProtocolMessageType('BlobProto', (_message.Message,), {\n  'DESCRIPTOR' : _BLOBPROTO,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BlobProto)\n  })\n_sym_db.RegisterMessage(BlobProto)\n\nBlobProtoVector = _reflection.GeneratedProtocolMessageType('BlobProtoVector', (_message.Message,), {\n  'DESCRIPTOR' : _BLOBPROTOVECTOR,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BlobProtoVector)\n  })\n_sym_db.RegisterMessage(BlobProtoVector)\n\nDatum = _reflection.GeneratedProtocolMessageType('Datum', (_message.Message,), {\n  'DESCRIPTOR' : _DATUM,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.Datum)\n  
})\n_sym_db.RegisterMessage(Datum)\n\nFillerParameter = _reflection.GeneratedProtocolMessageType('FillerParameter', (_message.Message,), {\n  'DESCRIPTOR' : _FILLERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.FillerParameter)\n  })\n_sym_db.RegisterMessage(FillerParameter)\n\nNetParameter = _reflection.GeneratedProtocolMessageType('NetParameter', (_message.Message,), {\n  'DESCRIPTOR' : _NETPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.NetParameter)\n  })\n_sym_db.RegisterMessage(NetParameter)\n\nSolverParameter = _reflection.GeneratedProtocolMessageType('SolverParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SOLVERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SolverParameter)\n  })\n_sym_db.RegisterMessage(SolverParameter)\n\nSolverState = _reflection.GeneratedProtocolMessageType('SolverState', (_message.Message,), {\n  'DESCRIPTOR' : _SOLVERSTATE,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SolverState)\n  })\n_sym_db.RegisterMessage(SolverState)\n\nNetState = _reflection.GeneratedProtocolMessageType('NetState', (_message.Message,), {\n  'DESCRIPTOR' : _NETSTATE,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.NetState)\n  })\n_sym_db.RegisterMessage(NetState)\n\nNetStateRule = _reflection.GeneratedProtocolMessageType('NetStateRule', (_message.Message,), {\n  'DESCRIPTOR' : _NETSTATERULE,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.NetStateRule)\n  })\n_sym_db.RegisterMessage(NetStateRule)\n\nParamSpec = _reflection.GeneratedProtocolMessageType('ParamSpec', (_message.Message,), {\n  'DESCRIPTOR' : _PARAMSPEC,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ParamSpec)\n  })\n_sym_db.RegisterMessage(ParamSpec)\n\nLayerParameter = _reflection.GeneratedProtocolMessageType('LayerParameter', 
(_message.Message,), {\n  'DESCRIPTOR' : _LAYERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LayerParameter)\n  })\n_sym_db.RegisterMessage(LayerParameter)\n\nTransformationParameter = _reflection.GeneratedProtocolMessageType('TransformationParameter', (_message.Message,), {\n  'DESCRIPTOR' : _TRANSFORMATIONPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.TransformationParameter)\n  })\n_sym_db.RegisterMessage(TransformationParameter)\n\nLossParameter = _reflection.GeneratedProtocolMessageType('LossParameter', (_message.Message,), {\n  'DESCRIPTOR' : _LOSSPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LossParameter)\n  })\n_sym_db.RegisterMessage(LossParameter)\n\nAccuracyParameter = _reflection.GeneratedProtocolMessageType('AccuracyParameter', (_message.Message,), {\n  'DESCRIPTOR' : _ACCURACYPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.AccuracyParameter)\n  })\n_sym_db.RegisterMessage(AccuracyParameter)\n\nArgMaxParameter = _reflection.GeneratedProtocolMessageType('ArgMaxParameter', (_message.Message,), {\n  'DESCRIPTOR' : _ARGMAXPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ArgMaxParameter)\n  })\n_sym_db.RegisterMessage(ArgMaxParameter)\n\nClipParameter = _reflection.GeneratedProtocolMessageType('ClipParameter', (_message.Message,), {\n  'DESCRIPTOR' : _CLIPPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ClipParameter)\n  })\n_sym_db.RegisterMessage(ClipParameter)\n\nConcatParameter = _reflection.GeneratedProtocolMessageType('ConcatParameter', (_message.Message,), {\n  'DESCRIPTOR' : _CONCATPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ConcatParameter)\n  })\n_sym_db.RegisterMessage(ConcatParameter)\n\nBatchNormParameter = 
_reflection.GeneratedProtocolMessageType('BatchNormParameter', (_message.Message,), {\n  'DESCRIPTOR' : _BATCHNORMPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BatchNormParameter)\n  })\n_sym_db.RegisterMessage(BatchNormParameter)\n\nBiasParameter = _reflection.GeneratedProtocolMessageType('BiasParameter', (_message.Message,), {\n  'DESCRIPTOR' : _BIASPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BiasParameter)\n  })\n_sym_db.RegisterMessage(BiasParameter)\n\nContrastiveLossParameter = _reflection.GeneratedProtocolMessageType('ContrastiveLossParameter', (_message.Message,), {\n  'DESCRIPTOR' : _CONTRASTIVELOSSPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ContrastiveLossParameter)\n  })\n_sym_db.RegisterMessage(ContrastiveLossParameter)\n\nConvolutionParameter = _reflection.GeneratedProtocolMessageType('ConvolutionParameter', (_message.Message,), {\n  'DESCRIPTOR' : _CONVOLUTIONPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ConvolutionParameter)\n  })\n_sym_db.RegisterMessage(ConvolutionParameter)\n\nCropParameter = _reflection.GeneratedProtocolMessageType('CropParameter', (_message.Message,), {\n  'DESCRIPTOR' : _CROPPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.CropParameter)\n  })\n_sym_db.RegisterMessage(CropParameter)\n\nDataParameter = _reflection.GeneratedProtocolMessageType('DataParameter', (_message.Message,), {\n  'DESCRIPTOR' : _DATAPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.DataParameter)\n  })\n_sym_db.RegisterMessage(DataParameter)\n\nDropoutParameter = _reflection.GeneratedProtocolMessageType('DropoutParameter', (_message.Message,), {\n  'DESCRIPTOR' : _DROPOUTPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.DropoutParameter)\n  
})\n_sym_db.RegisterMessage(DropoutParameter)\n\nDummyDataParameter = _reflection.GeneratedProtocolMessageType('DummyDataParameter', (_message.Message,), {\n  'DESCRIPTOR' : _DUMMYDATAPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.DummyDataParameter)\n  })\n_sym_db.RegisterMessage(DummyDataParameter)\n\nEltwiseParameter = _reflection.GeneratedProtocolMessageType('EltwiseParameter', (_message.Message,), {\n  'DESCRIPTOR' : _ELTWISEPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.EltwiseParameter)\n  })\n_sym_db.RegisterMessage(EltwiseParameter)\n\nELUParameter = _reflection.GeneratedProtocolMessageType('ELUParameter', (_message.Message,), {\n  'DESCRIPTOR' : _ELUPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ELUParameter)\n  })\n_sym_db.RegisterMessage(ELUParameter)\n\nEmbedParameter = _reflection.GeneratedProtocolMessageType('EmbedParameter', (_message.Message,), {\n  'DESCRIPTOR' : _EMBEDPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.EmbedParameter)\n  })\n_sym_db.RegisterMessage(EmbedParameter)\n\nExpParameter = _reflection.GeneratedProtocolMessageType('ExpParameter', (_message.Message,), {\n  'DESCRIPTOR' : _EXPPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ExpParameter)\n  })\n_sym_db.RegisterMessage(ExpParameter)\n\nFlattenParameter = _reflection.GeneratedProtocolMessageType('FlattenParameter', (_message.Message,), {\n  'DESCRIPTOR' : _FLATTENPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.FlattenParameter)\n  })\n_sym_db.RegisterMessage(FlattenParameter)\n\nHDF5DataParameter = _reflection.GeneratedProtocolMessageType('HDF5DataParameter', (_message.Message,), {\n  'DESCRIPTOR' : _HDF5DATAPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.HDF5DataParameter)\n  
})\n_sym_db.RegisterMessage(HDF5DataParameter)\n\nHDF5OutputParameter = _reflection.GeneratedProtocolMessageType('HDF5OutputParameter', (_message.Message,), {\n  'DESCRIPTOR' : _HDF5OUTPUTPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.HDF5OutputParameter)\n  })\n_sym_db.RegisterMessage(HDF5OutputParameter)\n\nHingeLossParameter = _reflection.GeneratedProtocolMessageType('HingeLossParameter', (_message.Message,), {\n  'DESCRIPTOR' : _HINGELOSSPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.HingeLossParameter)\n  })\n_sym_db.RegisterMessage(HingeLossParameter)\n\nImageDataParameter = _reflection.GeneratedProtocolMessageType('ImageDataParameter', (_message.Message,), {\n  'DESCRIPTOR' : _IMAGEDATAPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ImageDataParameter)\n  })\n_sym_db.RegisterMessage(ImageDataParameter)\n\nInfogainLossParameter = _reflection.GeneratedProtocolMessageType('InfogainLossParameter', (_message.Message,), {\n  'DESCRIPTOR' : _INFOGAINLOSSPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.InfogainLossParameter)\n  })\n_sym_db.RegisterMessage(InfogainLossParameter)\n\nInnerProductParameter = _reflection.GeneratedProtocolMessageType('InnerProductParameter', (_message.Message,), {\n  'DESCRIPTOR' : _INNERPRODUCTPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.InnerProductParameter)\n  })\n_sym_db.RegisterMessage(InnerProductParameter)\n\nInputParameter = _reflection.GeneratedProtocolMessageType('InputParameter', (_message.Message,), {\n  'DESCRIPTOR' : _INPUTPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.InputParameter)\n  })\n_sym_db.RegisterMessage(InputParameter)\n\nLogParameter = _reflection.GeneratedProtocolMessageType('LogParameter', (_message.Message,), {\n  'DESCRIPTOR' : _LOGPARAMETER,\n  '__module__' : 
'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LogParameter)\n  })\n_sym_db.RegisterMessage(LogParameter)\n\nLRNParameter = _reflection.GeneratedProtocolMessageType('LRNParameter', (_message.Message,), {\n  'DESCRIPTOR' : _LRNPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LRNParameter)\n  })\n_sym_db.RegisterMessage(LRNParameter)\n\nMemoryDataParameter = _reflection.GeneratedProtocolMessageType('MemoryDataParameter', (_message.Message,), {\n  'DESCRIPTOR' : _MEMORYDATAPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.MemoryDataParameter)\n  })\n_sym_db.RegisterMessage(MemoryDataParameter)\n\nMVNParameter = _reflection.GeneratedProtocolMessageType('MVNParameter', (_message.Message,), {\n  'DESCRIPTOR' : _MVNPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.MVNParameter)\n  })\n_sym_db.RegisterMessage(MVNParameter)\n\nParameterParameter = _reflection.GeneratedProtocolMessageType('ParameterParameter', (_message.Message,), {\n  'DESCRIPTOR' : _PARAMETERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ParameterParameter)\n  })\n_sym_db.RegisterMessage(ParameterParameter)\n\nPoolingParameter = _reflection.GeneratedProtocolMessageType('PoolingParameter', (_message.Message,), {\n  'DESCRIPTOR' : _POOLINGPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PoolingParameter)\n  })\n_sym_db.RegisterMessage(PoolingParameter)\n\nPowerParameter = _reflection.GeneratedProtocolMessageType('PowerParameter', (_message.Message,), {\n  'DESCRIPTOR' : _POWERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PowerParameter)\n  })\n_sym_db.RegisterMessage(PowerParameter)\n\nPythonParameter = _reflection.GeneratedProtocolMessageType('PythonParameter', (_message.Message,), {\n  'DESCRIPTOR' : _PYTHONPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # 
@@protoc_insertion_point(class_scope:caffe.PythonParameter)\n  })\n_sym_db.RegisterMessage(PythonParameter)\n\nRecurrentParameter = _reflection.GeneratedProtocolMessageType('RecurrentParameter', (_message.Message,), {\n  'DESCRIPTOR' : _RECURRENTPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.RecurrentParameter)\n  })\n_sym_db.RegisterMessage(RecurrentParameter)\n\nReductionParameter = _reflection.GeneratedProtocolMessageType('ReductionParameter', (_message.Message,), {\n  'DESCRIPTOR' : _REDUCTIONPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ReductionParameter)\n  })\n_sym_db.RegisterMessage(ReductionParameter)\n\nReLUParameter = _reflection.GeneratedProtocolMessageType('ReLUParameter', (_message.Message,), {\n  'DESCRIPTOR' : _RELUPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ReLUParameter)\n  })\n_sym_db.RegisterMessage(ReLUParameter)\n\nReshapeParameter = _reflection.GeneratedProtocolMessageType('ReshapeParameter', (_message.Message,), {\n  'DESCRIPTOR' : _RESHAPEPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ReshapeParameter)\n  })\n_sym_db.RegisterMessage(ReshapeParameter)\n\nScaleParameter = _reflection.GeneratedProtocolMessageType('ScaleParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SCALEPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ScaleParameter)\n  })\n_sym_db.RegisterMessage(ScaleParameter)\n\nSigmoidParameter = _reflection.GeneratedProtocolMessageType('SigmoidParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SIGMOIDPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SigmoidParameter)\n  })\n_sym_db.RegisterMessage(SigmoidParameter)\n\nSliceParameter = _reflection.GeneratedProtocolMessageType('SliceParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SLICEPARAMETER,\n  '__module__' : 'caffe_pb2'\n 
 # @@protoc_insertion_point(class_scope:caffe.SliceParameter)\n  })\n_sym_db.RegisterMessage(SliceParameter)\n\nSoftmaxParameter = _reflection.GeneratedProtocolMessageType('SoftmaxParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SOFTMAXPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SoftmaxParameter)\n  })\n_sym_db.RegisterMessage(SoftmaxParameter)\n\nSwishParameter = _reflection.GeneratedProtocolMessageType('SwishParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SWISHPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SwishParameter)\n  })\n_sym_db.RegisterMessage(SwishParameter)\n\nTanHParameter = _reflection.GeneratedProtocolMessageType('TanHParameter', (_message.Message,), {\n  'DESCRIPTOR' : _TANHPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.TanHParameter)\n  })\n_sym_db.RegisterMessage(TanHParameter)\n\nTileParameter = _reflection.GeneratedProtocolMessageType('TileParameter', (_message.Message,), {\n  'DESCRIPTOR' : _TILEPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.TileParameter)\n  })\n_sym_db.RegisterMessage(TileParameter)\n\nThresholdParameter = _reflection.GeneratedProtocolMessageType('ThresholdParameter', (_message.Message,), {\n  'DESCRIPTOR' : _THRESHOLDPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ThresholdParameter)\n  })\n_sym_db.RegisterMessage(ThresholdParameter)\n\nWindowDataParameter = _reflection.GeneratedProtocolMessageType('WindowDataParameter', (_message.Message,), {\n  'DESCRIPTOR' : _WINDOWDATAPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.WindowDataParameter)\n  })\n_sym_db.RegisterMessage(WindowDataParameter)\n\nSPPParameter = _reflection.GeneratedProtocolMessageType('SPPParameter', (_message.Message,), {\n  'DESCRIPTOR' : _SPPPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # 
@@protoc_insertion_point(class_scope:caffe.SPPParameter)\n  })\n_sym_db.RegisterMessage(SPPParameter)\n\nV1LayerParameter = _reflection.GeneratedProtocolMessageType('V1LayerParameter', (_message.Message,), {\n  'DESCRIPTOR' : _V1LAYERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.V1LayerParameter)\n  })\n_sym_db.RegisterMessage(V1LayerParameter)\n\nV0LayerParameter = _reflection.GeneratedProtocolMessageType('V0LayerParameter', (_message.Message,), {\n  'DESCRIPTOR' : _V0LAYERPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.V0LayerParameter)\n  })\n_sym_db.RegisterMessage(V0LayerParameter)\n\nPReLUParameter = _reflection.GeneratedProtocolMessageType('PReLUParameter', (_message.Message,), {\n  'DESCRIPTOR' : _PRELUPARAMETER,\n  '__module__' : 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PReLUParameter)\n  })\n_sym_db.RegisterMessage(PReLUParameter)\n\n\n_BLOBSHAPE.fields_by_name['dim']._options = None\n_BLOBPROTO.fields_by_name['data']._options = None\n_BLOBPROTO.fields_by_name['diff']._options = None\n_BLOBPROTO.fields_by_name['double_data']._options = None\n_BLOBPROTO.fields_by_name['double_diff']._options = None\n# @@protoc_insertion_point(module_scope)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/reader.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport onnx\nfrom onnx.shape_inference import infer_shapes\nimport caffe2.python.onnx.frontend\nfrom caffe2.proto import caffe2_pb2\nfrom ..onnx.reader import onnx_model_to_graph\nfrom ...utils.types import as_str\nfrom google.protobuf import text_format\nfrom collections.abc import Sequence\nfrom caffe2.python.workspace import GlobalInit\nimport json\nimport sys\nimport os\n\ntry:\n    import caffe\nexcept ImportError:\n    from . 
import caffe\n    sys.modules['caffe'] = caffe\n\nfrom caffe2.python.caffe_translator import TranslateModel, TranslatorRegistry, ConvertTensorProtosToInitNet, \\\n    AddArgument, BaseTranslate\n\n\n@TranslatorRegistry.Register(\"ArgMax\")\ndef TranslateArgmax(layer, pretrained_blobs, is_test, **kwargs):\n    param = layer.argmax_param\n    if param.top_k != 1:\n        raise ValueError(\"Unsupported attribute value 'top_k = {}' in ArgMax layer\".format(param.top_k))\n    if param.out_max_val:\n        raise ValueError(\"Conversion of ArgMax layer with 'out_max_val = True' is not supported\")\n\n    if param.HasField(\"axis\"):\n        caffe_op = BaseTranslate(layer, \"ArgMax\")\n        AddArgument(caffe_op, \"keepdims\", True)\n        AddArgument(caffe_op, \"axis\", param.axis)\n        return caffe_op, []\n    else:\n        flatten_op = caffe2_pb2.OperatorDef()\n        flatten_op.type = \"Flatten\"\n        flatten_op.input.extend(layer.bottom)\n        flatten_op.output.append(layer.bottom[0] + \"_flattened\")\n\n        argmax_op = caffe2_pb2.OperatorDef()\n        argmax_op.type = \"ArgMax\"\n        argmax_op.input.append(layer.bottom[0] + \"_flattened\")\n        argmax_op.output.extend(layer.top)\n        AddArgument(argmax_op, \"keepdims\", True)\n        AddArgument(argmax_op, \"axis\", 1)\n\n        return [flatten_op, argmax_op], []\n\n\n@TranslatorRegistry.Register(\"ReLU\")\ndef TranslateRelu(layer, pretrained_blobs, is_test, **kwargs):\n    param = layer.relu_param\n    if param.HasField(\"negative_slope\") and param.negative_slope:\n        relu_op = BaseTranslate(layer, \"LeakyRelu\")\n        AddArgument(relu_op, \"alpha\", param.negative_slope)\n        return relu_op, []\n    else:\n        return BaseTranslate(layer, \"Relu\"), []\n\n\ndef _HookedTranslateLayer(layer, pretrained_blobs, is_test, **kwargs):\n    _pre_translate(layer, pretrained_blobs)\n    ops, params = _TranslateLayer(layer, pretrained_blobs, is_test, **kwargs)\n    
_post_translate(layer, pretrained_blobs, ops, params)\n    return ops, params\n\n\n_TranslateLayer = TranslatorRegistry.TranslateLayer\nTranslatorRegistry.TranslateLayer = _HookedTranslateLayer\n\n\nGlobalInit(['caffe2', '--caffe2_log_level=2'])\n\n\n_UnrecognizedAttribs = {'ws_nbytes_limit'}\n\n\ndef _caffe_to_caffe2(prototxt, caffemodel):\n    if prototxt.layer[0].type == 'Input':\n        input_names = list(prototxt.layer[0].top)\n        input_shapes = list(item.dim for item in prototxt.layer[0].input_param.shape)\n    else:\n        input_names = list(item for item in prototxt.input)\n        input_shapes = list(item.dim for item in prototxt.input_shape)\n        if len(input_shapes) == 0:\n            input_dims = prototxt.input_dim\n            assert len(input_dims) == 4 * len(input_names)\n            input_shapes = [None] * len(input_names)\n            for i in range(len(input_names)):\n                input_shapes[i] = input_dims[4 * i: 4 * (i+1)]\n\n    predict_net, params = TranslateModel(prototxt, caffemodel, is_test=True, remove_legacy_pad=False, input_dims=[])\n\n    # Assume there is one input and one output\n    external_input = predict_net.op[0].input[0]\n    external_output = predict_net.op[-1].output[0]\n\n    predict_net.external_input.extend([external_input])\n    predict_net.external_input.extend([param.name for param in params.protos])\n    predict_net.external_output.extend([external_output])\n    init_net = ConvertTensorProtosToInitNet(params, external_input)\n\n    value_info = {name: (onnx.TensorProto.FLOAT, shape) for name, shape in zip(input_names, input_shapes)}\n\n    return predict_net, init_net, value_info\n\n\n_DeconvGroups = {}\n\n\ndef _pre_translate(layer, blobs):\n    if layer.type == 'Convolution':\n        _fix_conv_pool_param(layer.convolution_param)\n    elif layer.type == 'Deconvolution':\n        _fix_conv_pool_param(layer.convolution_param)\n        if layer.convolution_param.group != 1:\n            
_DeconvGroups[layer.name] = layer.convolution_param.group\n            layer.convolution_param.group = 0   # to trick the conversion check\n    elif layer.type == 'Pooling':\n        _fix_conv_pool_param(layer.pooling_param)\n    elif layer.type == 'Eltwise':\n        _fix_eltwise_param(layer.eltwise_param)\n    elif layer.type == 'BatchNorm':\n        _fix_batch_norm_param(layer.batch_norm_param, blobs)\n\n\ndef _post_translate(layer, blobs, ops, params):\n    if layer.type == 'Deconvolution' and layer.convolution_param.group == 0:\n        AddArgument(ops[0], 'group', _DeconvGroups[layer.name])\n\n\ndef _fix_conv_pool_param(param):\n    if isinstance(param.kernel_size, Sequence) and len(param.kernel_size) == 2:\n        param.kernel_h = param.kernel_size[0]\n        param.kernel_w = param.kernel_size[1]\n        del param.kernel_size[1]\n        del param.kernel_size[0]\n    if isinstance(param.stride, Sequence) and len(param.stride) == 2:\n        param.stride_h = param.stride[0]\n        param.stride_w = param.stride[1]\n        del param.stride[1]\n        del param.stride[0]\n    if isinstance(param.pad, Sequence) and len(param.pad) == 2:\n        param.pad_h = param.pad[0]\n        param.pad_w = param.pad[1]\n        del param.pad[1]\n        del param.pad[0]\n\n\ndef _fix_eltwise_param(param):\n    if len(param.coeff) > 0 and all(c == 1 for c in param.coeff):\n        for i in reversed(range(len(param.coeff))):\n            del param.coeff[i]\n\n\ndef _fix_batch_norm_param(param, blobs):\n    if len(blobs) > 2:\n        if blobs[2].data[0] == 0:\n            blobs[2].data[0] = 1\n\n\ndef _caffe2_net_to_onnx_model(predict_net, init_net, value_info):\n    graph = caffe2.python.onnx.frontend.caffe2_net_to_onnx_graph(predict_net, init_net, value_info)\n    if not graph.name:\n        graph.name = 'Graph'\n\n    opset_id = onnx.OperatorSetIdProto()\n    opset_id.domain = ''\n    opset_id.version = 11\n    model = onnx.helper.make_model(graph, 
opset_imports=[opset_id])\n    onnx.checker.check_model(model)\n    return model\n\n\ndef _remove_unrecognized_attributes(net_def):\n    for op_def in net_def.op:\n        for idx in reversed(range(len(op_def.arg))):\n            name = as_str(op_def.arg[idx].name)\n            if name in _UnrecognizedAttribs:\n                del op_def.arg[idx]\n\n\ndef load_caffe_model(path):\n    base, ext = os.path.splitext(path)\n    assert ext == '.prototxt'\n\n    with open(path) as file:\n        prototxt = caffe.proto.caffe_pb2.NetParameter()\n        text_format.Merge(file.read(), prototxt)\n    with open(base + '.caffemodel', 'rb') as file:\n        caffemodel = caffe.proto.caffe_pb2.NetParameter()\n        caffemodel.ParseFromString(file.read())\n\n    return prototxt, caffemodel\n\n\ndef load_caffe_model_as_onnx(path):\n    prototxt, caffemodel = load_caffe_model(path)\n    predict_net, init_net, value_info = _caffe_to_caffe2(prototxt, caffemodel)\n    _remove_unrecognized_attributes(predict_net)\n    return _caffe2_net_to_onnx_model(predict_net, init_net, value_info)\n\n\ndef load_caffe2_model(folder):\n    predict_net = caffe2_pb2.NetDef()\n    with open(os.path.join(folder, 'predict_net.pb'), 'rb') as file:\n        predict_net.ParseFromString(file.read())\n\n    init_net = caffe2_pb2.NetDef()\n    with open(os.path.join(folder, 'init_net.pb'), 'rb') as file:\n        init_net.ParseFromString(file.read())\n\n    with open(os.path.join(folder, 'value_info.json')) as file:\n        value_info = json.load(file)\n\n    return predict_net, init_net, value_info\n\n\ndef load_caffe2_model_as_onnx(folder):\n    predict_net, init_net, value_info = load_caffe2_model(folder)\n    _remove_unrecognized_attributes(predict_net)\n    return _caffe2_net_to_onnx_model(predict_net, init_net, value_info)\n\n\nclass Reader:\n\n    def __init__(self, legacy=False):\n        self._legacy = legacy\n\n    def __call__(self, path):\n        onnx_model = load_caffe_model_as_onnx(path) if 
self._legacy else load_caffe2_model_as_onnx(path)\n        onnx.checker.check_model(onnx_model)\n        onnx_model = infer_shapes(onnx_model)\n        return onnx_model_to_graph(onnx_model)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/caffe2/writer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom caffe2.python.onnx.backend import Caffe2Backend\nfrom ..onnx.writer import build_model, build_dtype\nfrom ..onnx.reader import _get_value_info\nfrom ...utils.types import as_str\nimport json\nimport os\n\n\ndef save_caffe2_model(folder, init_net, predict_net, value_info):\n    with open(os.path.join(folder, 'init_net.pb'), 'wb') as file:\n        file.write(init_net.SerializeToString())\n\n    with open(os.path.join(folder, 'predict_net.pb'), 'wb') as file:\n        file.write(predict_net.SerializeToString())\n\n    with open(os.path.join(folder, 'value_info.json'), 'w') as file:\n        json.dump(value_info, file)\n\n\ndef get_value_info(onnx_model):\n    initializer_names = {as_str(info.name) for info in onnx_model.graph.initializer}\n\n    value_info = {}\n    for info in onnx_model.graph.input:\n        name, shape, dtype = _get_value_info(info)\n        if name not in initializer_names:\n            value_info[name] = (build_dtype(dtype), shape)\n\n    return value_info\n\n\nclass Writer:\n\n    def __init__(self):\n        pass\n\n    def __call__(self, graph, folder):\n        onnx_model = build_model(graph, ir_version=6, opset_version=9)\n        if not onnx_model.graph.name:\n            onnx_model.graph.name = 'Graph'\n\n        init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)\n        value_info = 
get_value_info(onnx_model)\n\n        if not os.path.exists(folder):\n            os.mkdir(folder)\n\n        save_caffe2_model(folder, init_net, predict_net, value_info)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/nnef/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .reader import Reader\nfrom .writer import Writer\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/nnef/helpers.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport tarfile\n\n\ndef tgz_compress(dir_path, file_path, compression_level=0):\n    target_directory = os.path.dirname(file_path)\n    if target_directory and not os.path.exists(target_directory):\n        os.makedirs(target_directory)\n\n    with tarfile.open(file_path, 'w:gz', compresslevel=compression_level) as tar:\n        for file_ in os.listdir(dir_path):\n            tar.add(dir_path + '/' + file_, file_)\n\n\ndef tgz_extract(file_path, dir_path):\n    if dir_path and not os.path.exists(dir_path):\n        os.makedirs(dir_path)\n\n    with tarfile.open(file_path, 'r:gz') as tar:\n        tar.extractall(dir_path)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/nnef/reader.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport shutil\nimport tempfile\nfrom collections import OrderedDict\n\nimport nnef\nimport numpy as np\nimport six\n\nfrom ...model import *\nfrom ...utils import types\nfrom .helpers import tgz_extract\n\n\n_DtypeToNumpy = {\n    'scalar': np.float32,\n    'integer': np.int64,\n    'logical': np.bool_,\n}\n\n\ndef _recursive_itemize(arg):\n    if type(arg) is tuple or type(arg) is list:\n        for item in arg:\n            yield from _recursive_itemize(item)\n    elif type(arg) is dict or type(arg) is OrderedDict:\n        for item in six.itervalues(arg):\n            yield from _recursive_itemize(item)\n    else:\n        yield arg\n\n\ndef _make_constant_tensor(graph, value):\n    value = types.to_numpy(value)\n    return Tensor(graph=graph, shape=(), dtype=value.dtype.type, data=value)\n\n\ndef _make_tensor(graph, nnef_tensor):\n    dtype = nnef_tensor.data.dtype.type if isinstance(nnef_tensor.data, np.ndarray) else _DtypeToNumpy[nnef_tensor.dtype]\n    return Tensor(graph=graph, name=nnef_tensor.name, shape=tuple(nnef_tensor.shape) if nnef_tensor.shape is not None else None,\n                  dtype=dtype, data=nnef_tensor.data, quant=nnef_tensor.quantization)\n\n\ndef _build_graph(nnef_graph):\n    graph = Graph(name=nnef_graph.name)\n\n    tensor_by_name = {}\n    
for nnef_op in nnef_graph.operations:\n        inputs = (tensor_by_name[item] if isinstance(item, nnef.Identifier) else _make_constant_tensor(graph, item)\n                  for item in _recursive_itemize(nnef_op.inputs))\n        inputs = list(inputs) if any(isinstance(item, list) for item in six.itervalues(nnef_op.inputs)) else tuple(inputs)\n\n        outputs = (_make_tensor(graph, nnef_graph.tensors[str(item)])\n                   for item in _recursive_itemize(nnef_op.outputs))\n        outputs = list(outputs) if any(isinstance(item, list) for item in six.itervalues(nnef_op.outputs)) else tuple(outputs)\n\n        for tensor in outputs:\n            tensor_by_name[str(tensor.name)] = tensor\n\n        attribs = dict(nnef_op.attribs)\n        if nnef_op.dtype is not None:\n            attribs['dtype'] = outputs[0].dtype if nnef_op.name == 'constant' or nnef_op.name == 'variable' else \\\n                _DtypeToNumpy[nnef_op.dtype]\n\n        _substitute_empty_array(nnef_op.name, 'stride', attribs, inputs)\n        _substitute_empty_array(nnef_op.name, 'dilation', attribs, inputs)\n\n        custom = nnef_op.name not in nnef.StandardOperations\n\n        Operation(graph=graph, type=nnef_op.name, attribs=attribs, inputs=inputs, outputs=outputs, custom=custom)\n\n    graph.inputs = [tensor_by_name[str(item)] for item in nnef_graph.inputs]\n    graph.outputs = [tensor_by_name[str(item)] for item in nnef_graph.outputs]\n\n    return graph\n\n\ndef _substitute_empty_array(op, key, attribs, inputs):\n    value = attribs.get(key)\n    if value is not None and len(value) == 0:\n        rank = None\n        if op == 'slice':\n            rank = len(attribs['axes'])\n        elif len(inputs) > 0 and inputs[0].rank is not None:\n            rank = inputs[0].rank - 2 if op.endswith('conv') else inputs[0].rank\n\n        if rank is not None:\n            attribs[key] = [1] * rank\n\n\nclass Reader(object):\n\n    def __init__(self, stdlib=None, decomposed=None, 
custom_shapes=None, infer_shapes=True, load_variables=True):\n        self._stdlib = stdlib\n        self._decomposed = decomposed\n        self._custom_shapes = custom_shapes\n        self._infer_shapes = infer_shapes\n        self._load_variables = load_variables\n\n    def __call__(self, path, input_shapes=None):\n        filename, extension = os.path.splitext(path)\n        compressed = extension in ['.tgz', '.gz'] and not os.path.isdir(path)\n\n        folder = None\n        try:\n            if compressed:\n                folder = tempfile.mkdtemp(prefix=\"nnef_\")\n                tgz_extract(path, folder)\n                path = folder\n\n            if not os.path.isdir(path):\n                raise IOError(\"NNEF model must be a (compressed) folder, but an uncompressed file was provided\")\n\n            nnef_graph = nnef.load_graph(path, stdlib=self._stdlib, lowered=self._decomposed, load_variables=self._load_variables)\n            if self._infer_shapes:\n                nnef.infer_shapes(nnef_graph, external_shapes=input_shapes or {}, custom_shapes=self._custom_shapes or {})\n\n            return _build_graph(nnef_graph)\n        finally:\n            if folder is not None:\n                shutil.rmtree(folder)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/nnef/writer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nimport nnef\nimport numpy as np\nimport tempfile\nimport shutil\nimport six\nimport os\nfrom .helpers import tgz_compress\nfrom ...model import Tensor\nfrom ...utils.types import as_str, from_numpy\n\n\n_DtypeFromNumpy = {\n    np.float16: 'scalar',\n    np.float32: 'scalar',\n    np.float64: 'scalar',\n    np.int8: 'integer',\n    np.uint8: 'integer',\n    np.int16: 'integer',\n    np.uint16: 'integer',\n    np.int32: 'integer',\n    np.uint32: 'integer',\n    np.int64: 'integer',\n    np.uint64: 'integer',\n    np.bool_: 'logical',\n}\n\n\n_DtypeFromPyType = {\n    str: 'string',\n    float: 'scalar',\n    int: 'integer',\n    bool: 'logical',\n    None: 'dtype',\n}\n\n\ndef _nnef_dtype(dtype):\n    return _DtypeFromNumpy[dtype.type if isinstance(dtype, np.dtype) else dtype] if dtype is not None else None\n\n\ndef _print(graph, file, extensions, fragments, version_custom_ops, annotate_shapes):\n    assert graph.is_sorted(), \"graph must be topologically sorted\"\n    assert all(tensor.name is not None or (tensor.producer is None and tensor.data is not None)\n               for tensor in graph.tensors), \\\n        \"all tensors must have names\"\n    assert all(all(s is not None for s in op.attribs['shape'])\n               for op in graph.operations if op.type == 'external'), \\\n   
     \"external ops must not contain undefined shapes\"\n\n    print(nnef.format_version((1, 0)), file=file)\n    if len(extensions):\n        print(file=file)\n        print(nnef.format_extensions(extensions), file=file)\n    if fragments:\n        print(file=file)\n        print(fragments, file=file)\n    print(file=file)\n\n    graph_name = as_str(graph.name) if graph.name is not None else \"G\"\n    graph_inputs = [as_str(item.name) for item in graph.inputs]\n    graph_outputs = [as_str(item.name) for item in graph.outputs]\n\n    print(\"graph {}({}) -> ({})\".format(graph_name, ', '.join(graph_inputs), ', '.join(graph_outputs)), file=file)\n    print(\"{\", file=file)\n\n    versions = {}\n    for op in graph.operations:\n        assert all(isinstance(item, Tensor) for item in op.outputs)\n\n        inputs = ((from_numpy(item.data) if item.producer is None else nnef.Identifier(as_str(item.name)))\n                  if isinstance(item, Tensor) else item for item in op.inputs)\n        inputs = tuple(inputs) if isinstance(op.inputs, tuple) else (list(inputs),)\n\n        outputs = (nnef.Identifier(as_str(item.name)) for item in op.outputs)\n        outputs = tuple(outputs) if isinstance(op.outputs, tuple) else (list(outputs),)\n\n        attribs = {as_str(key): value for key, value in six.iteritems(op.attribs)}\n\n        name = _next_version(op.type, versions) if op.type not in nnef.StandardOperations and version_custom_ops else op.type\n\n        dtype = attribs.get('dtype')\n        if dtype is not None:\n            dtype = _nnef_dtype(dtype)\n            del attribs['dtype']\n\n        for key, value in six.iteritems(attribs):\n            if isinstance(value, (type, np.dtype)):\n                attribs[key] = _nnef_dtype(value)\n\n        invocation = nnef.format_invocation(name=name, dtype=dtype, attribs=attribs, inputs=inputs, outputs=outputs)\n        annotation = \"    # \" + \", \".join(_nnef_dtype(output.dtype) + str(output.shape) for output in 
op.outputs) \\\n            if annotate_shapes else ''\n\n        print(\"    {};{}\".format(invocation, annotation), file=file)\n\n    print(\"}\", file=file)\n\n\ndef _write_tensor(array, filename, quantized):\n    directory = os.path.dirname(filename)\n    if directory and not os.path.exists(directory):\n        os.makedirs(directory)\n\n    with open(filename, \"wb\") as file:\n        nnef.write_tensor(file=file, tensor=array, quantized=quantized)\n\n\ndef _write_quantization(graph, file):\n    for tensor in graph.tensors:\n        if tensor.quant:\n            op_name = tensor.quant['op-name']\n            attribs = ', '.join(\"{} = {}\".format(k, _printable_value(v))\n                                for k, v in six.iteritems(tensor.quant)\n                                if k != 'op-name' and v is not None)\n            if attribs:\n                print('\"{}\": {}({});'.format(tensor.name, op_name, attribs), file=file)\n\n\ndef _printable_value(v):\n    if type(v) == bool:\n        return 'true' if v else 'false'\n    elif type(v) == np.ndarray:\n        return v.tolist()\n    else:\n        return v\n\n\ndef _next_version(name, versions):\n    version = versions.get(name, 0) + 1\n    versions[name] = version\n    return '{}_v{}'.format(name, version)\n\n\ndef _generate_custom_fragments(graph, fragments, version):\n    versions = {} if version else None\n    return '\\n'.join(_generate_fragment(op, versions) for op in graph.operations\n                     if op.type not in nnef.StandardOperations and op.type not in fragments)\n\n\ndef _generate_fragment(op, versions):\n    attribs = {key: _make_attrib_type(value) for key, value in op.attribs.items()}\n    inputs = [_make_tensor_type(value) for value in op.inputs]\n    outputs = [_make_tensor_type(value) for value in op.outputs]\n    dtype = _nnef_dtype(op.attribs.get('dtype'))\n    name = _next_version(op.type, versions) if versions is not None else op.type\n\n    return 'fragment ' + 
_fragment_signature(name, dtype, attribs, inputs, outputs) + ';'\n\n\ndef _fragment_signature(name, dtype, attribs, inputs, outputs):\n    str = name\n    if dtype is not None:\n        str += '<' + dtype + '>'\n    str += '( '\n    str += _types_str(['_I{}'.format(i + 1) for i in range(len(inputs))], inputs, True)\n    if len(inputs) and len(attribs):\n        str += ', '\n    str += _types_str(attribs.keys(), attribs.values(), False)\n    str += ' ) -> ( '\n    str += _types_str(['_O{}'.format(i + 1) for i in range(len(outputs))], outputs, True)\n    str += ' )'\n    return str\n\n\ndef _make_attrib_type(value):\n    repeated = False\n    if isinstance(value, list):\n        if len(value) == 0:\n            return None, False\n        tp = type(value[0])\n        if not all(type(v) == tp for v in value):\n            return None, False\n        repeated = True\n        value = value[0]\n\n    if not isinstance(value, (float, int, bool, str)):\n        return None, False\n\n    return _DtypeFromPyType[type(value)], repeated\n\n\ndef _make_tensor_type(value):\n    repeated = False\n    if isinstance(value, list):\n        if len(value) == 0:\n            return None, False\n        dtype = value[0].dtype\n        if not all(v.dtype == dtype for v in value):\n            return None, False\n        repeated = True\n        value = value[0]\n\n    return _nnef_dtype(value.dtype), repeated\n\n\ndef _types_str(names, items, tensor):\n    return ', '.join(name + ': ' + ('tensor<{}>'.format(type) if tensor else type) + ('[]' if repeated else '')\n                     for name, (type, repeated) in zip(names, items))\n\n\nclass Writer(object):\n\n    def __init__(self, compression=None, extensions=None, fragments=None, fragment_dependencies=None,\n                 generate_custom_fragments=False, version_custom_fragments=True, annotate_shapes=False):\n        self._compression = compression\n        self._extensions = extensions or []\n        self._fragments = fragments 
or {}\n        self._fragment_dependencies = fragment_dependencies or {}\n        self._generate_custom_fragments = generate_custom_fragments\n        self._version_custom_fragments = version_custom_fragments\n        self._annotate_shapes = annotate_shapes\n\n    def __call__(self, graph, path):\n        folder = None\n        try:\n            if self._compression is not None:\n                folder = tempfile.mkdtemp(prefix=\"nnef_\")\n            else:\n                folder = path\n                if not os.path.exists(folder):\n                    os.makedirs(folder)\n\n            used_operators = self._used_operators(graph, self._fragment_dependencies)\n            fragments = \"\".join(text for name, text in six.iteritems(self._fragments) if name in used_operators)\n            if self._generate_custom_fragments:\n                customs = _generate_custom_fragments(graph, fragments=self._fragments,\n                                                     version=self._version_custom_fragments)\n                if fragments and customs:\n                    fragments += \"\\n\"\n                fragments += customs\n\n            if len(fragments):\n                if \"KHR_enable_fragment_definitions\" not in self._extensions:\n                    self._extensions.append(\"KHR_enable_fragment_definitions\")\n                if \"KHR_enable_operator_expressions\" not in self._extensions:\n                    self._extensions.append(\"KHR_enable_operator_expressions\")\n\n            graph_filename = os.path.join(folder, 'graph.nnef')\n            with open(graph_filename, 'w') as file:\n                _print(graph, file, extensions=self._extensions, fragments=fragments,\n                       version_custom_ops=self._generate_custom_fragments and self._version_custom_fragments,\n                       annotate_shapes=self._annotate_shapes)\n\n            for op in graph.operations:\n                if op.type == 'variable':\n                    filename = 
op.attribs['label'] + \".dat\"\n                    if filename.startswith('/'):\n                        filename = filename[1:]\n                    _write_tensor(np.asarray(op.output.data, order='C'), os.path.join(folder, filename),\n                                  quantized=True if op.output.quant else False)\n\n            if any(tensor.quant for tensor in graph.tensors):\n                quant_filename = os.path.join(folder, 'graph.quant')\n                with open(quant_filename, 'w') as file:\n                    _write_quantization(graph, file)\n        finally:\n            if self._compression is not None and folder:\n                tgz_compress(folder, path + '.tgz', compression_level=self._compression)\n                shutil.rmtree(folder)\n\n    @staticmethod\n    def _used_operators(graph, dependencies):\n        used = {op.type for op in graph.operations}\n        count = len(used)\n        changed = True\n        while changed:\n            for key, deps in six.iteritems(dependencies):\n                if key in used:\n                    used.update(deps)\n\n            changed = len(used) > count\n            count = len(used)\n\n        return used\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/onnx/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .reader import Reader\nfrom .writer import Writer\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/onnx/reader.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom ...model import *\nfrom ...utils.types import as_str\nfrom onnx.shape_inference import infer_shapes\nimport numpy as np\nimport onnx\nimport sys\n\n\n_is_little_endian_system = (sys.byteorder == 'little')\n\n\n_DtypeToNumpy = {\n    'UNDEFINED': None,\n    'FLOAT': np.float32,\n    'UINT8': np.uint8,\n    'INT8': np.int8,\n    'UINT16': np.uint16,\n    'INT16': np.int16,\n    'INT32': np.int32,\n    'INT64': np.int64,\n    'STRING': np.str_,\n    'BOOL': np.bool_,\n    'FLOAT16': np.float16,\n    'DOUBLE': np.float64,\n    'UINT32': np.uint32,\n    'UINT64': np.uint64,\n    'COMPLEX64': np.complex64,\n    'COMPLEX128': np.complex128,\n}\n\n\ndef _get_shape(tensor_shape_proto):\n    return ([int(dim.dim_value) if dim.HasField('dim_value') else None for dim in tensor_shape_proto.dim]\n            if tensor_shape_proto is not None else None)\n\n\ndef _get_dtype(dtype_int):\n    return _DtypeToNumpy[onnx.TensorProto.DataType.Name(dtype_int)]\n\n\ndef _get_field(proto, name, default=None):\n    return getattr(proto, name) if proto.HasField(name) else default\n\n\ndef _get_value_info(value_info_proto):\n    name = as_str(value_info_proto.name)\n    shape = _get_shape(_get_field(value_info_proto.type.tensor_type, 'shape'))\n    dtype = 
_get_dtype(value_info_proto.type.tensor_type.elem_type)\n    return name, shape, dtype\n\n\ndef _get_tensor(tensor_proto):\n    assert not tensor_proto.HasField('segment'), 'TensorProto.segment is not supported'\n\n    name = as_str(tensor_proto.name)\n    shape = [int(dim) for dim in tensor_proto.dims]\n    dtype = _get_dtype(tensor_proto.data_type)\n    assert dtype is not None\n\n    if tensor_proto.HasField('raw_data'):\n        assert dtype != np.str_\n\n        data = np.frombuffer(tensor_proto.raw_data, dtype)\n        if not _is_little_endian_system:\n            data = data.byteswap()\n    else:\n        if dtype == np.float32:\n            data = np.array(tensor_proto.float_data, dtype)\n        elif dtype == np.float64:\n            data = np.array(tensor_proto.double_data, dtype)\n        elif dtype == np.int64:\n            data = np.array(tensor_proto.int64_data, dtype)\n        elif dtype == np.str_:\n            data = np.array(as_str(tensor_proto.string_data))\n        elif dtype == np.float16:\n            data = np.array(tensor_proto.int32_data, np.uint16).view(np.float16)\n        elif dtype == np.complex64:\n            data = np.array(tensor_proto.float_data, np.float32)\n            data = data[0::2] + data[1::2] * 1j\n        elif dtype == np.complex128:\n            data = np.array(tensor_proto.double_data, np.float64)\n            data = data[0::2] + data[1::2] * 1j\n        elif dtype in [np.int8, np.uint8, np.int16, np.uint16, np.int32, np.bool_]:\n            data = np.array(tensor_proto.int32_data, dtype)\n        elif dtype in [np.uint32, np.uint64]:\n            data = np.array(tensor_proto.uint64_data, dtype)\n        else:\n            assert False\n\n    data = data.reshape(shape)\n    return name, shape, dtype, data\n\n\ndef _get_tensor_data(tensor_proto):\n    name, shape, dtype, data = _get_tensor(tensor_proto)\n    return data\n\n\ndef _get_tensors(graph_proto, graph, tensors_by_name):\n    for value_info in 
graph_proto.input:\n        name, shape, dtype = _get_value_info(value_info)\n        tensors_by_name[name] = Tensor(graph=graph, name=name, shape=shape, dtype=dtype)\n    for value_info in graph_proto.output:\n        name, shape, dtype = _get_value_info(value_info)\n        tensors_by_name[name] = Tensor(graph=graph, name=name, shape=shape, dtype=dtype)\n    for value_info in graph_proto.value_info:\n        name, shape, dtype = _get_value_info(value_info)\n        tensors_by_name[name] = Tensor(graph=graph, name=name, shape=shape, dtype=dtype)\n    for tensor_proto in graph_proto.initializer:\n        name, shape, dtype, data = _get_tensor(tensor_proto)\n        tensors_by_name[name] = Tensor(graph=graph, name=name, shape=shape, dtype=dtype, data=data)\n\n    for node in graph_proto.node:\n        for tensor_name in node.output:\n            tensor_name = as_str(tensor_name)\n            if tensor_name not in tensors_by_name:\n                if len(tensor_name) == 0:\n                    tensors_by_name[tensor_name] = Tensor(graph, name='', shape=(), dtype=np.float32,\n                                                          data=np.zeros(shape=(), dtype=np.float32))\n                else:\n                    tensors_by_name[tensor_name] = Tensor(graph, name=tensor_name)\n\n    for node in graph_proto.node:\n        for attribute in node.attribute:\n            if attribute.HasField('g'):\n                _get_tensors(attribute.g, graph, tensors_by_name)\n            if attribute.graphs:\n                for g in attribute.graphs:\n                    _get_tensors(g, graph, tensors_by_name)\n\n\ndef _get_node(node_proto, graph, tensors_by_name):\n    inputs = [as_str(input) for input in node_proto.input]\n    outputs = [as_str(output) for output in node_proto.output]\n    name = as_str(_get_field(node_proto, 'name'))\n    domain = as_str(_get_field(node_proto, 'domain'))\n    op_type = as_str(node_proto.op_type)\n    attributes = {}\n    for attribute in 
node_proto.attribute:\n        key, value = _get_attribute(attribute, graph, tensors_by_name)\n        attributes[key] = value\n    return inputs, outputs, name, domain, op_type, attributes\n\n\ndef _get_attribute(attribute_proto, graph, tensors_by_name):\n    assert not attribute_proto.HasField('ref_attr_name')\n\n    name = as_str(attribute_proto.name)\n\n    if attribute_proto.HasField('f'):\n        value = float(attribute_proto.f)\n    elif attribute_proto.HasField('i'):\n        value = int(attribute_proto.i)\n    elif attribute_proto.HasField('s'):\n        value = as_str(attribute_proto.s)\n    elif attribute_proto.HasField('t'):\n        value = _get_tensor_data(attribute_proto.t)\n    elif attribute_proto.HasField('g'):\n        g = attribute_proto.g\n        value = _get_block(g, Graph(name=as_str(_get_field(g, 'name'))), tensors_by_name)\n    elif attribute_proto.floats:\n        value = [float(f) for f in attribute_proto.floats]\n    elif attribute_proto.ints:\n        value = [int(i) for i in attribute_proto.ints]\n    elif attribute_proto.strings:\n        value = [as_str(s) for s in attribute_proto.strings]\n    elif attribute_proto.tensors:\n        value = [_get_tensor_data(t) for t in attribute_proto.tensors]\n    elif attribute_proto.graphs:\n        value = [_get_block(g, Graph(name=as_str(_get_field(g, 'name'))), tensors_by_name)\n                 for g in attribute_proto.graphs]\n    else:\n        value = []\n\n    return name, value\n\n\ndef _get_block(graph_proto, graph, tensors_by_name):\n    initializer_names = {as_str(value_info.name) for value_info in graph_proto.initializer}\n    input_names = [as_str(value_info.name) for value_info in graph_proto.input\n                   if as_str(value_info.name) not in initializer_names]\n    output_names = [as_str(value_info.name) for value_info in graph_proto.output]\n    graph.inputs = [tensors_by_name[name] for name in input_names]\n    graph.outputs = [tensors_by_name[name] for name in 
output_names]\n\n    for value_info in graph_proto.input:\n        name, shape, dtype = _get_value_info(value_info)\n        tensor = tensors_by_name[name]\n        tensor.shape, tensor.dtype = shape, dtype\n    for value_info in graph_proto.output:\n        name, shape, dtype = _get_value_info(value_info)\n        tensor = tensors_by_name[name]\n        tensor.shape, tensor.dtype = shape, dtype\n    for value_info in graph_proto.value_info:\n        name, shape, dtype = _get_value_info(value_info)\n        tensor = tensors_by_name[name]\n        tensor.shape, tensor.dtype = shape, dtype\n    for tensor_proto in graph_proto.initializer:\n        name, shape, dtype, data = _get_tensor(tensor_proto)\n        tensor = tensors_by_name[name]\n        tensor.shape, tensor.dtype, tensor.data = shape, dtype, data\n\n    for annotation in graph_proto.quantization_annotation:\n        tensor = tensors_by_name[annotation.tensor_name]\n        tensor.quant = {item.key: tensors_by_name[item.value].data for item in annotation.quant_parameter_tensor_names}\n\n    for node in graph_proto.node:\n        inputs, outputs, name, domain, op_type, attributes = _get_node(node, graph, tensors_by_name)\n\n        Operation(\n            graph=graph,\n            type=op_type,\n            name=name,\n            inputs=tuple(tensors_by_name[input] for input in inputs),\n            outputs=tuple(tensors_by_name[output] for output in outputs),\n            attribs=attributes)\n    return graph\n\n\ndef _set_input_shapes(graph_proto, input_shapes):\n    for value_info in graph_proto.input:\n        name, shape, dtype = _get_value_info(value_info)\n        input_shape = input_shapes.get(name)\n        if input_shape is not None:\n            assert len(input_shape) == len(shape) and all(s is None or z == s for s, z in zip(shape, input_shape))\n            for i, s in enumerate(input_shape):\n                value_info.type.tensor_type.shape.dim[i].dim_value = s\n\n\n# This is for working 
around a bug in ONNX IR, see https://github.com/onnx/onnx/issues/2903\ndef _add_value_info_for_constants(model: onnx.ModelProto):\n    \"\"\"\n    Currently onnx.shape_inference doesn't use the shape of initializers, so add\n    that info explicitly as ValueInfoProtos.\n    Mutates the model.\n    Args:\n        model: The ModelProto to update.\n    \"\"\"\n    # All (top-level) constants will have ValueInfos before IRv4 as they are all inputs\n    if model.ir_version < 4:\n        return\n\n    def add_const_value_infos_to_graph(graph: onnx.GraphProto):\n        inputs = {i.name for i in graph.input}\n        existing_info = {vi.name: vi for vi in graph.value_info}\n        for init in graph.initializer:\n            # Check it really is a constant, not an input\n            if init.name in inputs:\n                continue\n\n            # The details we want to add\n            elem_type = init.data_type\n            shape = init.dims\n\n            # Get existing or create new value info for this constant\n            vi = existing_info.get(init.name)\n            if vi is None:\n                vi = graph.value_info.add()\n                vi.name = init.name\n\n            # Even though it would be weird, we will not overwrite info even if it doesn't match\n            tt = vi.type.tensor_type\n            if tt.elem_type == onnx.TensorProto.UNDEFINED:\n                tt.elem_type = elem_type\n            if not tt.HasField(\"shape\"):\n                # Ensure we set an empty list if the const is scalar (zero dims)\n                tt.shape.dim.extend([])\n                for dim in shape:\n                    tt.shape.dim.add().dim_value = dim\n\n        # Handle subgraphs\n        for node in graph.node:\n            for attr in node.attribute:\n                # Ref attrs refer to other attrs, so we don't need to do anything\n                if attr.ref_attr_name != \"\":\n                    continue\n\n                if attr.type == 
onnx.AttributeProto.GRAPH:\n                    add_const_value_infos_to_graph(attr.g)\n                if attr.type == onnx.AttributeProto.GRAPHS:\n                    for g in attr.graphs:\n                        add_const_value_infos_to_graph(g)\n\n    return add_const_value_infos_to_graph(model.graph)\n\n\ndef onnx_model_to_graph(onnx_model):\n    graph = Graph(name=as_str(_get_field(onnx_model.graph, 'name')))\n\n    tensors_by_name = {'': Tensor(graph, name='', shape=(), dtype=np.float32, data=np.zeros(shape=(), dtype=np.float32))}\n\n    _get_tensors(onnx_model.graph, graph, tensors_by_name)\n    _get_block(onnx_model.graph, graph, tensors_by_name)\n\n    return graph\n\n\ndef read_tensor(filename):\n    with open(filename, 'rb') as file:\n        return _get_tensor_data(onnx.load_tensor(file))\n\n\nclass Reader(object):\n\n    def __init__(self, simplify=False, optimize=None):\n        self._simplify = simplify\n        self._optimize = optimize or simplify\n\n    def __call__(self, filename, input_shapes=None):\n        model_proto = onnx.load_model(filename)\n        _add_value_info_for_constants(model_proto)\n\n        if self._simplify:\n            from onnxsim import simplify\n            model_proto, _ = simplify(model_proto, overwrite_input_shapes=input_shapes, perform_optimization=self._optimize)\n        if input_shapes:\n            _set_input_shapes(model_proto.graph, input_shapes)\n\n        model_proto = infer_shapes(model_proto)\n\n        return onnx_model_to_graph(model_proto)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/onnx/writer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom ...model import *\nimport numpy as np\nimport six\nimport onnx\n\n\n_DtypeFromNumpy = {\n    None: 'UNDEFINED',\n    np.float32: 'FLOAT',\n    np.uint8: 'UINT8',\n    np.int8: 'INT8',\n    np.uint16: 'UINT16',\n    np.int16: 'INT16',\n    np.int32: 'INT32',\n    np.int64: 'INT64',\n    np.str_: 'STRING',\n    np.bool_: 'BOOL',\n    np.float16: 'FLOAT16',\n    np.float64: 'DOUBLE',\n    np.uint32: 'UINT32',\n    np.uint64: 'UINT64',\n    np.complex64: 'COMPLEX64',\n    np.complex128: 'COMPLEX128',\n}\n\n\ndef build_model(graph, ir_version, opset_version):\n    # type: (Graph)->onnx.ModelProto\n\n    model_proto = onnx.ModelProto()\n    build_graph(graph, model_proto.graph)\n\n    model_proto.ir_version = ir_version\n    model_proto.opset_import.add()\n    model_proto.opset_import[0].version = opset_version\n\n    return model_proto\n\n\ndef build_graph(graph, graph_proto):\n    # type: (Graph, onnx.GraphProto)->None\n\n    for idx, op in enumerate(graph.operations):\n        node_proto = graph_proto.node.add()\n        build_node(op, node_proto, idx)\n\n    if graph.name is not None:\n        graph_proto.name = graph.name\n\n    for input in list(graph.inputs) + list(t for t in graph.tensors if t.is_constant and t.name != ''):\n        value_info_proto = graph_proto.input.add()\n        build_value_info(input, value_info_proto)\n\n    for output in 
graph.outputs:\n        value_info_proto = graph_proto.output.add()\n        build_value_info(output, value_info_proto)\n\n    for tensor in graph.tensors:\n        if tensor.is_constant and tensor.name != '':\n            tensor_proto = graph_proto.initializer.add()\n            build_tensor_proto(tensor, tensor_proto)\n\n        if tensor.quant:\n            build_quantization(tensor, graph_proto)\n\n\ndef build_value_info(tensor, value_info_proto):\n    # type: (Tensor, onnx.ValueInfoProto)->None\n\n    value_info_proto.name = tensor.name\n    value_info_proto.type.tensor_type.elem_type = build_dtype(tensor.dtype)\n\n    if tensor.shape:\n        for s in tensor.shape:\n            dim = value_info_proto.type.tensor_type.shape.dim.add()\n            if s is not None:\n                dim.dim_value = s\n    else:\n        value_info_proto.type.tensor_type.shape.SetInParent()\n\n\ndef build_dtype(dtype):\n    dtype = dtype.type if isinstance(dtype, np.dtype) else dtype\n    return onnx.TensorProto.DataType.Value(_DtypeFromNumpy[dtype])\n\n\ndef build_attribute_type(name):\n    return onnx.AttributeProto.AttributeType.Value(name)\n\n\ndef build_tensor_data(data, tensor_proto):\n    # type: (np.ndarray, onnx.TensorProto)->None\n    for s in data.shape:\n        tensor_proto.dims.append(s)\n\n    tensor_proto.data_type = build_dtype(data.dtype)\n\n    if data.dtype == np.str_:\n        tensor_proto.string_data = str(data)\n    else:\n        data = data.flatten().astype(data.dtype)\n\n        if data.dtype in [np.complex64, np.complex128]:\n            data = np.column_stack((np.real(data), np.imag(data))).flatten()\n\n        if data.dtype.str[0] != \"<\":\n            data = data.byteswap()\n        tensor_proto.raw_data = data.tobytes()\n\n\ndef build_tensor_proto(tensor, tensor_proto):\n    # type: (Tensor, onnx.TensorProto)->None\n    if isinstance(tensor.data, np.ndarray):\n        data = tensor.data\n    elif isinstance(tensor.data, (list, tuple)):\n        
data = np.array(tensor.data, dtype=tensor.dtype).reshape(tensor.shape)\n    else:\n        data = np.full(shape=tensor.shape, fill_value=tensor.data, dtype=tensor.dtype)\n\n    build_tensor_data(data, tensor_proto)\n    tensor_proto.name = tensor.name\n\n\ndef build_quantization(tensor, graph_proto):\n    tensor_annotation = graph_proto.quantization_annotation.add()\n    tensor_annotation.tensor_name = tensor.name\n\n    for key, value in six.iteritems(tensor.quant):\n        value_tensor_name = tensor.name + '/' + key\n        if not isinstance(value, np.ndarray):\n            value = np.array(value)\n\n        tensor_proto = graph_proto.initializer.add()\n        build_tensor_data(value, tensor_proto)\n        tensor_proto.name = value_tensor_name\n\n        item = tensor_annotation.quant_parameter_tensor_names.add()\n        item.key = key\n        item.value = value_tensor_name\n\n\ndef build_node(op, node_proto, idx):\n    # type: (Operation, onnx.NodeProto)->None\n\n    inputs = op.inputs\n    attribs = op.attribs\n\n    for input in inputs:\n        node_proto.input.append(input.name)\n    for output in op.outputs:\n        node_proto.output.append(output.name)\n\n    node_proto.op_type = op.type\n    node_proto.name = op.name or (op.type + str(idx))\n\n    for k, v in six.iteritems(attribs):\n        attribute_proto = node_proto.attribute.add()\n        build_attribute(k, v, attribute_proto)\n\n\ndef build_attribute(key, value, attribute_proto):\n    # type: (str, typing.Any, onnx.AttributeProto)->None\n\n    attribute_proto.name = key\n\n    if isinstance(value, np.ndarray):\n        attribute_proto.type = build_attribute_type('TENSOR')\n        build_tensor_data(value, attribute_proto.t)\n    elif isinstance(value, int):\n        attribute_proto.type = build_attribute_type('INT')\n        attribute_proto.i = value\n    elif isinstance(value, float):\n        attribute_proto.type = build_attribute_type('FLOAT')\n        attribute_proto.f = value\n    elif 
isinstance(value, str):\n        attribute_proto.type = build_attribute_type('STRING')\n        attribute_proto.s = value.encode('utf-8')\n    elif isinstance(value, (type, np.dtype)):\n        attribute_proto.type = build_attribute_type('INT')\n        attribute_proto.i = build_dtype(value)\n    elif isinstance(value, Graph):\n        attribute_proto.type = build_attribute_type('GRAPH')\n        build_graph(value, attribute_proto.g)\n    elif isinstance(value, (list, tuple)):\n        if len(value) == 0:\n            attribute_proto.type = build_attribute_type('INTS')  # TODO better\n        else:\n            if isinstance(value[0], int):\n                attribute_proto.type = build_attribute_type('INTS')\n                for v in value:\n                    attribute_proto.ints.append(v)\n            elif isinstance(value[0], float):\n                attribute_proto.type = build_attribute_type('FLOATS')\n                for v in value:\n                    attribute_proto.floats.append(v)\n            elif isinstance(value[0], str):\n                attribute_proto.type = build_attribute_type('STRINGS')\n                for v in value:\n                    attribute_proto.strings.append(v.encode('utf-8'))\n            elif isinstance(value[0], Graph):\n                attribute_proto.type = build_attribute_type('GRAPHS')\n                for v in value:\n                    g = attribute_proto.graphs.add()\n                    build_graph(v, g)\n            else:\n                assert False, \\\n                    \"Unsupported attribute: {}: {} of type: List[{}]\".format(key, value, type(value[0]).__name__)\n    else:\n        assert False, \"Unsupported attribute: {}: {} of type: {}\".format(key, value, type(value).__name__)\n\n\nclass Writer(object):\n\n    def __init__(self, ir_version=6, opset_version=11):\n        self._ir_version = ir_version\n        self._opset_version = opset_version\n\n    def __call__(self, graph, filename):\n        model_proto 
= build_model(graph, self._ir_version, self._opset_version)\n        onnx.checker.check_model(model_proto)\n        with open(filename, 'wb') as file:\n            file.write(model_proto.SerializeToString())\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/graphdef/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .reader import Reader\nfrom .writer import Writer\nfrom .composite import replace_composites_with_py_functions, reset_composites\nfrom .utils import set_input_shapes, fold_constant_tensors, retain_reachables_from_outputs, insert_rename_identities\nfrom .utils import import_graph_def, export_graph_def, check_finite, check_variables\ntry:\n    import tensorflow.compat.v1 as tf\nexcept ImportError:\n    import tensorflow as tf\n\n\ncomposite_function = composite.function\n\n\ndef save_default_graph(filename, session, outputs, input_shapes=None, fold_constants=True, collapse_composites=True):\n    check_variables(session)\n\n    if not isinstance(outputs, dict):\n        outputs = {tensor: tensor.name[:-2] if tensor.name.endswith(':0') else tensor.name.replace(':', '_')\n                   for tensor in outputs}\n\n    output_names = list(outputs.values())\n\n    graph_def = export_graph_def(tf.get_default_graph())\n    graph_def = insert_rename_identities(graph_def, outputs)\n    graph_def = tf.graph_util.convert_variables_to_constants(session, graph_def, output_names)\n    graph_def = retain_reachables_from_outputs(graph_def, output_names)\n\n    check_finite(graph_def)\n\n    if input_shapes:\n        graph_def = set_input_shapes(graph_def, input_shapes)\n    if fold_constants:\n        graph_def = fold_constant_tensors(graph_def)\n    if 
collapse_composites:\n        graph_def = replace_composites_with_py_functions(graph_def)\n\n    check_finite(graph_def)\n\n    with open(filename, 'wb') as file:\n        file.write(graph_def.SerializeToString())\n\n\ndef load_default_graph(filename):\n    from .protobuf import GraphDef\n    with tf.io.gfile.GFile(filename, \"rb\") as file:\n        graph_def = GraphDef()\n        graph_def.ParseFromString(file.read())\n        tf.import_graph_def(graph_def, name='')\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/graphdef/composite.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .protobuf import GraphDef, NodeDef\nfrom .writer import _build_attribute\nfrom .utils import import_graph_def\ntry:\n    import tensorflow.compat.v1 as tf\nexcept ImportError:\n    import tensorflow as tf\nfrom collections.abc import Sequence\nimport inspect\n\n\nclass _Composite:\n\n    instances = []\n\n    def __init__(self, id, func, attribs, inputs, outputs):\n        self.id = id\n        self.func = func\n        self.attribs = attribs\n        self.inputs = inputs\n        self.outputs = outputs\n\n    @staticmethod\n    def function(func):\n        def wrapper(*args, **kwargs):\n            results = func(*args, **kwargs)\n\n            name = kwargs.get('name')\n            if name is not None:\n                del kwargs['name']\n            id = name or len(_Composite.instances)\n\n            signature = inspect.signature(func)\n            bound = signature.bind(*args, **kwargs)\n            bound.apply_defaults()\n\n            attribs = {name: value for name, value in bound.arguments.items()\n                       if not isinstance(value, tf.Tensor) and value is not None}\n            inputs = [value for value in bound.arguments.values()\n                      if isinstance(value, tf.Tensor)]\n            outputs = (results,) if not isinstance(results, (list, tuple)) else results\n\n            assert all(isinstance(value, 
tf.Tensor) for value in outputs), \\\n                \"Results of composite function must be tensors\"\n            assert not any(tensor in inputs for tensor in outputs), \\\n                \"Results of composite function cannot be input arguments at the same time\"\n\n            _Composite.instances.append(_Composite(id, func, attribs, inputs, outputs))\n\n            return results\n        return wrapper\n\n    @property\n    def name(self):\n        return self.id if isinstance(self.id, str) else 'Composite' + str(self.id)\n\n\nfunction = _Composite.function\n\n\ndef _is_tensor(value):\n    return isinstance(value, tf.Tensor) or (isinstance(value, Sequence) and all(_is_tensor(item) for item in value))\n\n\ndef _node_name_from_tensor(name):\n    if name[0] == '^':\n        name = name[1:]\n    pos = name.find(':')\n    if pos != -1 and name[pos+1:].isdigit():\n        name = name[:pos]\n    return name\n\n\ndef _input_name_from_tensor(name):\n    return name[:-2] if name.endswith(':0') else name\n\n\ndef _build_node_def(composite):\n    node_def = NodeDef()\n    node_def.op = 'PyFunc'\n    node_def.name = composite.name\n    node_def.input.extend([_input_name_from_tensor(arg.name) for arg in composite.inputs])\n\n    input_dtypes = [tensor.dtype.as_numpy_dtype for tensor in composite.inputs]\n    output_dtypes = [tensor.dtype.as_numpy_dtype for tensor in composite.outputs]\n    output_shapes = [tuple(tensor.shape.as_list()) for tensor in composite.outputs]\n\n    _build_attribute(node_def.attr['Tin'], input_dtypes)\n    _build_attribute(node_def.attr['Tout'], output_dtypes)\n    _build_attribute(node_def.attr['token'], composite.func.__name__)\n    _build_attribute(node_def.attr['_output_shapes'], output_shapes)\n    for name, value in composite.attribs.items():\n        _build_attribute(node_def.attr['_$' + name + '$_'], value)\n    return node_def\n\n\ndef _remap_tensors(tensors, graph):\n    return type(tensors)(graph.get_tensor_by_name(tensor.name) for 
tensor in tensors)\n\n\ndef _tensor_producers_and_consumers(graph):\n    producers_and_consumers = {tensor: [tensor.op] for op in graph.get_operations() for tensor in op.outputs}\n    for op in graph.get_operations():\n        for tensor in op.inputs:\n            ops = producers_and_consumers[tensor]\n            if op not in ops:\n                ops.append(op)\n\n    return producers_and_consumers\n\n\ndef _find_subgraph(composite, producers_and_consumers):\n    queue = [tensor.op for tensor in composite.outputs]\n    subgraph = {item.name for item in queue}\n\n    idx = 0\n    while idx < len(queue):\n        op = queue[idx]\n        idx += 1\n        tensors = [tensor for tensor in op.inputs if tensor not in composite.inputs] + \\\n                  [tensor for tensor in op.outputs if tensor not in composite.outputs]\n        for tensor in tensors:\n            for op in producers_and_consumers[tensor]:\n                if op.name not in subgraph:\n                    subgraph.add(op.name)\n                    queue.append(op)\n    return subgraph\n\n\ndef replace_composites_with_py_functions(graph_def):\n    graph = import_graph_def(graph_def)\n    for composite in _Composite.instances:\n        composite.inputs = _remap_tensors(composite.inputs, graph)\n        composite.outputs = _remap_tensors(composite.outputs, graph)\n\n    producers_and_consumers = _tensor_producers_and_consumers(graph)\n\n    tensor_remap = {}\n    subgraph_ops = set()\n    for composite in _Composite.instances:\n        subgraph_ops.update(_find_subgraph(composite, producers_and_consumers))\n\n        for idx, tensor in enumerate(composite.outputs):\n            tensor_remap[_input_name_from_tensor(tensor.name)] = \\\n                composite.name + ':' + str(idx) if idx > 0 else composite.name\n\n    new_graph_def = GraphDef()\n    for node in graph_def.node:\n        if node.name not in subgraph_ops:\n            new_graph_def.node.append(node)\n\n    for composite in 
_Composite.instances:\n        new_graph_def.node.append(_build_node_def(composite))\n\n    for node in new_graph_def.node:\n        for i in range(len(node.input)):\n            remapped = tensor_remap.get(node.input[i])\n            if remapped is not None:\n                node.input[i] = remapped\n\n    return new_graph_def\n\n\ndef reset_composites():\n    _Composite.instances = []\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/graphdef/protobuf.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom tensorflow.core.framework.graph_pb2 import GraphDef\nfrom tensorflow.core.framework.node_def_pb2 import NodeDef\nfrom tensorflow.core.framework.attr_value_pb2 import AttrValue\nfrom tensorflow.core.framework.types_pb2 import DataType\nfrom tensorflow.core.framework.tensor_pb2 import TensorProto\nfrom tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/graphdef/reader.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom collections import namedtuple\nfrom ....model import *\nfrom ....utils.types import as_str\nfrom .protobuf import *\nimport numpy as np\nimport six\n\n\nFunction = namedtuple('Function', ['name', 'attrs'])\n\n\n_DtypeToNumpy = {\n    'DT_INVALID': None,\n    'DT_RESOURCE': np.dtype([('resource', np.int32)]),\n    'DT_HALF': np.float16,\n    'DT_FLOAT': np.float32,\n    'DT_DOUBLE': np.float64,\n    'DT_INT8': np.int8,\n    'DT_INT16': np.int16,\n    'DT_INT32': np.int32,\n    'DT_INT64': np.int64,\n    'DT_UINT8': np.uint8,\n    'DT_UINT16': np.uint16,\n    'DT_UINT32': np.uint32,\n    'DT_UINT64': np.uint64,\n    'DT_BOOL': np.bool_,\n    'DT_STRING': np.str_,\n    'DT_COMPLEX64': np.complex64,\n    'DT_COMPLEX128': np.complex128,\n}\n\n\ndef _get_shape(shape_proto):\n    return tuple(int(dim.size) if dim.size >= 0 else None for dim in shape_proto.dim) \\\n        if not shape_proto.unknown_rank else None\n\n\ndef _get_dtype(dtype_enum):\n    dtype = _DtypeToNumpy[DataType.Name(dtype_enum)]\n    assert dtype is not None, \"non-numeric dtype '{}' in attribute\".format(DataType.Name(dtype_enum))\n    return dtype\n\n\ndef _get_nonempty_items(message, fields):\n    for field in fields:\n        items = getattr(message, field)\n        if len(items):\n            return field, 
items\n\n    return None, None\n\n\ndef _get_tensor(tensor_proto):\n    shape = _get_shape(tensor_proto.tensor_shape)\n    dtype = _get_dtype(tensor_proto.dtype)\n\n    if len(tensor_proto.tensor_content):\n        data = np.frombuffer(tensor_proto.tensor_content, dtype=dtype).reshape(shape)\n    else:\n        field, items = _get_nonempty_items(tensor_proto,\n                                           fields=['half_val', 'float_val', 'double_val', 'int_val', 'int64_val',\n                                                   'bool_val', 'string_val', 'uint32_val', 'uint64_val',\n                                                   'resource_handle_val', 'scomplex_val', 'dcomplex_val'])\n\n        if items is None and any(s == 0 for s in shape):\n            items = []\n\n        assert items is not None, \"tensor items are empty, dtype = {}, shape = {}\".format(dtype, shape)\n\n        items = [item for item in items]\n        if len(items) == int(np.prod(shape)):\n            data = np.array(items, dtype=dtype).reshape(shape)\n        else:\n            assert len(items) == 1\n            data = np.full(shape=shape, dtype=dtype, fill_value=items[0])\n\n    return data\n\n\ndef _get_func(name_attrlist_proto):\n    return Function(name_attrlist_proto.name, _get_attributes(name_attrlist_proto.attr))\n\n\ndef _get_attribute(field, value):\n    if field == 'i' or field == 'f' or field == 'b' or field == 'placeholder':\n        return value\n    elif field == 's':\n        return as_str(value.decode())\n    elif field == 'shape':\n        return _get_shape(value)\n    elif field == 'type':\n        return _get_dtype(value)\n    elif field == 'tensor':\n        return _get_tensor(value)\n    elif field == 'func':\n        return _get_func(value)\n    elif field == 'list':\n        field, items = _get_nonempty_items(value, fields=['i', 'f', 'b', 's', 'shape', 'type', 'tensor', 'func'])\n        return [_get_attribute(field, item) for item in items] if items is not None else 
[]\n\n    assert False\n\n\ndef _get_attributes(attr_map_proto):\n    attributes = {}\n    for name, value in attr_map_proto.items():\n        field = value.WhichOneof('value')\n        if field is not None:\n            value = getattr(value, field)\n            attributes[as_str(name)] = _get_attribute(field, value)\n        else:\n            attributes[as_str(name)] = None\n\n    return attributes\n\n\ndef _get_output_name(node_name, idx):\n    return node_name + ':' + str(idx) if idx > 0 else node_name\n\n\ndef _has_output_shapes(graph_def):\n    return all('_output_shapes' in node.attr and node.attr['_output_shapes'].WhichOneof('value') is not None\n               for node in graph_def.node)\n\n\ndef _add_output_shapes(graph_def):\n    try:\n        import tensorflow.compat.v1 as tf\n    except ImportError:\n        import tensorflow as tf\n\n    graph = tf.Graph()\n    with graph.as_default():\n        tf.import_graph_def(graph_def, name='')\n        return graph.as_graph_def(add_shapes=True)\n\n\ndef _get_dtypes(graph_def):\n    try:\n        import tensorflow.compat.v1 as tf\n    except ImportError:\n        import tensorflow as tf\n\n    dtypes = {}\n\n    graph = tf.Graph()\n    with graph.as_default():\n        tf.import_graph_def(graph_def, name='')\n        for op in graph.get_operations():\n            for tensor in op.outputs:\n                name = tensor.name[:-2] if tensor.name.endswith(':0') else tensor.name\n                dtypes[name] = tensor.dtype.as_numpy_dtype if tensor.dtype != tf.resource else _DtypeToNumpy['DT_RESOURCE'].type\n\n    return dtypes\n\n\ndef _get_output_shapes(attr_map_proto):\n    value = attr_map_proto['_output_shapes']\n    field = value.WhichOneof('value')\n    if field is None:\n        return None\n\n    value = getattr(value, field)\n    return _get_attribute(field, value)\n\n\ndef build_graph(graph_def):\n    graph = Graph()\n\n    dtypes = _get_dtypes(graph_def)\n\n    # create tensors\n    node_outputs = {}\n   
 for node in graph_def.node:\n        output_shapes = _get_output_shapes(node.attr)\n        if output_shapes is not None:\n            name = as_str(node.name)\n            node_outputs[name] = [Tensor(graph, _get_output_name(name, idx), shape=shape,\n                                         dtype=dtypes.get(_get_output_name(name, idx)))\n                                  for idx, shape in enumerate(output_shapes)]\n\n    tensors = {tensor.name: tensor for outputs in six.itervalues(node_outputs) for tensor in outputs}\n\n    # create ops\n    for node in graph_def.node:\n        attributes = _get_attributes(node.attr)\n        inputs = [tensors[name] for name in node.input if not name.startswith('^')]\n        outputs = node_outputs[node.name] if node.name in node_outputs else []\n\n        Operation(graph,\n                  type=as_str(node.op),\n                  name=as_str(node.name),\n                  inputs=inputs,\n                  outputs=outputs,\n                  attribs=attributes)\n\n    graph.inputs = [node_outputs[node.name][0] for node in graph_def.node if node.op == 'Placeholder']\n    graph.outputs = [output for op in graph.operations if all(len(output.consumers) == 0 for output in op.outputs)\n                     for output in op.outputs]\n    return graph\n\n\ndef _unpack_custom_ops(graph):\n    for op in graph.operations:\n        if op.type == 'PyFunc':\n            op.custom = True\n            op.type = op.attribs['token']\n            op.attribs = {key[2:-2]: value for key, value in six.iteritems(op.attribs)\n                          if key.startswith('_$') and key.endswith('$_')}\n\n\ndef read_graphdef(filename, input_shapes, fold_constants):\n    graph_def = GraphDef()\n    with open(filename, 'rb') as file:\n        graph_def.ParseFromString(file.read())\n\n    if not _has_output_shapes(graph_def):\n        graph_def = _add_output_shapes(graph_def)\n\n    if input_shapes is not None:\n        from .utils import set_input_shapes\n        graph_def = set_input_shapes(graph_def, input_shapes)\n\n   
 if fold_constants:\n        from .utils import fold_constant_tensors\n        graph_def = fold_constant_tensors(graph_def)\n\n    graph = build_graph(graph_def)\n    _unpack_custom_ops(graph)\n\n    return graph\n\n\nclass Reader(object):\n\n    def __init__(self, fold_constants=False):\n        self._fold_constants = fold_constants\n\n    def __call__(self, filename, input_shapes=None):\n        return read_graphdef(filename, input_shapes, self._fold_constants)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/graphdef/utils.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .protobuf import *\nfrom .writer import _build_attribute\nfrom .reader import _get_attributes\nimport numpy as np\nimport six\ntry:\n    import tensorflow.compat.v1 as tf\nexcept ImportError:\n    import tensorflow as tf\n\n\ndef import_graph_def(graph_def):\n    graph = tf.Graph()\n    with graph.as_default():\n        tf.import_graph_def(graph_def, name='')\n    return graph\n\n\ndef export_graph_def(graph):\n    return graph.as_graph_def(add_shapes=True)\n\n\ndef reinfer_shapes(graph_def):\n    return export_graph_def(import_graph_def(graph_def))\n\n\ndef _try_eval(tensor, session):\n    try:\n        if tensor.dtype == tf.resource or tensor.dtype == tf.string:\n            return None\n        value = tensor.eval(session=session)\n        print(\"Evaluated constant tensor '{}'\".format(tensor.name))\n        return value\n    except:\n        return None\n\n\ndef _build_node(type, name, attribs, inputs):\n    node_def = NodeDef()\n    node_def.op = type\n    node_def.name = name\n    if len(inputs):\n        node_def.input.extend(inputs)\n    for name, value in attribs.items():\n        _build_attribute(node_def.attr[name], value)\n    return node_def\n\n\ndef _make_const_node(value, name):\n    return _build_node('Const', name, {'dtype': value.dtype.type, 'value': value, '_output_shapes': [value.shape]}, [])\n\n\ndef 
_make_identity_node(input, name, dtype, shape):\n    return _build_node('Identity', name, {'T': dtype, '_output_shapes': [shape]}, [input])\n\n\ndef _freeze_shape_tensors(graph_def):\n    graph = import_graph_def(graph_def)\n\n    evaluated = {}\n    for op in graph.get_operations():\n        if op.type == 'Shape':\n            shape = op.inputs[0].shape\n            if shape.dims is not None and all(item is not None for item in shape.as_list()):\n                evaluated[op.name] = np.array(shape, dtype=np.int32)\n                print(\"Evaluated Shape op '{}' to {}\".format(op.name, str(shape)))\n\n    changed = False\n    new_graph_def = GraphDef()\n    for node in graph_def.node:\n        value = evaluated.get(node.name)\n        if value is not None:\n            new_graph_def.node.append(_make_const_node(value, node.name))\n            changed = True\n        else:\n            new_graph_def.node.append(node)\n\n    return new_graph_def, changed\n\n\ndef _remove_const_control_dependencies(graph_def):\n    for node in graph_def.node:\n        if node.op == 'Const':\n            for idx in reversed(range(len(node.input))):\n                name = node.input[idx]\n                if name[0] == '^':\n                    del node.input[idx]\n\n    return graph_def\n\n\ndef _remove_zero_index(name):\n    return name[:-2] if name.endswith(':0') else name\n\n\ndef _remove_const_identities(graph_def):\n    graph = import_graph_def(graph_def)\n\n    removables = {op.name: _remove_zero_index(op.inputs[0].name)\n                  for op in graph.get_operations()\n                  if op.type == 'Identity' and op.inputs[0].op.type == 'Const'}\n\n    for node in graph_def.node:\n        for i in range(len(node.input)):\n            replacement = removables.get(_op_name_from_tensor(node.input[i]))\n            if replacement:\n                node.input[i] = replacement\n\n    new_graph_def = GraphDef()\n    for node in graph_def.node:\n        if node.name not in 
removables:\n            new_graph_def.node.append(node)\n\n    return new_graph_def\n\n\ndef _eval_candidates(graph):\n    evaluables = set()\n    changed = True\n    while changed:\n        changed = False\n        for op in graph.get_operations():\n            if op not in evaluables and all(tensor.op in evaluables for tensor in op.inputs) and op.type != 'Placeholder':\n                evaluables.add(op)\n                changed = True\n\n    candidates = set()\n    for op in graph.get_operations():\n        if op not in evaluables:\n            for tensor in op.inputs:\n                if tensor.op in evaluables and tensor.op.type != 'Const':\n                    candidates.add(tensor)\n\n    return candidates\n\n\ndef _fold_constant_tensors(graph_def):\n    graph = import_graph_def(graph_def)\n\n    evaluated = {}\n    with tf.Session(graph=graph) as session:\n        for tensor in _eval_candidates(graph):\n            evaluated[tensor.name] = _try_eval(tensor, session)\n\n    results = {}\n    for op in graph.get_operations():\n        results[op.name] = [evaluated.get(tensor.name) for tensor in op.outputs]\n\n    remap = {}\n    changed = False\n    new_graph_def = GraphDef()\n    for node in graph_def.node:\n        values = results[node.name]\n        all_evaluated = all(value is not None for value in values) and len(values) > 0\n\n        for idx, value in enumerate(values):\n            if value is not None:\n                arg_name = node.name if idx == 0 else node.name + ':{}'.format(idx)\n                const_name = node.name if idx == 0 and all_evaluated else node.name + '//{}'.format(idx)\n                remap[arg_name] = const_name\n                new_graph_def.node.append(_make_const_node(value, const_name))\n                changed = True\n        if not all_evaluated:\n            new_graph_def.node.append(node)\n\n    for node in new_graph_def.node:\n        for i in range(len(node.input)):\n            remapped = remap.get(node.input[i])\n 
           if remapped is not None:\n                node.input[i] = remapped\n\n    return new_graph_def, changed\n\n\ndef _find_reachables_forward(graph, reachables):\n    changed = True\n    while changed:\n        changed = False\n        for op in graph.get_operations():\n            if op.name not in reachables and any(tensor.op.name in reachables for tensor in op.inputs):\n                reachables.add(op.name)\n                changed = True\n    return reachables\n\n\ndef _find_reachables_backward(graph, reachables):\n    changed = True\n    while changed:\n        changed = False\n        for op in reversed(graph.get_operations()):\n            if op.name in reachables:\n                for tensor in op.inputs:\n                    if tensor.op.name not in reachables:\n                        reachables.add(tensor.op.name)\n                        changed = True\n    return reachables\n\n\ndef _retain_nodes(graph_def, node_names):\n    new_graph_def = GraphDef()\n    for node in graph_def.node:\n        if node.name in node_names:\n            new_graph_def.node.append(node)\n\n    for node in new_graph_def.node:\n        for idx in reversed(range(len(node.input))):\n            name = node.input[idx]\n            if name[0] == '^' and name[1:] not in node_names:\n                del node.input[idx]\n\n    return new_graph_def\n\n\ndef _retain_reachables_from_placeholders(graph_def):\n    graph = import_graph_def(graph_def)\n\n    reachables = {op.name for op in graph.get_operations() if op.type == 'Placeholder'}\n    if len(reachables) == 0:\n        return graph_def\n\n    reachables = _find_reachables_forward(graph, reachables)\n    reachables = _find_reachables_backward(graph, reachables)\n\n    return _retain_nodes(graph_def, reachables)\n\n\ndef _op_name_from_tensor(name):\n    if name[0] == '^':\n        name = name[1:]\n    pos = name.find(':')\n    if pos != -1 and name[pos+1:].isdigit():\n        name = name[:pos]\n    return name\n\n\ndef 
fold_constant_tensors(graph_def):\n    graph_def = _remove_const_control_dependencies(graph_def)\n    graph_def = _remove_const_identities(graph_def)\n\n    graph_def, changed = _freeze_shape_tensors(graph_def)\n    graph_def, changed = _fold_constant_tensors(graph_def)\n    while changed:\n        graph_def, changed = _freeze_shape_tensors(graph_def)\n        if changed:\n            graph_def, changed = _fold_constant_tensors(graph_def)\n\n    graph_def = _retain_reachables_from_placeholders(graph_def)\n\n    return reinfer_shapes(graph_def)\n\n\ndef set_input_shapes(graph_def, input_shapes):\n    graph = import_graph_def(graph_def)\n    placeholders = {op.name: (op.outputs[0].shape, op.outputs[0].dtype)\n                    for op in graph.get_operations() if op.type == 'Placeholder'}\n\n    graph = tf.Graph()\n    with graph.as_default():\n        input_map = {}\n        for name, shape in six.iteritems(input_shapes):\n            if name not in placeholders:\n                raise IOError(\"Model has no input named '{}'\".format(name))\n\n            orig_shape, dtype = placeholders[name]\n            if orig_shape.rank is not None and len(shape) != orig_shape.rank:\n                raise IOError(\"Shape rank for input '{}' does not match that of the model ({} vs {})\"\n                              .format(name, len(shape), orig_shape.rank))\n\n            input_map[name] = tf.placeholder(shape=shape, dtype=dtype, name=name)\n\n        for name, (shape, dtype) in placeholders.items():\n            if name not in input_map:\n                input_map[name] = tf.placeholder(shape=shape, dtype=dtype, name=name)\n\n        tf.import_graph_def(graph_def, name='', input_map=input_map)\n\n    used = {tensor.op.name for op in graph.get_operations() for tensor in op.inputs}\n\n    graph_def = GraphDef()\n    for op in graph.get_operations():\n        if op.type != 'Placeholder' or op.name in used:\n            graph_def.node.append(op.node_def)\n\n    return 
reinfer_shapes(graph_def)\n\n\ndef retain_reachables_from_outputs(graph_def, output_names):\n    graph = import_graph_def(graph_def)\n\n    reachables = _find_reachables_backward(graph, set(output_names))\n\n    return _retain_nodes(graph_def, reachables)\n\n\ndef insert_rename_identities(graph_def, tensor_rename):\n    for tensor, name in six.iteritems(tensor_rename):\n        tensor_name = _remove_zero_index(tensor.name)\n        if name != tensor_name:\n            graph_def.node.append(_make_identity_node(tensor_name, name,\n                                                      tensor.dtype.as_numpy_dtype,\n                                                      tuple(tensor.shape.as_list())))\n    return graph_def\n\n\ndef check_finite(graph_def):\n    for node in graph_def.node:\n        attribs = _get_attributes(node.attr)\n        for key, value in six.iteritems(attribs):\n            if isinstance(value, np.ndarray) and np.issubdtype(value.dtype, np.number) and not np.all(np.isfinite(value)):\n                raise ValueError(\"Attribute '{}' of op '{}' named '{}' contains nan or inf\".\n                                 format(key, node.op, node.name))\n\n\ndef check_variables(session):\n    variables = tf.global_variables()\n    for variable in variables:\n        value = session.run(variable)\n        if np.issubdtype(value.dtype, np.number) and not np.all(np.isfinite(value)):\n            raise ValueError(\"Variable '{}' contains nan or inf\".format(variable.name))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/graphdef/writer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom .protobuf import *\nimport numpy as np\nimport six\n\n\n_DtypeFromNumpy = {\n    None: 'DT_INVALID',\n    np.float16: 'DT_HALF',\n    np.float32: 'DT_FLOAT',\n    np.float64: 'DT_DOUBLE',\n    np.int8: 'DT_INT8',\n    np.int16: 'DT_INT16',\n    np.int32: 'DT_INT32',\n    np.int64: 'DT_INT64',\n    np.uint8: 'DT_UINT8',\n    np.uint16: 'DT_UINT16',\n    np.uint32: 'DT_UINT32',\n    np.uint64: 'DT_UINT64',\n    np.bool_: 'DT_BOOL',\n    np.str_: 'DT_STRING',\n    np.complex64: 'DT_COMPLEX64',\n    np.complex128: 'DT_COMPLEX128',\n    np.dtype([('resource', np.int32)]): 'DT_RESOURCE',\n}\n\n_NumpyDtypes = {\n    np.int8, np.int16, np.int32, np.int64,\n    np.uint8, np.uint16, np.uint32, np.uint64,\n    np.float32, np.float64,\n    np.complex64, np.complex128,\n    np.bool_, np.str_,\n    np.dtype([('resource', np.int32)]),\n}\n\n\ndef _build_shape(shape_proto, shape):\n    shape_proto.unknown_rank = (shape is None)\n    if shape is not None:\n        for item in shape:\n            dim = shape_proto.dim.add()\n            dim.size = item if item is not None else -1\n\n\ndef _build_dtype(dtype):\n    return DataType.Value(_DtypeFromNumpy[dtype])\n\n\ndef _build_tensor(tensor_proto, data):\n    if data.dtype is not None:\n        tensor_proto.dtype = _build_dtype(data.dtype.type)\n\n   
 if data.shape is not None:\n        _build_shape(tensor_proto.tensor_shape, data.shape)\n\n    tensor_proto.tensor_content = data.reshape([-1]).view(np.uint8).tobytes()\n    return tensor_proto\n\n\ndef _build_attribute(attr_proto, value):\n    if value is None:\n        return attr_proto\n\n    if type(value) in _NumpyDtypes:\n        value = np.array(value)\n\n    if isinstance(value, bool):  # must be before int\n        attr_proto.b = value\n    elif isinstance(value, int):\n        attr_proto.i = value\n    elif isinstance(value, float):\n        attr_proto.f = value\n    elif isinstance(value, str):\n        attr_proto.s = value.encode()\n    elif isinstance(value, (type, np.dtype)):\n        attr_proto.type = _build_dtype(value)\n    elif isinstance(value, tuple):\n        _build_shape(attr_proto.shape, value)\n    elif isinstance(value, np.ndarray):\n        _build_tensor(attr_proto.tensor, value)\n    elif isinstance(value, list):\n        if len(value) == 0:\n            attr_proto.list.i.extend([])     # signal that 'list' is the active member of the oneof field\n        else:\n            first = value[0]\n            if isinstance(first, bool):  # must be before int\n                attr_proto.list.b.extend(value)\n            elif isinstance(first, int):\n                attr_proto.list.i.extend(value)\n            elif isinstance(first, float):\n                attr_proto.list.f.extend(value)\n            elif isinstance(first, str):\n                attr_proto.list.s.extend([item.encode() for item in value])\n            elif isinstance(first, (type, np.dtype)):\n                attr_proto.list.type.extend([_build_dtype(item) for item in value])\n            elif isinstance(first, tuple):\n                for item in value:\n                    _build_shape(attr_proto.list.shape.add(), item)\n            elif isinstance(first, np.ndarray):\n                for item in value:\n                    _build_tensor(attr_proto.list.tensor.add(), item)\n            else:\n
                raise TypeError('unable to build attribute proto message from type: ' + str(type(first)))\n    else:\n        raise TypeError('unable to build attribute proto message from type: ' + str(type(value)))\n\n    return attr_proto\n\n\ndef _build_output_shapes(attr_proto, output_shapes):\n    for item in output_shapes:\n        _build_shape(attr_proto.list.shape.add(), item)\n\n\ndef _tensor_name(tensor):\n    name = tensor.producer.name\n    idx = tensor.producer.outputs.index(tensor)\n    return name + ':' + str(idx) if idx > 0 else name\n\n\ndef _custom_attribs(operation):\n    attribs = {'_$' + key + '$_': value for key, value in six.iteritems(operation.attribs)}\n    attribs['token'] = operation.type\n    attribs['Tin'] = [tensor.dtype for tensor in operation.inputs]\n    attribs['Tout'] = [tensor.dtype for tensor in operation.outputs]\n    return attribs\n\n\ndef _build_node(node_def, operation):\n    node_def.op = operation.type if not operation.custom else 'PyFunc'\n    node_def.name = operation.name\n    node_def.input.extend([_tensor_name(tensor) for tensor in operation.inputs])\n\n    attribs = operation.attribs if not operation.custom else _custom_attribs(operation)\n\n    output_shapes = attribs.get('_output_shapes')\n    if output_shapes is not None:\n        _build_output_shapes(node_def.attr['_output_shapes'], output_shapes)\n        del attribs['_output_shapes']\n    else:\n        _build_output_shapes(node_def.attr['_output_shapes'],\n                             [tensor.shape for tensor in operation.outputs])\n\n    for name, value in attribs.items():\n        _build_attribute(node_def.attr[name], value)\n\n    return node_def\n\n\ndef build_graphdef(graph):\n    graph_def = GraphDef()\n    for operation in graph.operations:\n        node_def = graph_def.node.add()\n        _build_node(node_def, operation)\n    return graph_def\n\n\ndef write_graphdef(graph, filename):\n    graph_def = build_graphdef(graph)\n\n    with open(filename, 'wb') as file:\n
        file.write(graph_def.SerializeToString())\n\n\nclass Writer(object):\n\n    def __call__(self, graph, filename):\n        return write_graphdef(graph, filename)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .reader import Reader\nfrom .writer import Writer\nfrom .helpers import CustomOptionsKey\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/AbsOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass AbsOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsAbsOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = AbsOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def AbsOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # AbsOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef AbsOptionsStart(builder): builder.StartObject(0)\ndef AbsOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ActivationFunctionType.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass ActivationFunctionType(object):\n    NONE = 0\n    RELU = 1\n    RELU_N1_TO_1 = 2\n    RELU6 = 3\n    TANH = 4\n    SIGN_BIT = 5\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/AddNOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass AddNOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsAddNOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = AddNOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def AddNOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # AddNOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef AddNOptionsStart(builder): builder.StartObject(0)\ndef AddNOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/AddOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass AddOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsAddOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = AddOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def AddOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # AddOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # AddOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef AddOptionsStart(builder): builder.StartObject(1)\ndef AddOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef AddOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ArgMaxOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ArgMaxOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsArgMaxOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ArgMaxOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ArgMaxOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ArgMaxOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ArgMaxOptions\n    def OutputType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef ArgMaxOptionsStart(builder): builder.StartObject(1)\ndef ArgMaxOptionsAddOutputType(builder, outputType): builder.PrependInt8Slot(0, outputType, 0)\ndef ArgMaxOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ArgMinOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ArgMinOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsArgMinOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ArgMinOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ArgMinOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ArgMinOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ArgMinOptions\n    def OutputType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef ArgMinOptionsStart(builder): builder.StartObject(1)\ndef ArgMinOptionsAddOutputType(builder, outputType): builder.PrependInt8Slot(0, outputType, 0)\ndef ArgMinOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/BatchMatMulOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass BatchMatMulOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsBatchMatMulOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = BatchMatMulOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def BatchMatMulOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # BatchMatMulOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # BatchMatMulOptions\n    def AdjX(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # BatchMatMulOptions\n    def AdjY(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef BatchMatMulOptionsStart(builder): builder.StartObject(2)\ndef BatchMatMulOptionsAddAdjX(builder, adjX): builder.PrependBoolSlot(0, adjX, 0)\ndef BatchMatMulOptionsAddAdjY(builder, adjY): builder.PrependBoolSlot(1, adjY, 0)\ndef BatchMatMulOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/BatchToSpaceNDOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass BatchToSpaceNDOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsBatchToSpaceNDOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = BatchToSpaceNDOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def BatchToSpaceNDOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # BatchToSpaceNDOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef BatchToSpaceNDOptionsStart(builder): builder.StartObject(0)\ndef BatchToSpaceNDOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/BidirectionalSequenceLSTMOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass BidirectionalSequenceLSTMOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsBidirectionalSequenceLSTMOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = BidirectionalSequenceLSTMOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def BidirectionalSequenceLSTMOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # BidirectionalSequenceLSTMOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # BidirectionalSequenceLSTMOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # BidirectionalSequenceLSTMOptions\n    def CellClip(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # BidirectionalSequenceLSTMOptions\n    def ProjClip(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # BidirectionalSequenceLSTMOptions\n    def MergeOutputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return 
bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # BidirectionalSequenceLSTMOptions\n    def TimeMajor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return True\n\n    # BidirectionalSequenceLSTMOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef BidirectionalSequenceLSTMOptionsStart(builder): builder.StartObject(6)\ndef BidirectionalSequenceLSTMOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef BidirectionalSequenceLSTMOptionsAddCellClip(builder, cellClip): builder.PrependFloat32Slot(1, cellClip, 0.0)\ndef BidirectionalSequenceLSTMOptionsAddProjClip(builder, projClip): builder.PrependFloat32Slot(2, projClip, 0.0)\ndef BidirectionalSequenceLSTMOptionsAddMergeOutputs(builder, mergeOutputs): builder.PrependBoolSlot(3, mergeOutputs, 0)\ndef BidirectionalSequenceLSTMOptionsAddTimeMajor(builder, timeMajor): builder.PrependBoolSlot(4, timeMajor, 1)\ndef BidirectionalSequenceLSTMOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(5, asymmetricQuantizeInputs, 0)\ndef BidirectionalSequenceLSTMOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/BidirectionalSequenceRNNOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass BidirectionalSequenceRNNOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsBidirectionalSequenceRNNOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = BidirectionalSequenceRNNOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def BidirectionalSequenceRNNOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # BidirectionalSequenceRNNOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # BidirectionalSequenceRNNOptions\n    def TimeMajor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # BidirectionalSequenceRNNOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # BidirectionalSequenceRNNOptions\n    def MergeOutputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # BidirectionalSequenceRNNOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return 
bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef BidirectionalSequenceRNNOptionsStart(builder): builder.StartObject(4)\ndef BidirectionalSequenceRNNOptionsAddTimeMajor(builder, timeMajor): builder.PrependBoolSlot(0, timeMajor, 0)\ndef BidirectionalSequenceRNNOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(1, fusedActivationFunction, 0)\ndef BidirectionalSequenceRNNOptionsAddMergeOutputs(builder, mergeOutputs): builder.PrependBoolSlot(2, mergeOutputs, 0)\ndef BidirectionalSequenceRNNOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(3, asymmetricQuantizeInputs, 0)\ndef BidirectionalSequenceRNNOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Buffer.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Buffer(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsBuffer(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Buffer()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def BufferBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Buffer\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Buffer\n    def Data(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 1))\n        return 0\n\n    # Buffer\n    def DataAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Uint8Flags, o)\n        return 0\n\n    # Buffer\n    def DataLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Buffer\n    def DataIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef BufferStart(builder): builder.StartObject(1)\ndef BufferAddData(builder, data): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(data), 0)\ndef BufferStartDataVector(builder, numElems): return builder.StartVector(1, numElems, 1)\ndef 
BufferEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/BuiltinOperator.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass BuiltinOperator(object):\n    ADD = 0\n    AVERAGE_POOL_2D = 1\n    CONCATENATION = 2\n    CONV_2D = 3\n    DEPTHWISE_CONV_2D = 4\n    DEPTH_TO_SPACE = 5\n    DEQUANTIZE = 6\n    EMBEDDING_LOOKUP = 7\n    FLOOR = 8\n    FULLY_CONNECTED = 9\n    HASHTABLE_LOOKUP = 10\n    L2_NORMALIZATION = 11\n    L2_POOL_2D = 12\n    LOCAL_RESPONSE_NORMALIZATION = 13\n    LOGISTIC = 14\n    LSH_PROJECTION = 15\n    LSTM = 16\n    MAX_POOL_2D = 17\n    MUL = 18\n    RELU = 19\n    RELU_N1_TO_1 = 20\n    RELU6 = 21\n    RESHAPE = 22\n    RESIZE_BILINEAR = 23\n    RNN = 24\n    SOFTMAX = 25\n    SPACE_TO_DEPTH = 26\n    SVDF = 27\n    TANH = 28\n    CONCAT_EMBEDDINGS = 29\n    SKIP_GRAM = 30\n    CALL = 31\n    CUSTOM = 32\n    EMBEDDING_LOOKUP_SPARSE = 33\n    PAD = 34\n    UNIDIRECTIONAL_SEQUENCE_RNN = 35\n    GATHER = 36\n    BATCH_TO_SPACE_ND = 37\n    SPACE_TO_BATCH_ND = 38\n    TRANSPOSE = 39\n    MEAN = 40\n    SUB = 41\n    DIV = 42\n    SQUEEZE = 43\n    UNIDIRECTIONAL_SEQUENCE_LSTM = 44\n    STRIDED_SLICE = 45\n    BIDIRECTIONAL_SEQUENCE_RNN = 46\n    EXP = 47\n    TOPK_V2 = 48\n    SPLIT = 49\n    LOG_SOFTMAX = 50\n    DELEGATE = 51\n    BIDIRECTIONAL_SEQUENCE_LSTM = 52\n    CAST = 53\n    PRELU = 54\n    MAXIMUM = 55\n    ARG_MAX = 56\n    MINIMUM = 57\n    LESS = 58\n    NEG = 59\n    PADV2 = 60\n    GREATER = 61\n    GREATER_EQUAL = 62\n    LESS_EQUAL = 63\n    SELECT = 64\n    SLICE = 65\n    SIN = 66\n    TRANSPOSE_CONV = 67\n    SPARSE_TO_DENSE = 68\n    TILE = 69\n    EXPAND_DIMS = 70\n    EQUAL = 71\n    NOT_EQUAL = 72\n    LOG = 73\n    SUM = 74\n    SQRT = 75\n    RSQRT = 76\n    SHAPE = 77\n    POW = 78\n    ARG_MIN = 79\n    FAKE_QUANT = 80\n    REDUCE_PROD = 81\n    REDUCE_MAX = 82\n    PACK = 83\n    LOGICAL_OR = 84\n    ONE_HOT = 85\n    LOGICAL_AND = 86\n    LOGICAL_NOT = 87\n    UNPACK = 88\n    REDUCE_MIN = 89\n    FLOOR_DIV = 90\n    
REDUCE_ANY = 91\n    SQUARE = 92\n    ZEROS_LIKE = 93\n    FILL = 94\n    FLOOR_MOD = 95\n    RANGE = 96\n    RESIZE_NEAREST_NEIGHBOR = 97\n    LEAKY_RELU = 98\n    SQUARED_DIFFERENCE = 99\n    MIRROR_PAD = 100\n    ABS = 101\n    SPLIT_V = 102\n    UNIQUE = 103\n    CEIL = 104\n    REVERSE_V2 = 105\n    ADD_N = 106\n    GATHER_ND = 107\n    COS = 108\n    WHERE = 109\n    RANK = 110\n    ELU = 111\n    REVERSE_SEQUENCE = 112\n    MATRIX_DIAG = 113\n    QUANTIZE = 114\n    MATRIX_SET_DIAG = 115\n    ROUND = 116\n    HARD_SWISH = 117\n    IF = 118\n    WHILE = 119\n    NON_MAX_SUPPRESSION_V4 = 120\n    NON_MAX_SUPPRESSION_V5 = 121\n    SCATTER_ND = 122\n    SELECT_V2 = 123\n    DENSIFY = 124\n    SEGMENT_SUM = 125\n    BATCH_MATMUL = 126\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/BuiltinOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass BuiltinOptions(object):\n    NONE = 0\n    Conv2DOptions = 1\n    DepthwiseConv2DOptions = 2\n    ConcatEmbeddingsOptions = 3\n    LSHProjectionOptions = 4\n    Pool2DOptions = 5\n    SVDFOptions = 6\n    RNNOptions = 7\n    FullyConnectedOptions = 8\n    SoftmaxOptions = 9\n    ConcatenationOptions = 10\n    AddOptions = 11\n    L2NormOptions = 12\n    LocalResponseNormalizationOptions = 13\n    LSTMOptions = 14\n    ResizeBilinearOptions = 15\n    CallOptions = 16\n    ReshapeOptions = 17\n    SkipGramOptions = 18\n    SpaceToDepthOptions = 19\n    EmbeddingLookupSparseOptions = 20\n    MulOptions = 21\n    PadOptions = 22\n    GatherOptions = 23\n    BatchToSpaceNDOptions = 24\n    SpaceToBatchNDOptions = 25\n    TransposeOptions = 26\n    ReducerOptions = 27\n    SubOptions = 28\n    DivOptions = 29\n    SqueezeOptions = 30\n    SequenceRNNOptions = 31\n    StridedSliceOptions = 32\n    ExpOptions = 33\n    TopKV2Options = 34\n    SplitOptions = 35\n    LogSoftmaxOptions = 36\n    CastOptions = 37\n    DequantizeOptions = 38\n    MaximumMinimumOptions = 39\n    ArgMaxOptions = 40\n    LessOptions = 41\n    NegOptions = 42\n    PadV2Options = 43\n    GreaterOptions = 44\n    GreaterEqualOptions = 45\n    LessEqualOptions = 46\n    SelectOptions = 47\n    SliceOptions = 48\n    TransposeConvOptions = 49\n    SparseToDenseOptions = 50\n    TileOptions = 51\n    ExpandDimsOptions = 52\n    EqualOptions = 53\n    NotEqualOptions = 54\n    ShapeOptions = 55\n    PowOptions = 56\n    ArgMinOptions = 57\n    FakeQuantOptions = 58\n    PackOptions = 59\n    LogicalOrOptions = 60\n    OneHotOptions = 61\n    LogicalAndOptions = 62\n    LogicalNotOptions = 63\n    UnpackOptions = 64\n    FloorDivOptions = 65\n    SquareOptions = 66\n    ZerosLikeOptions = 67\n    FillOptions = 68\n    BidirectionalSequenceLSTMOptions = 69\n    
BidirectionalSequenceRNNOptions = 70\n    UnidirectionalSequenceLSTMOptions = 71\n    FloorModOptions = 72\n    RangeOptions = 73\n    ResizeNearestNeighborOptions = 74\n    LeakyReluOptions = 75\n    SquaredDifferenceOptions = 76\n    MirrorPadOptions = 77\n    AbsOptions = 78\n    SplitVOptions = 79\n    UniqueOptions = 80\n    ReverseV2Options = 81\n    AddNOptions = 82\n    GatherNdOptions = 83\n    CosOptions = 84\n    WhereOptions = 85\n    RankOptions = 86\n    ReverseSequenceOptions = 87\n    MatrixDiagOptions = 88\n    QuantizeOptions = 89\n    MatrixSetDiagOptions = 90\n    HardSwishOptions = 91\n    IfOptions = 92\n    WhileOptions = 93\n    DepthToSpaceOptions = 94\n    NonMaxSuppressionV4Options = 95\n    NonMaxSuppressionV5Options = 96\n    ScatterNdOptions = 97\n    SelectV2Options = 98\n    DensifyOptions = 99\n    SegmentSumOptions = 100\n    BatchMatMulOptions = 101\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/CallOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass CallOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsCallOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = CallOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def CallOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # CallOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # CallOptions\n    def Subgraph(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint32Flags, o + self._tab.Pos)\n        return 0\n\ndef CallOptionsStart(builder): builder.StartObject(1)\ndef CallOptionsAddSubgraph(builder, subgraph): builder.PrependUint32Slot(0, subgraph, 0)\ndef CallOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/CastOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass CastOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsCastOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = CastOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def CastOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # CastOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # CastOptions\n    def InDataType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # CastOptions\n    def OutDataType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef CastOptionsStart(builder): builder.StartObject(2)\ndef CastOptionsAddInDataType(builder, inDataType): builder.PrependInt8Slot(0, inDataType, 0)\ndef CastOptionsAddOutDataType(builder, outDataType): builder.PrependInt8Slot(1, outDataType, 0)\ndef CastOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/CombinerType.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass CombinerType(object):\n    SUM = 0\n    MEAN = 1\n    SQRTN = 2\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ConcatEmbeddingsOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ConcatEmbeddingsOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsConcatEmbeddingsOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ConcatEmbeddingsOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ConcatEmbeddingsOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ConcatEmbeddingsOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ConcatEmbeddingsOptions\n    def NumChannels(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def NumColumnsPerChannel(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def NumColumnsPerChannelAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def NumColumnsPerChannelLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def NumColumnsPerChannelIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        return o == 0\n\n    # ConcatEmbeddingsOptions\n    def EmbeddingDimPerChannel(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def EmbeddingDimPerChannelAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def EmbeddingDimPerChannelLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # ConcatEmbeddingsOptions\n    def EmbeddingDimPerChannelIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        return o == 0\n\ndef ConcatEmbeddingsOptionsStart(builder): builder.StartObject(3)\ndef ConcatEmbeddingsOptionsAddNumChannels(builder, numChannels): builder.PrependInt32Slot(0, numChannels, 0)\ndef ConcatEmbeddingsOptionsAddNumColumnsPerChannel(builder, numColumnsPerChannel): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(numColumnsPerChannel), 0)\ndef ConcatEmbeddingsOptionsStartNumColumnsPerChannelVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ConcatEmbeddingsOptionsAddEmbeddingDimPerChannel(builder, embeddingDimPerChannel): builder.PrependUOffsetTRelativeSlot(2, flatbuffers.number_types.UOffsetTFlags.py_type(embeddingDimPerChannel), 0)\ndef ConcatEmbeddingsOptionsStartEmbeddingDimPerChannelVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ConcatEmbeddingsOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ConcatenationOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ConcatenationOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsConcatenationOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ConcatenationOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ConcatenationOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ConcatenationOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ConcatenationOptions\n    def Axis(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # ConcatenationOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef ConcatenationOptionsStart(builder): builder.StartObject(2)\ndef ConcatenationOptionsAddAxis(builder, axis): builder.PrependInt32Slot(0, axis, 0)\ndef ConcatenationOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(1, fusedActivationFunction, 0)\ndef ConcatenationOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Conv2DOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Conv2DOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsConv2DOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Conv2DOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def Conv2DOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Conv2DOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Conv2DOptions\n    def Padding(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # Conv2DOptions\n    def StrideW(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # Conv2DOptions\n    def StrideH(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # Conv2DOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # Conv2DOptions\n    def DilationWFactor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 1\n\n    # Conv2DOptions\n    def DilationHFactor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 1\n\ndef Conv2DOptionsStart(builder): builder.StartObject(6)\ndef Conv2DOptionsAddPadding(builder, padding): builder.PrependInt8Slot(0, padding, 0)\ndef Conv2DOptionsAddStrideW(builder, strideW): builder.PrependInt32Slot(1, strideW, 0)\ndef Conv2DOptionsAddStrideH(builder, strideH): builder.PrependInt32Slot(2, strideH, 0)\ndef Conv2DOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(3, fusedActivationFunction, 0)\ndef Conv2DOptionsAddDilationWFactor(builder, dilationWFactor): builder.PrependInt32Slot(4, dilationWFactor, 1)\ndef Conv2DOptionsAddDilationHFactor(builder, dilationHFactor): builder.PrependInt32Slot(5, dilationHFactor, 1)\ndef Conv2DOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/CosOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass CosOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsCosOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = CosOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def CosOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # CosOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef CosOptionsStart(builder): builder.StartObject(0)\ndef CosOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/CustomOptionsFormat.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass CustomOptionsFormat(object):\n    FLEXBUFFERS = 0\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/CustomQuantization.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass CustomQuantization(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsCustomQuantization(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = CustomQuantization()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def CustomQuantizationBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # CustomQuantization\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # CustomQuantization\n    def Custom(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 1))\n        return 0\n\n    # CustomQuantization\n    def CustomAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Uint8Flags, o)\n        return 0\n\n    # CustomQuantization\n    def CustomLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # CustomQuantization\n    def CustomIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef CustomQuantizationStart(builder): builder.StartObject(1)\ndef CustomQuantizationAddCustom(builder, custom): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(custom), 0)\ndef CustomQuantizationStartCustomVector(builder, numElems): return builder.StartVector(1, numElems, 1)\ndef CustomQuantizationEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DensifyOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass DensifyOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsDensifyOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = DensifyOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def DensifyOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # DensifyOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef DensifyOptionsStart(builder): builder.StartObject(0)\ndef DensifyOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DepthToSpaceOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass DepthToSpaceOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsDepthToSpaceOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = DepthToSpaceOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def DepthToSpaceOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # DepthToSpaceOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # DepthToSpaceOptions\n    def BlockSize(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef DepthToSpaceOptionsStart(builder): builder.StartObject(1)\ndef DepthToSpaceOptionsAddBlockSize(builder, blockSize): builder.PrependInt32Slot(0, blockSize, 0)\ndef DepthToSpaceOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DepthwiseConv2DOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass DepthwiseConv2DOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsDepthwiseConv2DOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = DepthwiseConv2DOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def DepthwiseConv2DOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # DepthwiseConv2DOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # DepthwiseConv2DOptions\n    def Padding(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # DepthwiseConv2DOptions\n    def StrideW(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # DepthwiseConv2DOptions\n    def StrideH(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # DepthwiseConv2DOptions\n    def DepthMultiplier(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # DepthwiseConv2DOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # DepthwiseConv2DOptions\n    def DilationWFactor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 1\n\n    # DepthwiseConv2DOptions\n    def DilationHFactor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 1\n\ndef DepthwiseConv2DOptionsStart(builder): builder.StartObject(7)\ndef DepthwiseConv2DOptionsAddPadding(builder, padding): builder.PrependInt8Slot(0, padding, 0)\ndef DepthwiseConv2DOptionsAddStrideW(builder, strideW): builder.PrependInt32Slot(1, strideW, 0)\ndef DepthwiseConv2DOptionsAddStrideH(builder, strideH): builder.PrependInt32Slot(2, strideH, 0)\ndef DepthwiseConv2DOptionsAddDepthMultiplier(builder, depthMultiplier): builder.PrependInt32Slot(3, depthMultiplier, 0)\ndef DepthwiseConv2DOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(4, fusedActivationFunction, 0)\ndef DepthwiseConv2DOptionsAddDilationWFactor(builder, dilationWFactor): builder.PrependInt32Slot(5, dilationWFactor, 1)\ndef DepthwiseConv2DOptionsAddDilationHFactor(builder, dilationHFactor): builder.PrependInt32Slot(6, dilationHFactor, 1)\ndef DepthwiseConv2DOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DequantizeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass DequantizeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsDequantizeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = DequantizeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def DequantizeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # DequantizeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef DequantizeOptionsStart(builder): builder.StartObject(0)\ndef DequantizeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DimensionMetadata.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass DimensionMetadata(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsDimensionMetadata(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = DimensionMetadata()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def DimensionMetadataBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # DimensionMetadata\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # DimensionMetadata\n    def Format(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # DimensionMetadata\n    def DenseSize(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # DimensionMetadata\n    def ArraySegmentsType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, o + self._tab.Pos)\n        return 0\n\n    # DimensionMetadata\n    def ArraySegments(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            from flatbuffers.table import Table\n            obj = Table(bytearray(), 0)\n            self._tab.Union(obj, o)\n            return obj\n        return None\n\n    # DimensionMetadata\n    def ArrayIndicesType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, o + self._tab.Pos)\n        return 0\n\n    # DimensionMetadata\n    def ArrayIndices(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            from flatbuffers.table import Table\n            obj = Table(bytearray(), 0)\n            self._tab.Union(obj, o)\n            return obj\n        return None\n\ndef DimensionMetadataStart(builder): builder.StartObject(6)\ndef DimensionMetadataAddFormat(builder, format): builder.PrependInt8Slot(0, format, 0)\ndef DimensionMetadataAddDenseSize(builder, denseSize): builder.PrependInt32Slot(1, denseSize, 0)\ndef DimensionMetadataAddArraySegmentsType(builder, arraySegmentsType): builder.PrependUint8Slot(2, arraySegmentsType, 0)\ndef DimensionMetadataAddArraySegments(builder, arraySegments): builder.PrependUOffsetTRelativeSlot(3, flatbuffers.number_types.UOffsetTFlags.py_type(arraySegments), 0)\ndef DimensionMetadataAddArrayIndicesType(builder, arrayIndicesType): builder.PrependUint8Slot(4, arrayIndicesType, 0)\ndef DimensionMetadataAddArrayIndices(builder, arrayIndices): builder.PrependUOffsetTRelativeSlot(5, flatbuffers.number_types.UOffsetTFlags.py_type(arrayIndices), 0)\ndef DimensionMetadataEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DimensionType.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass DimensionType(object):\n    DENSE = 0\n    SPARSE_CSR = 1\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/DivOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass DivOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsDivOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = DivOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def DivOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # DivOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # DivOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef DivOptionsStart(builder): builder.StartObject(1)\ndef DivOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef DivOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/EmbeddingLookupSparseOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass EmbeddingLookupSparseOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsEmbeddingLookupSparseOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = EmbeddingLookupSparseOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def EmbeddingLookupSparseOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # EmbeddingLookupSparseOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # EmbeddingLookupSparseOptions\n    def Combiner(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef EmbeddingLookupSparseOptionsStart(builder): builder.StartObject(1)\ndef EmbeddingLookupSparseOptionsAddCombiner(builder, combiner): builder.PrependInt8Slot(0, combiner, 0)\ndef EmbeddingLookupSparseOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/EqualOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass EqualOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsEqualOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = EqualOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def EqualOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # EqualOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef EqualOptionsStart(builder): builder.StartObject(0)\ndef EqualOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ExpOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ExpOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsExpOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ExpOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ExpOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ExpOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef ExpOptionsStart(builder): builder.StartObject(0)\ndef ExpOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ExpandDimsOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ExpandDimsOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsExpandDimsOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ExpandDimsOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ExpandDimsOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ExpandDimsOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef ExpandDimsOptionsStart(builder): builder.StartObject(0)\ndef ExpandDimsOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/FakeQuantOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass FakeQuantOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsFakeQuantOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = FakeQuantOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def FakeQuantOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # FakeQuantOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # FakeQuantOptions\n    def Min(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # FakeQuantOptions\n    def Max(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # FakeQuantOptions\n    def NumBits(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # FakeQuantOptions\n    def NarrowRange(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef FakeQuantOptionsStart(builder): builder.StartObject(4)\ndef FakeQuantOptionsAddMin(builder, min): builder.PrependFloat32Slot(0, min, 0.0)\ndef FakeQuantOptionsAddMax(builder, max): builder.PrependFloat32Slot(1, max, 0.0)\ndef FakeQuantOptionsAddNumBits(builder, numBits): builder.PrependInt32Slot(2, numBits, 0)\ndef FakeQuantOptionsAddNarrowRange(builder, narrowRange): builder.PrependBoolSlot(3, narrowRange, 0)\ndef FakeQuantOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/FillOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass FillOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsFillOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = FillOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def FillOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # FillOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef FillOptionsStart(builder): builder.StartObject(0)\ndef FillOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/FloorDivOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass FloorDivOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsFloorDivOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = FloorDivOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def FloorDivOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # FloorDivOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef FloorDivOptionsStart(builder): builder.StartObject(0)\ndef FloorDivOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/FloorModOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass FloorModOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsFloorModOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = FloorModOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def FloorModOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # FloorModOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef FloorModOptionsStart(builder): builder.StartObject(0)\ndef FloorModOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/FullyConnectedOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass FullyConnectedOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsFullyConnectedOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = FullyConnectedOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def FullyConnectedOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # FullyConnectedOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # FullyConnectedOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # FullyConnectedOptions\n    def WeightsFormat(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # FullyConnectedOptions\n    def KeepNumDims(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # FullyConnectedOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef 
FullyConnectedOptionsStart(builder): builder.StartObject(4)\ndef FullyConnectedOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef FullyConnectedOptionsAddWeightsFormat(builder, weightsFormat): builder.PrependInt8Slot(1, weightsFormat, 0)\ndef FullyConnectedOptionsAddKeepNumDims(builder, keepNumDims): builder.PrependBoolSlot(2, keepNumDims, 0)\ndef FullyConnectedOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(3, asymmetricQuantizeInputs, 0)\ndef FullyConnectedOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/FullyConnectedOptionsWeightsFormat.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass FullyConnectedOptionsWeightsFormat(object):\n    DEFAULT = 0\n    SHUFFLED4x16INT8 = 1\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/GatherNdOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass GatherNdOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsGatherNdOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = GatherNdOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def GatherNdOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # GatherNdOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef GatherNdOptionsStart(builder): builder.StartObject(0)\ndef GatherNdOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/GatherOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass GatherOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsGatherOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = GatherOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def GatherOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # GatherOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # GatherOptions\n    def Axis(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef GatherOptionsStart(builder): builder.StartObject(1)\ndef GatherOptionsAddAxis(builder, axis): builder.PrependInt32Slot(0, axis, 0)\ndef GatherOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/GreaterEqualOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass GreaterEqualOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsGreaterEqualOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = GreaterEqualOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def GreaterEqualOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # GreaterEqualOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef GreaterEqualOptionsStart(builder): builder.StartObject(0)\ndef GreaterEqualOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/GreaterOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass GreaterOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsGreaterOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = GreaterOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def GreaterOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # GreaterOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef GreaterOptionsStart(builder): builder.StartObject(0)\ndef GreaterOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/HardSwishOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass HardSwishOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsHardSwishOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = HardSwishOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def HardSwishOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # HardSwishOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef HardSwishOptionsStart(builder): builder.StartObject(0)\ndef HardSwishOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/IfOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass IfOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsIfOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = IfOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def IfOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # IfOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # IfOptions\n    def ThenSubgraphIndex(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # IfOptions\n    def ElseSubgraphIndex(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef IfOptionsStart(builder): builder.StartObject(2)\ndef IfOptionsAddThenSubgraphIndex(builder, thenSubgraphIndex): builder.PrependInt32Slot(0, thenSubgraphIndex, 0)\ndef IfOptionsAddElseSubgraphIndex(builder, elseSubgraphIndex): builder.PrependInt32Slot(1, elseSubgraphIndex, 0)\ndef IfOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Int32Vector.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Int32Vector(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsInt32Vector(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Int32Vector()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def Int32VectorBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Int32Vector\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Int32Vector\n    def Values(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Int32Vector\n    def ValuesAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Int32Vector\n    def ValuesLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Int32Vector\n    def ValuesIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef Int32VectorStart(builder): builder.StartObject(1)\ndef Int32VectorAddValues(builder, values): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(values), 0)\ndef 
Int32VectorStartValuesVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef Int32VectorEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/L2NormOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass L2NormOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsL2NormOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = L2NormOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def L2NormOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # L2NormOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # L2NormOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef L2NormOptionsStart(builder): builder.StartObject(1)\ndef L2NormOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef L2NormOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LSHProjectionOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LSHProjectionOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLSHProjectionOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LSHProjectionOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LSHProjectionOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LSHProjectionOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # LSHProjectionOptions\n    def Type(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef LSHProjectionOptionsStart(builder): builder.StartObject(1)\ndef LSHProjectionOptionsAddType(builder, type): builder.PrependInt8Slot(0, type, 0)\ndef LSHProjectionOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LSHProjectionType.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass LSHProjectionType(object):\n    UNKNOWN = 0\n    SPARSE = 1\n    DENSE = 2\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LSTMKernelType.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass LSTMKernelType(object):\n    FULL = 0\n    BASIC = 1\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LSTMOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LSTMOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLSTMOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LSTMOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LSTMOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LSTMOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # LSTMOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # LSTMOptions\n    def CellClip(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # LSTMOptions\n    def ProjClip(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # LSTMOptions\n    def KernelType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # LSTMOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n      
  if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef LSTMOptionsStart(builder): builder.StartObject(5)\ndef LSTMOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef LSTMOptionsAddCellClip(builder, cellClip): builder.PrependFloat32Slot(1, cellClip, 0.0)\ndef LSTMOptionsAddProjClip(builder, projClip): builder.PrependFloat32Slot(2, projClip, 0.0)\ndef LSTMOptionsAddKernelType(builder, kernelType): builder.PrependInt8Slot(3, kernelType, 0)\ndef LSTMOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(4, asymmetricQuantizeInputs, 0)\ndef LSTMOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LeakyReluOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LeakyReluOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLeakyReluOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LeakyReluOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LeakyReluOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LeakyReluOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # LeakyReluOptions\n    def Alpha(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\ndef LeakyReluOptionsStart(builder): builder.StartObject(1)\ndef LeakyReluOptionsAddAlpha(builder, alpha): builder.PrependFloat32Slot(0, alpha, 0.0)\ndef LeakyReluOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LessEqualOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LessEqualOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLessEqualOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LessEqualOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LessEqualOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LessEqualOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef LessEqualOptionsStart(builder): builder.StartObject(0)\ndef LessEqualOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LessOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LessOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLessOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LessOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LessOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LessOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef LessOptionsStart(builder): builder.StartObject(0)\ndef LessOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LocalResponseNormalizationOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LocalResponseNormalizationOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLocalResponseNormalizationOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LocalResponseNormalizationOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LocalResponseNormalizationOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LocalResponseNormalizationOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # LocalResponseNormalizationOptions\n    def Radius(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # LocalResponseNormalizationOptions\n    def Bias(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # LocalResponseNormalizationOptions\n    def Alpha(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # LocalResponseNormalizationOptions\n    def Beta(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + 
self._tab.Pos)\n        return 0.0\n\ndef LocalResponseNormalizationOptionsStart(builder): builder.StartObject(4)\ndef LocalResponseNormalizationOptionsAddRadius(builder, radius): builder.PrependInt32Slot(0, radius, 0)\ndef LocalResponseNormalizationOptionsAddBias(builder, bias): builder.PrependFloat32Slot(1, bias, 0.0)\ndef LocalResponseNormalizationOptionsAddAlpha(builder, alpha): builder.PrependFloat32Slot(2, alpha, 0.0)\ndef LocalResponseNormalizationOptionsAddBeta(builder, beta): builder.PrependFloat32Slot(3, beta, 0.0)\ndef LocalResponseNormalizationOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LogSoftmaxOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LogSoftmaxOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLogSoftmaxOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LogSoftmaxOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LogSoftmaxOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LogSoftmaxOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef LogSoftmaxOptionsStart(builder): builder.StartObject(0)\ndef LogSoftmaxOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LogicalAndOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LogicalAndOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLogicalAndOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LogicalAndOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LogicalAndOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LogicalAndOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef LogicalAndOptionsStart(builder): builder.StartObject(0)\ndef LogicalAndOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LogicalNotOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LogicalNotOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLogicalNotOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LogicalNotOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LogicalNotOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LogicalNotOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef LogicalNotOptionsStart(builder): builder.StartObject(0)\ndef LogicalNotOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/LogicalOrOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass LogicalOrOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsLogicalOrOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = LogicalOrOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def LogicalOrOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # LogicalOrOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef LogicalOrOptionsStart(builder): builder.StartObject(0)\ndef LogicalOrOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/MatrixDiagOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass MatrixDiagOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsMatrixDiagOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = MatrixDiagOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def MatrixDiagOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # MatrixDiagOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef MatrixDiagOptionsStart(builder): builder.StartObject(0)\ndef MatrixDiagOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/MatrixSetDiagOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass MatrixSetDiagOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsMatrixSetDiagOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = MatrixSetDiagOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def MatrixSetDiagOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # MatrixSetDiagOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef MatrixSetDiagOptionsStart(builder): builder.StartObject(0)\ndef MatrixSetDiagOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/MaximumMinimumOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass MaximumMinimumOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsMaximumMinimumOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = MaximumMinimumOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def MaximumMinimumOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # MaximumMinimumOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef MaximumMinimumOptionsStart(builder): builder.StartObject(0)\ndef MaximumMinimumOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Metadata.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Metadata(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsMetadata(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Metadata()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def MetadataBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Metadata\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Metadata\n    def Name(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.String(o + self._tab.Pos)\n        return None\n\n    # Metadata\n    def Buffer(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint32Flags, o + self._tab.Pos)\n        return 0\n\ndef MetadataStart(builder): builder.StartObject(2)\ndef MetadataAddName(builder, name): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(name), 0)\ndef MetadataAddBuffer(builder, buffer): builder.PrependUint32Slot(1, buffer, 0)\ndef MetadataEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/MirrorPadMode.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass MirrorPadMode(object):\n    REFLECT = 0\n    SYMMETRIC = 1\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/MirrorPadOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass MirrorPadOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsMirrorPadOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = MirrorPadOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def MirrorPadOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # MirrorPadOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # MirrorPadOptions\n    def Mode(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef MirrorPadOptionsStart(builder): builder.StartObject(1)\ndef MirrorPadOptionsAddMode(builder, mode): builder.PrependInt8Slot(0, mode, 0)\ndef MirrorPadOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Model.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Model(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsModel(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Model()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ModelBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Model\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Model\n    def Version(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint32Flags, o + self._tab.Pos)\n        return 0\n\n    # Model\n    def OperatorCodes(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .OperatorCode import OperatorCode\n            obj = OperatorCode()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # Model\n    def OperatorCodesLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Model\n    def OperatorCodesIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        return o == 0\n\n    # Model\n    def Subgraphs(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if 
o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .SubGraph import SubGraph\n            obj = SubGraph()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # Model\n    def SubgraphsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Model\n    def SubgraphsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        return o == 0\n\n    # Model\n    def Description(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.String(o + self._tab.Pos)\n        return None\n\n    # Model\n    def Buffers(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .Buffer import Buffer\n            obj = Buffer()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # Model\n    def BuffersLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Model\n    def BuffersIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        return o == 0\n\n    # Model\n    def MetadataBuffer(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + 
flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Model\n    def MetadataBufferAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Model\n    def MetadataBufferLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Model\n    def MetadataBufferIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        return o == 0\n\n    # Model\n    def Metadata(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        if o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .Metadata import Metadata\n            obj = Metadata()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # Model\n    def MetadataLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Model\n    def MetadataIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        return o == 0\n\ndef ModelStart(builder): builder.StartObject(7)\ndef ModelAddVersion(builder, version): builder.PrependUint32Slot(0, version, 0)\ndef ModelAddOperatorCodes(builder, operatorCodes): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(operatorCodes), 0)\ndef ModelStartOperatorCodesVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ModelAddSubgraphs(builder, subgraphs): builder.PrependUOffsetTRelativeSlot(2, 
flatbuffers.number_types.UOffsetTFlags.py_type(subgraphs), 0)\ndef ModelStartSubgraphsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ModelAddDescription(builder, description): builder.PrependUOffsetTRelativeSlot(3, flatbuffers.number_types.UOffsetTFlags.py_type(description), 0)\ndef ModelAddBuffers(builder, buffers): builder.PrependUOffsetTRelativeSlot(4, flatbuffers.number_types.UOffsetTFlags.py_type(buffers), 0)\ndef ModelStartBuffersVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ModelAddMetadataBuffer(builder, metadataBuffer): builder.PrependUOffsetTRelativeSlot(5, flatbuffers.number_types.UOffsetTFlags.py_type(metadataBuffer), 0)\ndef ModelStartMetadataBufferVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ModelAddMetadata(builder, metadata): builder.PrependUOffsetTRelativeSlot(6, flatbuffers.number_types.UOffsetTFlags.py_type(metadata), 0)\ndef ModelStartMetadataVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ModelEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/MulOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass MulOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsMulOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = MulOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def MulOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # MulOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # MulOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef MulOptionsStart(builder): builder.StartObject(1)\ndef MulOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef MulOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/NegOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass NegOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsNegOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = NegOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def NegOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # NegOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef NegOptionsStart(builder): builder.StartObject(0)\ndef NegOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/NonMaxSuppressionV4Options.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass NonMaxSuppressionV4Options(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsNonMaxSuppressionV4Options(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = NonMaxSuppressionV4Options()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def NonMaxSuppressionV4OptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # NonMaxSuppressionV4Options\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef NonMaxSuppressionV4OptionsStart(builder): builder.StartObject(0)\ndef NonMaxSuppressionV4OptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/NonMaxSuppressionV5Options.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass NonMaxSuppressionV5Options(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsNonMaxSuppressionV5Options(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = NonMaxSuppressionV5Options()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def NonMaxSuppressionV5OptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # NonMaxSuppressionV5Options\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef NonMaxSuppressionV5OptionsStart(builder): builder.StartObject(0)\ndef NonMaxSuppressionV5OptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/NotEqualOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass NotEqualOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsNotEqualOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = NotEqualOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def NotEqualOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # NotEqualOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef NotEqualOptionsStart(builder): builder.StartObject(0)\ndef NotEqualOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/OneHotOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass OneHotOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsOneHotOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = OneHotOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def OneHotOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # OneHotOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # OneHotOptions\n    def Axis(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef OneHotOptionsStart(builder): builder.StartObject(1)\ndef OneHotOptionsAddAxis(builder, axis): builder.PrependInt32Slot(0, axis, 0)\ndef OneHotOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Operator.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Operator(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsOperator(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Operator()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def OperatorBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Operator\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Operator\n    def OpcodeIndex(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint32Flags, o + self._tab.Pos)\n        return 0\n\n    # Operator\n    def Inputs(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Operator\n    def InputsAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Operator\n    def InputsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Operator\n    def InputsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        return o == 0\n\n    # 
Operator\n    def Outputs(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Operator\n    def OutputsAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Operator\n    def OutputsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Operator\n    def OutputsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        return o == 0\n\n    # Operator\n    def BuiltinOptionsType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, o + self._tab.Pos)\n        return 0\n\n    # Operator\n    def BuiltinOptions(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            from flatbuffers.table import Table\n            obj = Table(bytearray(), 0)\n            self._tab.Union(obj, o)\n            return obj\n        return None\n\n    # Operator\n    def CustomOptions(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 1))\n        return 0\n\n    # Operator\n    def CustomOptionsAsNumpy(self):\n        o = 
flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Uint8Flags, o)\n        return 0\n\n    # Operator\n    def CustomOptionsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Operator\n    def CustomOptionsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        return o == 0\n\n    # Operator\n    def CustomOptionsFormat(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # Operator\n    def MutatingVariableInputs(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.BoolFlags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 1))\n        return 0\n\n    # Operator\n    def MutatingVariableInputsAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.BoolFlags, o)\n        return 0\n\n    # Operator\n    def MutatingVariableInputsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Operator\n    def MutatingVariableInputsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        return o == 0\n\n    # Operator\n    def Intermediates(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(20))\n        if o != 0:\n   
         a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Operator\n    def IntermediatesAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(20))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Operator\n    def IntermediatesLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(20))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Operator\n    def IntermediatesIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(20))\n        return o == 0\n\ndef OperatorStart(builder): builder.StartObject(9)\ndef OperatorAddOpcodeIndex(builder, opcodeIndex): builder.PrependUint32Slot(0, opcodeIndex, 0)\ndef OperatorAddInputs(builder, inputs): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(inputs), 0)\ndef OperatorStartInputsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef OperatorAddOutputs(builder, outputs): builder.PrependUOffsetTRelativeSlot(2, flatbuffers.number_types.UOffsetTFlags.py_type(outputs), 0)\ndef OperatorStartOutputsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef OperatorAddBuiltinOptionsType(builder, builtinOptionsType): builder.PrependUint8Slot(3, builtinOptionsType, 0)\ndef OperatorAddBuiltinOptions(builder, builtinOptions): builder.PrependUOffsetTRelativeSlot(4, flatbuffers.number_types.UOffsetTFlags.py_type(builtinOptions), 0)\ndef OperatorAddCustomOptions(builder, customOptions): builder.PrependUOffsetTRelativeSlot(5, flatbuffers.number_types.UOffsetTFlags.py_type(customOptions), 0)\ndef OperatorStartCustomOptionsVector(builder, numElems): return builder.StartVector(1, numElems, 1)\ndef 
OperatorAddCustomOptionsFormat(builder, customOptionsFormat): builder.PrependInt8Slot(6, customOptionsFormat, 0)\ndef OperatorAddMutatingVariableInputs(builder, mutatingVariableInputs): builder.PrependUOffsetTRelativeSlot(7, flatbuffers.number_types.UOffsetTFlags.py_type(mutatingVariableInputs), 0)\ndef OperatorStartMutatingVariableInputsVector(builder, numElems): return builder.StartVector(1, numElems, 1)\ndef OperatorAddIntermediates(builder, intermediates): builder.PrependUOffsetTRelativeSlot(8, flatbuffers.number_types.UOffsetTFlags.py_type(intermediates), 0)\ndef OperatorStartIntermediatesVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef OperatorEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/OperatorCode.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass OperatorCode(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsOperatorCode(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = OperatorCode()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def OperatorCodeBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # OperatorCode\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # OperatorCode\n    def BuiltinCode(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # OperatorCode\n    def CustomCode(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.String(o + self._tab.Pos)\n        return None\n\n    # OperatorCode\n    def Version(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 1\n\ndef OperatorCodeStart(builder): builder.StartObject(3)\ndef OperatorCodeAddBuiltinCode(builder, builtinCode): builder.PrependInt8Slot(0, builtinCode, 0)\ndef OperatorCodeAddCustomCode(builder, customCode): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(customCode), 0)\ndef OperatorCodeAddVersion(builder, version): builder.PrependInt32Slot(2, version, 1)\ndef OperatorCodeEnd(builder): return 
builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/PackOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass PackOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsPackOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = PackOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def PackOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # PackOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # PackOptions\n    def ValuesCount(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # PackOptions\n    def Axis(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef PackOptionsStart(builder): builder.StartObject(2)\ndef PackOptionsAddValuesCount(builder, valuesCount): builder.PrependInt32Slot(0, valuesCount, 0)\ndef PackOptionsAddAxis(builder, axis): builder.PrependInt32Slot(1, axis, 0)\ndef PackOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/PadOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass PadOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsPadOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = PadOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def PadOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # PadOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef PadOptionsStart(builder): builder.StartObject(0)\ndef PadOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/PadV2Options.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass PadV2Options(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsPadV2Options(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = PadV2Options()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def PadV2OptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # PadV2Options\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef PadV2OptionsStart(builder): builder.StartObject(0)\ndef PadV2OptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Padding.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass Padding(object):\n    SAME = 0\n    VALID = 1\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Pool2DOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Pool2DOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsPool2DOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Pool2DOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def Pool2DOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Pool2DOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Pool2DOptions\n    def Padding(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # Pool2DOptions\n    def StrideW(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # Pool2DOptions\n    def StrideH(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # Pool2DOptions\n    def FilterWidth(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # Pool2DOptions\n    def FilterHeight(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n  
          return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # Pool2DOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef Pool2DOptionsStart(builder): builder.StartObject(6)\ndef Pool2DOptionsAddPadding(builder, padding): builder.PrependInt8Slot(0, padding, 0)\ndef Pool2DOptionsAddStrideW(builder, strideW): builder.PrependInt32Slot(1, strideW, 0)\ndef Pool2DOptionsAddStrideH(builder, strideH): builder.PrependInt32Slot(2, strideH, 0)\ndef Pool2DOptionsAddFilterWidth(builder, filterWidth): builder.PrependInt32Slot(3, filterWidth, 0)\ndef Pool2DOptionsAddFilterHeight(builder, filterHeight): builder.PrependInt32Slot(4, filterHeight, 0)\ndef Pool2DOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(5, fusedActivationFunction, 0)\ndef Pool2DOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/PowOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass PowOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsPowOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = PowOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def PowOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # PowOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef PowOptionsStart(builder): builder.StartObject(0)\ndef PowOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/QuantizationDetails.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass QuantizationDetails(object):\n    NONE = 0\n    CustomQuantization = 1\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/QuantizationParameters.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass QuantizationParameters(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsQuantizationParameters(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = QuantizationParameters()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def QuantizationParametersBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # QuantizationParameters\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # QuantizationParameters\n    def Min(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # QuantizationParameters\n    def MinAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Float32Flags, o)\n        return 0\n\n    # QuantizationParameters\n    def MinLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # QuantizationParameters\n    def MinIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\n    # QuantizationParameters\n    def Max(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if 
o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # QuantizationParameters\n    def MaxAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Float32Flags, o)\n        return 0\n\n    # QuantizationParameters\n    def MaxLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # QuantizationParameters\n    def MaxIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        return o == 0\n\n    # QuantizationParameters\n    def Scale(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # QuantizationParameters\n    def ScaleAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Float32Flags, o)\n        return 0\n\n    # QuantizationParameters\n    def ScaleLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # QuantizationParameters\n    def ScaleIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        return o == 0\n\n    # QuantizationParameters\n    def ZeroPoint(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if 
o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int64Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 8))\n        return 0\n\n    # QuantizationParameters\n    def ZeroPointAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int64Flags, o)\n        return 0\n\n    # QuantizationParameters\n    def ZeroPointLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # QuantizationParameters\n    def ZeroPointIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        return o == 0\n\n    # QuantizationParameters\n    def DetailsType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, o + self._tab.Pos)\n        return 0\n\n    # QuantizationParameters\n    def Details(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            from flatbuffers.table import Table\n            obj = Table(bytearray(), 0)\n            self._tab.Union(obj, o)\n            return obj\n        return None\n\n    # QuantizationParameters\n    def QuantizedDimension(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef QuantizationParametersStart(builder): builder.StartObject(7)\ndef QuantizationParametersAddMin(builder, min): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(min), 0)\ndef 
QuantizationParametersStartMinVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef QuantizationParametersAddMax(builder, max): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(max), 0)\ndef QuantizationParametersStartMaxVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef QuantizationParametersAddScale(builder, scale): builder.PrependUOffsetTRelativeSlot(2, flatbuffers.number_types.UOffsetTFlags.py_type(scale), 0)\ndef QuantizationParametersStartScaleVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef QuantizationParametersAddZeroPoint(builder, zeroPoint): builder.PrependUOffsetTRelativeSlot(3, flatbuffers.number_types.UOffsetTFlags.py_type(zeroPoint), 0)\ndef QuantizationParametersStartZeroPointVector(builder, numElems): return builder.StartVector(8, numElems, 8)\ndef QuantizationParametersAddDetailsType(builder, detailsType): builder.PrependUint8Slot(4, detailsType, 0)\ndef QuantizationParametersAddDetails(builder, details): builder.PrependUOffsetTRelativeSlot(5, flatbuffers.number_types.UOffsetTFlags.py_type(details), 0)\ndef QuantizationParametersAddQuantizedDimension(builder, quantizedDimension): builder.PrependInt32Slot(6, quantizedDimension, 0)\ndef QuantizationParametersEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/QuantizeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass QuantizeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsQuantizeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = QuantizeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def QuantizeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # QuantizeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef QuantizeOptionsStart(builder): builder.StartObject(0)\ndef QuantizeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/RNNOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass RNNOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsRNNOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = RNNOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def RNNOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # RNNOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # RNNOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # RNNOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef RNNOptionsStart(builder): builder.StartObject(2)\ndef RNNOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef RNNOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(1, asymmetricQuantizeInputs, 0)\ndef RNNOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/RangeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass RangeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsRangeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = RangeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def RangeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # RangeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef RangeOptionsStart(builder): builder.StartObject(0)\ndef RangeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/RankOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass RankOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsRankOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = RankOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def RankOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # RankOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef RankOptionsStart(builder): builder.StartObject(0)\ndef RankOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ReducerOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ReducerOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsReducerOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ReducerOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ReducerOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ReducerOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ReducerOptions\n    def KeepDims(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef ReducerOptionsStart(builder): builder.StartObject(1)\ndef ReducerOptionsAddKeepDims(builder, keepDims): builder.PrependBoolSlot(0, keepDims, 0)\ndef ReducerOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ReshapeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ReshapeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsReshapeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ReshapeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ReshapeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ReshapeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ReshapeOptions\n    def NewShape(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # ReshapeOptions\n    def NewShapeAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # ReshapeOptions\n    def NewShapeLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # ReshapeOptions\n    def NewShapeIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef ReshapeOptionsStart(builder): builder.StartObject(1)\ndef ReshapeOptionsAddNewShape(builder, newShape): builder.PrependUOffsetTRelativeSlot(0, 
flatbuffers.number_types.UOffsetTFlags.py_type(newShape), 0)\ndef ReshapeOptionsStartNewShapeVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef ReshapeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ResizeBilinearOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ResizeBilinearOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsResizeBilinearOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ResizeBilinearOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ResizeBilinearOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ResizeBilinearOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ResizeBilinearOptions\n    def AlignCorners(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # ResizeBilinearOptions\n    def HalfPixelCenters(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef ResizeBilinearOptionsStart(builder): builder.StartObject(4)\ndef ResizeBilinearOptionsAddAlignCorners(builder, alignCorners): builder.PrependBoolSlot(2, alignCorners, 0)\ndef ResizeBilinearOptionsAddHalfPixelCenters(builder, halfPixelCenters): builder.PrependBoolSlot(3, halfPixelCenters, 0)\ndef ResizeBilinearOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ResizeNearestNeighborOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ResizeNearestNeighborOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsResizeNearestNeighborOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ResizeNearestNeighborOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ResizeNearestNeighborOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ResizeNearestNeighborOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ResizeNearestNeighborOptions\n    def AlignCorners(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # ResizeNearestNeighborOptions\n    def HalfPixelCenters(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef ResizeNearestNeighborOptionsStart(builder): builder.StartObject(2)\ndef ResizeNearestNeighborOptionsAddAlignCorners(builder, alignCorners): builder.PrependBoolSlot(0, alignCorners, 0)\ndef ResizeNearestNeighborOptionsAddHalfPixelCenters(builder, halfPixelCenters): builder.PrependBoolSlot(1, halfPixelCenters, 0)\ndef ResizeNearestNeighborOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ReverseSequenceOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ReverseSequenceOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsReverseSequenceOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ReverseSequenceOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ReverseSequenceOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ReverseSequenceOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ReverseSequenceOptions\n    def SeqDim(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # ReverseSequenceOptions\n    def BatchDim(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef ReverseSequenceOptionsStart(builder): builder.StartObject(2)\ndef ReverseSequenceOptionsAddSeqDim(builder, seqDim): builder.PrependInt32Slot(0, seqDim, 0)\ndef ReverseSequenceOptionsAddBatchDim(builder, batchDim): builder.PrependInt32Slot(1, batchDim, 0)\ndef ReverseSequenceOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ReverseV2Options.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ReverseV2Options(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsReverseV2Options(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ReverseV2Options()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ReverseV2OptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ReverseV2Options\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef ReverseV2OptionsStart(builder): builder.StartObject(0)\ndef ReverseV2OptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SVDFOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SVDFOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSVDFOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SVDFOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SVDFOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SVDFOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SVDFOptions\n    def Rank(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # SVDFOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # SVDFOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef SVDFOptionsStart(builder): builder.StartObject(3)\ndef SVDFOptionsAddRank(builder, rank): builder.PrependInt32Slot(0, rank, 0)\ndef SVDFOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(1, fusedActivationFunction, 0)\ndef SVDFOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(2, 
asymmetricQuantizeInputs, 0)\ndef SVDFOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ScatterNdOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ScatterNdOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsScatterNdOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ScatterNdOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ScatterNdOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ScatterNdOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef ScatterNdOptionsStart(builder): builder.StartObject(0)\ndef ScatterNdOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SegmentSumOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SegmentSumOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSegmentSumOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SegmentSumOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SegmentSumOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SegmentSumOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SegmentSumOptionsStart(builder): builder.StartObject(0)\ndef SegmentSumOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SelectOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SelectOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSelectOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SelectOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SelectOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SelectOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SelectOptionsStart(builder): builder.StartObject(0)\ndef SelectOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SelectV2Options.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SelectV2Options(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSelectV2Options(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SelectV2Options()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SelectV2OptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SelectV2Options\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SelectV2OptionsStart(builder): builder.StartObject(0)\ndef SelectV2OptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SequenceRNNOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SequenceRNNOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSequenceRNNOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SequenceRNNOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SequenceRNNOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SequenceRNNOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SequenceRNNOptions\n    def TimeMajor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # SequenceRNNOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # SequenceRNNOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef SequenceRNNOptionsStart(builder): builder.StartObject(3)\ndef SequenceRNNOptionsAddTimeMajor(builder, timeMajor): builder.PrependBoolSlot(0, timeMajor, 0)\ndef SequenceRNNOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(1, fusedActivationFunction, 0)\ndef 
SequenceRNNOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(2, asymmetricQuantizeInputs, 0)\ndef SequenceRNNOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ShapeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ShapeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsShapeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ShapeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ShapeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ShapeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # ShapeOptions\n    def OutType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef ShapeOptionsStart(builder): builder.StartObject(1)\ndef ShapeOptionsAddOutType(builder, outType): builder.PrependInt8Slot(0, outType, 0)\ndef ShapeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SkipGramOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SkipGramOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSkipGramOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SkipGramOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SkipGramOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SkipGramOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SkipGramOptions\n    def NgramSize(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # SkipGramOptions\n    def MaxSkipSize(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # SkipGramOptions\n    def IncludeAllNgrams(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef SkipGramOptionsStart(builder): builder.StartObject(3)\ndef SkipGramOptionsAddNgramSize(builder, ngramSize): builder.PrependInt32Slot(0, ngramSize, 0)\ndef SkipGramOptionsAddMaxSkipSize(builder, maxSkipSize): builder.PrependInt32Slot(1, maxSkipSize, 0)\ndef SkipGramOptionsAddIncludeAllNgrams(builder, includeAllNgrams): builder.PrependBoolSlot(2, includeAllNgrams, 
0)\ndef SkipGramOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SliceOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SliceOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSliceOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SliceOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SliceOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SliceOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SliceOptionsStart(builder): builder.StartObject(0)\ndef SliceOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SoftmaxOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SoftmaxOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSoftmaxOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SoftmaxOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SoftmaxOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SoftmaxOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SoftmaxOptions\n    def Beta(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\ndef SoftmaxOptionsStart(builder): builder.StartObject(1)\ndef SoftmaxOptionsAddBeta(builder, beta): builder.PrependFloat32Slot(0, beta, 0.0)\ndef SoftmaxOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SpaceToBatchNDOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SpaceToBatchNDOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSpaceToBatchNDOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SpaceToBatchNDOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SpaceToBatchNDOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SpaceToBatchNDOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SpaceToBatchNDOptionsStart(builder): builder.StartObject(0)\ndef SpaceToBatchNDOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SpaceToDepthOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SpaceToDepthOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSpaceToDepthOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SpaceToDepthOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SpaceToDepthOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SpaceToDepthOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SpaceToDepthOptions\n    def BlockSize(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef SpaceToDepthOptionsStart(builder): builder.StartObject(1)\ndef SpaceToDepthOptionsAddBlockSize(builder, blockSize): builder.PrependInt32Slot(0, blockSize, 0)\ndef SpaceToDepthOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SparseIndexVector.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass SparseIndexVector(object):\n    NONE = 0\n    Int32Vector = 1\n    Uint16Vector = 2\n    Uint8Vector = 3\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SparseToDenseOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SparseToDenseOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSparseToDenseOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SparseToDenseOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SparseToDenseOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SparseToDenseOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SparseToDenseOptions\n    def ValidateIndices(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef SparseToDenseOptionsStart(builder): builder.StartObject(1)\ndef SparseToDenseOptionsAddValidateIndices(builder, validateIndices): builder.PrependBoolSlot(0, validateIndices, 0)\ndef SparseToDenseOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SparsityParameters.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SparsityParameters(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSparsityParameters(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SparsityParameters()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SparsityParametersBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SparsityParameters\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SparsityParameters\n    def TraversalOrder(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # SparsityParameters\n    def TraversalOrderAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # SparsityParameters\n    def TraversalOrderLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SparsityParameters\n    def TraversalOrderIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\n    # SparsityParameters\n    def BlockMap(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n      
  if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # SparsityParameters\n    def BlockMapAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # SparsityParameters\n    def BlockMapLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SparsityParameters\n    def BlockMapIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        return o == 0\n\n    # SparsityParameters\n    def DimMetadata(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .DimensionMetadata import DimensionMetadata\n            obj = DimensionMetadata()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # SparsityParameters\n    def DimMetadataLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SparsityParameters\n    def DimMetadataIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        return o == 0\n\ndef SparsityParametersStart(builder): builder.StartObject(3)\ndef SparsityParametersAddTraversalOrder(builder, traversalOrder): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(traversalOrder), 0)\ndef 
SparsityParametersStartTraversalOrderVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SparsityParametersAddBlockMap(builder, blockMap): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(blockMap), 0)\ndef SparsityParametersStartBlockMapVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SparsityParametersAddDimMetadata(builder, dimMetadata): builder.PrependUOffsetTRelativeSlot(2, flatbuffers.number_types.UOffsetTFlags.py_type(dimMetadata), 0)\ndef SparsityParametersStartDimMetadataVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SparsityParametersEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SplitOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SplitOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSplitOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SplitOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SplitOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SplitOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SplitOptions\n    def NumSplits(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef SplitOptionsStart(builder): builder.StartObject(1)\ndef SplitOptionsAddNumSplits(builder, numSplits): builder.PrependInt32Slot(0, numSplits, 0)\ndef SplitOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SplitVOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SplitVOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSplitVOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SplitVOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SplitVOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SplitVOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SplitVOptions\n    def NumSplits(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef SplitVOptionsStart(builder): builder.StartObject(1)\ndef SplitVOptionsAddNumSplits(builder, numSplits): builder.PrependInt32Slot(0, numSplits, 0)\ndef SplitVOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SquareOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SquareOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSquareOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SquareOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SquareOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SquareOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SquareOptionsStart(builder): builder.StartObject(0)\ndef SquareOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SquaredDifferenceOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SquaredDifferenceOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSquaredDifferenceOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SquaredDifferenceOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SquaredDifferenceOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SquaredDifferenceOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef SquaredDifferenceOptionsStart(builder): builder.StartObject(0)\ndef SquaredDifferenceOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SqueezeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SqueezeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSqueezeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SqueezeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SqueezeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SqueezeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SqueezeOptions\n    def SqueezeDims(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # SqueezeOptions\n    def SqueezeDimsAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # SqueezeOptions\n    def SqueezeDimsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SqueezeOptions\n    def SqueezeDimsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef SqueezeOptionsStart(builder): builder.StartObject(1)\ndef SqueezeOptionsAddSqueezeDims(builder, squeezeDims): builder.PrependUOffsetTRelativeSlot(0, 
flatbuffers.number_types.UOffsetTFlags.py_type(squeezeDims), 0)\ndef SqueezeOptionsStartSqueezeDimsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SqueezeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/StridedSliceOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass StridedSliceOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsStridedSliceOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = StridedSliceOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def StridedSliceOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # StridedSliceOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # StridedSliceOptions\n    def BeginMask(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # StridedSliceOptions\n    def EndMask(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # StridedSliceOptions\n    def EllipsisMask(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # StridedSliceOptions\n    def NewAxisMask(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # StridedSliceOptions\n    def ShrinkAxisMask(self):\n        o = 
flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef StridedSliceOptionsStart(builder): builder.StartObject(5)\ndef StridedSliceOptionsAddBeginMask(builder, beginMask): builder.PrependInt32Slot(0, beginMask, 0)\ndef StridedSliceOptionsAddEndMask(builder, endMask): builder.PrependInt32Slot(1, endMask, 0)\ndef StridedSliceOptionsAddEllipsisMask(builder, ellipsisMask): builder.PrependInt32Slot(2, ellipsisMask, 0)\ndef StridedSliceOptionsAddNewAxisMask(builder, newAxisMask): builder.PrependInt32Slot(3, newAxisMask, 0)\ndef StridedSliceOptionsAddShrinkAxisMask(builder, shrinkAxisMask): builder.PrependInt32Slot(4, shrinkAxisMask, 0)\ndef StridedSliceOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SubGraph.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SubGraph(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSubGraph(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SubGraph()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SubGraphBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SubGraph\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SubGraph\n    def Tensors(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .Tensor import Tensor\n            obj = Tensor()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # SubGraph\n    def TensorsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SubGraph\n    def TensorsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\n    # SubGraph\n    def Inputs(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # SubGraph\n    def InputsAsNumpy(self):\n        o = 
flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # SubGraph\n    def InputsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SubGraph\n    def InputsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        return o == 0\n\n    # SubGraph\n    def Outputs(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # SubGraph\n    def OutputsAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # SubGraph\n    def OutputsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SubGraph\n    def OutputsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        return o == 0\n\n    # SubGraph\n    def Operators(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            x = self._tab.Vector(o)\n            x += flatbuffers.number_types.UOffsetTFlags.py_type(j) * 4\n            x = self._tab.Indirect(x)\n            from .Operator import Operator\n            obj = Operator()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # SubGraph\n    def 
OperatorsLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # SubGraph\n    def OperatorsIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        return o == 0\n\n    # SubGraph\n    def Name(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return self._tab.String(o + self._tab.Pos)\n        return None\n\ndef SubGraphStart(builder): builder.StartObject(5)\ndef SubGraphAddTensors(builder, tensors): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(tensors), 0)\ndef SubGraphStartTensorsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SubGraphAddInputs(builder, inputs): builder.PrependUOffsetTRelativeSlot(1, flatbuffers.number_types.UOffsetTFlags.py_type(inputs), 0)\ndef SubGraphStartInputsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SubGraphAddOutputs(builder, outputs): builder.PrependUOffsetTRelativeSlot(2, flatbuffers.number_types.UOffsetTFlags.py_type(outputs), 0)\ndef SubGraphStartOutputsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SubGraphAddOperators(builder, operators): builder.PrependUOffsetTRelativeSlot(3, flatbuffers.number_types.UOffsetTFlags.py_type(operators), 0)\ndef SubGraphStartOperatorsVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef SubGraphAddName(builder, name): builder.PrependUOffsetTRelativeSlot(4, flatbuffers.number_types.UOffsetTFlags.py_type(name), 0)\ndef SubGraphEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/SubOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass SubOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsSubOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = SubOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def SubOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # SubOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # SubOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\ndef SubOptionsStart(builder): builder.StartObject(1)\ndef SubOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef SubOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Tensor.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Tensor(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsTensor(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Tensor()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def TensorBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Tensor\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Tensor\n    def Shape(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Tensor\n    def ShapeAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Tensor\n    def ShapeLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Tensor\n    def ShapeIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\n    # Tensor\n    def Type(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # Tensor\n    def Buffer(self):\n   
     o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Uint32Flags, o + self._tab.Pos)\n        return 0\n\n    # Tensor\n    def Name(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return self._tab.String(o + self._tab.Pos)\n        return None\n\n    # Tensor\n    def Quantization(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            x = self._tab.Indirect(o + self._tab.Pos)\n            from .QuantizationParameters import QuantizationParameters\n            obj = QuantizationParameters()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # Tensor\n    def IsVariable(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(14))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # Tensor\n    def Sparsity(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(16))\n        if o != 0:\n            x = self._tab.Indirect(o + self._tab.Pos)\n            from .SparsityParameters import SparsityParameters\n            obj = SparsityParameters()\n            obj.Init(self._tab.Bytes, x)\n            return obj\n        return None\n\n    # Tensor\n    def ShapeSignature(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 4))\n        return 0\n\n    # Tensor\n    def ShapeSignatureAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        if o != 0:\n            return 
self._tab.GetVectorAsNumpy(flatbuffers.number_types.Int32Flags, o)\n        return 0\n\n    # Tensor\n    def ShapeSignatureLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Tensor\n    def ShapeSignatureIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(18))\n        return o == 0\n\ndef TensorStart(builder): builder.StartObject(8)\ndef TensorAddShape(builder, shape): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(shape), 0)\ndef TensorStartShapeVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef TensorAddType(builder, type): builder.PrependInt8Slot(1, type, 0)\ndef TensorAddBuffer(builder, buffer): builder.PrependUint32Slot(2, buffer, 0)\ndef TensorAddName(builder, name): builder.PrependUOffsetTRelativeSlot(3, flatbuffers.number_types.UOffsetTFlags.py_type(name), 0)\ndef TensorAddQuantization(builder, quantization): builder.PrependUOffsetTRelativeSlot(4, flatbuffers.number_types.UOffsetTFlags.py_type(quantization), 0)\ndef TensorAddIsVariable(builder, isVariable): builder.PrependBoolSlot(5, isVariable, 0)\ndef TensorAddSparsity(builder, sparsity): builder.PrependUOffsetTRelativeSlot(6, flatbuffers.number_types.UOffsetTFlags.py_type(sparsity), 0)\ndef TensorAddShapeSignature(builder, shapeSignature): builder.PrependUOffsetTRelativeSlot(7, flatbuffers.number_types.UOffsetTFlags.py_type(shapeSignature), 0)\ndef TensorStartShapeSignatureVector(builder, numElems): return builder.StartVector(4, numElems, 4)\ndef TensorEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/TensorType.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nclass TensorType(object):\n    FLOAT32 = 0\n    FLOAT16 = 1\n    INT32 = 2\n    UINT8 = 3\n    INT64 = 4\n    STRING = 5\n    BOOL = 6\n    INT16 = 7\n    COMPLEX64 = 8\n    INT8 = 9\n    FLOAT64 = 10\n\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/TileOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass TileOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsTileOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = TileOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def TileOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # TileOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef TileOptionsStart(builder): builder.StartObject(0)\ndef TileOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/TopKV2Options.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass TopKV2Options(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsTopKV2Options(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = TopKV2Options()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def TopKV2OptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # TopKV2Options\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef TopKV2OptionsStart(builder): builder.StartObject(0)\ndef TopKV2OptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/TransposeConvOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass TransposeConvOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsTransposeConvOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = TransposeConvOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def TransposeConvOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # TransposeConvOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # TransposeConvOptions\n    def Padding(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # TransposeConvOptions\n    def StrideW(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # TransposeConvOptions\n    def StrideH(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef TransposeConvOptionsStart(builder): builder.StartObject(3)\ndef TransposeConvOptionsAddPadding(builder, padding): builder.PrependInt8Slot(0, padding, 0)\ndef TransposeConvOptionsAddStrideW(builder, strideW): builder.PrependInt32Slot(1, strideW, 0)\ndef TransposeConvOptionsAddStrideH(builder, strideH): builder.PrependInt32Slot(2, strideH, 0)\ndef 
TransposeConvOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/TransposeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass TransposeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsTransposeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = TransposeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def TransposeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # TransposeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef TransposeOptionsStart(builder): builder.StartObject(0)\ndef TransposeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Uint16Vector.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Uint16Vector(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsUint16Vector(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Uint16Vector()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def Uint16VectorBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Uint16Vector\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Uint16Vector\n    def Values(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Uint16Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 2))\n        return 0\n\n    # Uint16Vector\n    def ValuesAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Uint16Flags, o)\n        return 0\n\n    # Uint16Vector\n    def ValuesLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Uint16Vector\n    def ValuesIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef Uint16VectorStart(builder): builder.StartObject(1)\ndef Uint16VectorAddValues(builder, values): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(values), 0)\ndef 
Uint16VectorStartValuesVector(builder, numElems): return builder.StartVector(2, numElems, 2)\ndef Uint16VectorEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/Uint8Vector.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass Uint8Vector(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsUint8Vector(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = Uint8Vector()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def Uint8VectorBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # Uint8Vector\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # Uint8Vector\n    def Values(self, j):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            a = self._tab.Vector(o)\n            return self._tab.Get(flatbuffers.number_types.Uint8Flags, a + flatbuffers.number_types.UOffsetTFlags.py_type(j * 1))\n        return 0\n\n    # Uint8Vector\n    def ValuesAsNumpy(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.GetVectorAsNumpy(flatbuffers.number_types.Uint8Flags, o)\n        return 0\n\n    # Uint8Vector\n    def ValuesLength(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.VectorLen(o)\n        return 0\n\n    # Uint8Vector\n    def ValuesIsNone(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        return o == 0\n\ndef Uint8VectorStart(builder): builder.StartObject(1)\ndef Uint8VectorAddValues(builder, values): builder.PrependUOffsetTRelativeSlot(0, flatbuffers.number_types.UOffsetTFlags.py_type(values), 0)\ndef 
Uint8VectorStartValuesVector(builder, numElems): return builder.StartVector(1, numElems, 1)\ndef Uint8VectorEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/UnidirectionalSequenceLSTMOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass UnidirectionalSequenceLSTMOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsUnidirectionalSequenceLSTMOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = UnidirectionalSequenceLSTMOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def UnidirectionalSequenceLSTMOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # UnidirectionalSequenceLSTMOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # UnidirectionalSequenceLSTMOptions\n    def FusedActivationFunction(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 0\n\n    # UnidirectionalSequenceLSTMOptions\n    def CellClip(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # UnidirectionalSequenceLSTMOptions\n    def ProjClip(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(8))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Float32Flags, o + self._tab.Pos)\n        return 0.0\n\n    # UnidirectionalSequenceLSTMOptions\n    def TimeMajor(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(10))\n        if o != 0:\n            return 
bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\n    # UnidirectionalSequenceLSTMOptions\n    def AsymmetricQuantizeInputs(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(12))\n        if o != 0:\n            return bool(self._tab.Get(flatbuffers.number_types.BoolFlags, o + self._tab.Pos))\n        return False\n\ndef UnidirectionalSequenceLSTMOptionsStart(builder): builder.StartObject(5)\ndef UnidirectionalSequenceLSTMOptionsAddFusedActivationFunction(builder, fusedActivationFunction): builder.PrependInt8Slot(0, fusedActivationFunction, 0)\ndef UnidirectionalSequenceLSTMOptionsAddCellClip(builder, cellClip): builder.PrependFloat32Slot(1, cellClip, 0.0)\ndef UnidirectionalSequenceLSTMOptionsAddProjClip(builder, projClip): builder.PrependFloat32Slot(2, projClip, 0.0)\ndef UnidirectionalSequenceLSTMOptionsAddTimeMajor(builder, timeMajor): builder.PrependBoolSlot(3, timeMajor, 0)\ndef UnidirectionalSequenceLSTMOptionsAddAsymmetricQuantizeInputs(builder, asymmetricQuantizeInputs): builder.PrependBoolSlot(4, asymmetricQuantizeInputs, 0)\ndef UnidirectionalSequenceLSTMOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/UniqueOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass UniqueOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsUniqueOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = UniqueOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def UniqueOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # UniqueOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # UniqueOptions\n    def IdxOutType(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int8Flags, o + self._tab.Pos)\n        return 2\n\ndef UniqueOptionsStart(builder): builder.StartObject(1)\ndef UniqueOptionsAddIdxOutType(builder, idxOutType): builder.PrependInt8Slot(0, idxOutType, 2)\ndef UniqueOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/UnpackOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass UnpackOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsUnpackOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = UnpackOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def UnpackOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # UnpackOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # UnpackOptions\n    def Num(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # UnpackOptions\n    def Axis(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef UnpackOptionsStart(builder): builder.StartObject(2)\ndef UnpackOptionsAddNum(builder, num): builder.PrependInt32Slot(0, num, 0)\ndef UnpackOptionsAddAxis(builder, axis): builder.PrependInt32Slot(1, axis, 0)\ndef UnpackOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/WhereOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass WhereOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsWhereOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = WhereOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def WhereOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # WhereOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef WhereOptionsStart(builder): builder.StartObject(0)\ndef WhereOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/WhileOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass WhileOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsWhileOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = WhileOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def WhileOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # WhileOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\n    # WhileOptions\n    def CondSubgraphIndex(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(4))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\n    # WhileOptions\n    def BodySubgraphIndex(self):\n        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))\n        if o != 0:\n            return self._tab.Get(flatbuffers.number_types.Int32Flags, o + self._tab.Pos)\n        return 0\n\ndef WhileOptionsStart(builder): builder.StartObject(2)\ndef WhileOptionsAddCondSubgraphIndex(builder, condSubgraphIndex): builder.PrependInt32Slot(0, condSubgraphIndex, 0)\ndef WhileOptionsAddBodySubgraphIndex(builder, bodySubgraphIndex): builder.PrependInt32Slot(1, bodySubgraphIndex, 0)\ndef WhileOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/ZerosLikeOptions.py",
    "content": "# automatically generated by the FlatBuffers compiler, do not modify\n\n# namespace: tflite\n\nimport flatbuffers\nfrom flatbuffers.compat import import_numpy\nnp = import_numpy()\n\nclass ZerosLikeOptions(object):\n    __slots__ = ['_tab']\n\n    @classmethod\n    def GetRootAsZerosLikeOptions(cls, buf, offset):\n        n = flatbuffers.encode.Get(flatbuffers.packer.uoffset, buf, offset)\n        x = ZerosLikeOptions()\n        x.Init(buf, n + offset)\n        return x\n\n    @classmethod\n    def ZerosLikeOptionsBufferHasIdentifier(cls, buf, offset, size_prefixed=False):\n        return flatbuffers.util.BufferHasIdentifier(buf, offset, b\"\\x54\\x46\\x4C\\x33\", size_prefixed=size_prefixed)\n\n    # ZerosLikeOptions\n    def Init(self, buf, pos):\n        self._tab = flatbuffers.table.Table(buf, pos)\n\ndef ZerosLikeOptionsStart(builder): builder.StartObject(0)\ndef ZerosLikeOptionsEnd(builder): return builder.EndObject()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/__init__.py",
    "content": "# Copyright (c) 2017 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .AbsOptions import *\nfrom .ActivationFunctionType import *\nfrom .AddNOptions import *\nfrom .AddOptions import *\nfrom .ArgMaxOptions import *\nfrom .ArgMinOptions import *\nfrom .BatchToSpaceNDOptions import *\nfrom .BidirectionalSequenceLSTMOptions import *\nfrom .BidirectionalSequenceRNNOptions import *\nfrom .Buffer import *\nfrom .BuiltinOperator import *\nfrom .BuiltinOptions import *\nfrom .CallOptions import *\nfrom .CastOptions import *\nfrom .CombinerType import *\nfrom .ConcatEmbeddingsOptions import *\nfrom .ConcatenationOptions import *\nfrom .Conv2DOptions import *\nfrom .CosOptions import *\nfrom .CustomOptionsFormat import *\nfrom .CustomQuantization import *\nfrom .DepthToSpaceOptions import *\nfrom .DepthwiseConv2DOptions import *\nfrom .DequantizeOptions import *\nfrom .DivOptions import *\nfrom .EmbeddingLookupSparseOptions import *\nfrom .EqualOptions import *\nfrom .ExpandDimsOptions import *\nfrom .ExpOptions import *\nfrom .FakeQuantOptions import *\nfrom .FillOptions import *\nfrom .FloorDivOptions import *\nfrom .FloorModOptions import *\nfrom .FullyConnectedOptions import *\nfrom .FullyConnectedOptionsWeightsFormat import *\nfrom .GatherNdOptions import *\nfrom .GatherOptions import *\nfrom .GreaterEqualOptions import *\nfrom .GreaterOptions import *\nfrom .HardSwishOptions import *\nfrom .IfOptions import *\nfrom 
.L2NormOptions import *\nfrom .LeakyReluOptions import *\nfrom .LessEqualOptions import *\nfrom .LessOptions import *\nfrom .LocalResponseNormalizationOptions import *\nfrom .LogicalAndOptions import *\nfrom .LogicalNotOptions import *\nfrom .LogicalOrOptions import *\nfrom .LogSoftmaxOptions import *\nfrom .LSHProjectionOptions import *\nfrom .LSHProjectionType import *\nfrom .LSTMKernelType import *\nfrom .LSTMOptions import *\nfrom .MatrixDiagOptions import *\nfrom .MatrixSetDiagOptions import *\nfrom .MaximumMinimumOptions import *\nfrom .Metadata import *\nfrom .MirrorPadMode import *\nfrom .MirrorPadOptions import *\nfrom .Model import *\nfrom .MulOptions import *\nfrom .NegOptions import *\nfrom .NonMaxSuppressionV4Options import *\nfrom .NonMaxSuppressionV5Options import *\nfrom .NotEqualOptions import *\nfrom .OneHotOptions import *\nfrom .OperatorCode import *\nfrom .Operator import *\nfrom .PackOptions import *\nfrom .Padding import *\nfrom .PadOptions import *\nfrom .PadV2Options import *\nfrom .Pool2DOptions import *\nfrom .PowOptions import *\nfrom .QuantizationDetails import *\nfrom .QuantizationParameters import *\nfrom .QuantizeOptions import *\nfrom .RangeOptions import *\nfrom .RankOptions import *\nfrom .ReducerOptions import *\nfrom .ReshapeOptions import *\nfrom .ResizeBilinearOptions import *\nfrom .ResizeNearestNeighborOptions import *\nfrom .ReverseSequenceOptions import *\nfrom .ReverseV2Options import *\nfrom .RNNOptions import *\nfrom .ScatterNdOptions import *\nfrom .SelectOptions import *\nfrom .SequenceRNNOptions import *\nfrom .ShapeOptions import *\nfrom .SkipGramOptions import *\nfrom .SliceOptions import *\nfrom .SoftmaxOptions import *\nfrom .SpaceToBatchNDOptions import *\nfrom .SpaceToDepthOptions import *\nfrom .SparseToDenseOptions import *\nfrom .SplitOptions import *\nfrom .SplitVOptions import *\nfrom .SquaredDifferenceOptions import *\nfrom .SquareOptions import *\nfrom .SqueezeOptions import *\nfrom .StridedSliceOptions 
import *\nfrom .SubGraph import *\nfrom .SubOptions import *\nfrom .SVDFOptions import *\nfrom .Tensor import *\nfrom .TensorType import *\nfrom .TileOptions import *\nfrom .TopKV2Options import *\nfrom .TransposeConvOptions import *\nfrom .TransposeOptions import *\nfrom .UnidirectionalSequenceLSTMOptions import *\nfrom .UniqueOptions import *\nfrom .UnpackOptions import *\nfrom .WhereOptions import *\nfrom .WhileOptions import *\nfrom .ZerosLikeOptions import *\nfrom .BatchMatMulOptions import *\nfrom .DensifyOptions import *\nfrom .SegmentSumOptions import *\nfrom .SelectV2Options import *\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/flatbuffers/schema.fbs",
    "content": "// Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Revision History\n// Version 0: Initial version.\n// Version 1: Add subgraphs to schema.\n// Version 2: Rename operators to conform to NN API.\n// Version 3: Move buffer data from Model.Subgraph.Tensors to Model.Buffers.\n\nnamespace tflite;\n\n// This corresponds to the version.\nfile_identifier \"TFL3\";\n// File extension of any written files.\nfile_extension \"tflite\";\n\n// IMPORTANT: All new members of tables, enums and unions must be added at the\n// end to ensure backwards compatibility.\n\n// The type of data stored in a tensor.\nenum TensorType : byte {\n  FLOAT32 = 0,\n  FLOAT16 = 1,\n  INT32 = 2,\n  UINT8 = 3,\n  INT64 = 4,\n  STRING = 5,\n  BOOL = 6,\n  INT16 = 7,\n  COMPLEX64 = 8,\n  INT8 = 9,\n  FLOAT64 = 10,\n}\n\n// Custom quantization parameters for experimenting with new quantization\n// techniques.\ntable CustomQuantization {\n  custom:[ubyte] (force_align: 16);\n}\n\n// Represents a specific quantization technique's parameters.\nunion QuantizationDetails {\n  CustomQuantization,\n}\n\n// Parameters for converting a quantized tensor back to float.\ntable QuantizationParameters {\n  // These four parameters are the asymmetric linear quantization parameters.\n  // Given a quantized value q, the corresponding float value f should be:\n  //   f = scale * (q - zero_point)\n  // For other quantization 
types, the QuantizationDetails below is used.\n  min:[float];  // For importing back into tensorflow.\n  max:[float];  // For importing back into tensorflow.\n  scale:[float];  // For dequantizing the tensor's values.\n  zero_point:[long];\n\n  // If this is not none, the other quantization parameters (i.e. min, max,\n  // scale, zero_point fields above) are ignored and the value of the\n  // QuantizationDetails union should be used.\n  details:QuantizationDetails;\n\n  // Specifies the dimension of the Tensor's shape that the scales and\n  // zero_points correspond to. For example, a tensor t, with dims=[4, 3, 2, 1]\n  // with quantization params:\n  //   scale=[1.0, 2.0, 3.0], zero_point=[1, 2, 3], quantization_dimension=1\n  // will be quantized across the second dimension of t.\n  //   t[:, 0, :, :] will have scale[0]=1.0, zero_point[0]=1\n  //   t[:, 1, :, :] will have scale[1]=2.0, zero_point[0]=2\n  //   t[:, 2, :, :] will have scale[2]=3.0, zero_point[0]=3\n  quantized_dimension:int;\n}\n\n// Sparse tensors.\n// We use a modification of the TACO format.\n// Reference: http://tensor-compiler.org/kjolstad-oopsla17-tensor-compiler.pdf\n//\n// To encode a conceptual n-dimensional dense tensor with dims (d0, ..., dn-1),\n// potentially with a k-dimensional block (0 <= k <= n) with dims\n// (dn, ..., dn+k-1), the format needs to specify:\n//   1. In what order to traverse these dimensions. For example, to store a 2-D\n//      matrix in row major order, the traversal order would be (d0, d1),\n//      whereas to store it in column major order, the traversal order would be\n//      (d1, d0). If the 2-D matrix has a 2-D inner block, the traversal order\n//      could be (d0, d1, d2, d3).\n//   2. How each block dimension in (dn, ..., dn+k-1) maps to the original\n//      tensor dimension in (d0, ..., dn-1).\n//   3. In the traversal order defined above, the format (dense vs. sparse) and\n//      index metadata for each dimension. 
For a dense dimension, this is just\n//      the size of that dimension. For a sparse dimension, it's the same as\n//      the compressed index defined in the Compressed Sparse Row (CSR) format.\n//      (http://scipy-lectures.org/advanced/scipy_sparse/csr_matrix.html)\n\n// The storage type for a dimension. Currently we support:\n//   1. DENSE: each coordinate in this dimension is stored implicitly.\n//   2. SPARSE_CSR: only the coordinates with non-zero elements are stored. The\n//      compression technique is the same what CSR uses.\n// More types like a sparse dimension with a different compression technique\n// could be added to the list in the future.\nenum DimensionType : byte {\n  DENSE = 0,\n  SPARSE_CSR = 1,\n}\n\ntable Int32Vector {\n  values:[int];\n}\n\ntable Uint16Vector {\n  values:[ushort] (force_align: 4);\n}\n\ntable Uint8Vector {\n  values:[ubyte] (force_align: 4);\n}\n\n// Variable-typed buffer to store the index metadata for a sparse dimension.\n// The widest type is Int32 instead of UInt32 because tensor's shape is a int32\n// vector. We don't want the per-dimensional index to overflow that range.\nunion SparseIndexVector {\n  Int32Vector,\n  Uint16Vector,\n  Uint8Vector\n}\n\ntable DimensionMetadata {\n  // Whether a dimension is dense or sparse.\n  format:DimensionType;\n  // Index metadata used for a dimension.\n  //   - If format is DimensionType.DENSE then we use the dense_size field to\n  //     store the size of that dimension. Each index in that dimension is\n  //     stored implicitly.\n  //   - If format is DimensionType.SPARSE_CSR then we use array_segments and\n  //     array_indices to encode that dimension. array_segments represents how\n  //     to segment the indices array, each segment corresponds to one element\n  //     in the previous dimension. 
array_indices represents the index of the\n  //     non-zero elements within this dimension (as those in the CSR matrix\n  //     format, where the first array is row pointers and the second array is\n  //     column indices).\n  dense_size:int;\n  array_segments:SparseIndexVector;\n  array_indices:SparseIndexVector;\n}\n\n// Parameters to encode a sparse TfLite tensor.\ntable SparsityParameters {\n  // The traversal order of the dimensions defined in the `shape` field of the\n  // conceptual dense tensor. For an n-dimensional tensor with dims (d0, d1,\n  // ..., dn-1),\n  //   - if not block sparse, the traversal_order is just a permutation of (d0,\n  //     ..., dn-1). For example, a 2-D matrix stored in row-major order would\n  //     have traversal_order = (d0, d1).\n  //   - if block sparse with a k-dimensional block (0 <= k <= n), the\n  //     traversal_order has n + k elements. The first n elements are still a\n  //     permutation of (d0, ..., dn-1). The last k elements are a permutation\n  //     of (dn, ..., dn+k-1), defining how to traverse a block internally. For\n  //     example, a 2-D matrix with 2-D blocks, both stored in row-major order\n  //     would have traversal_order = (d0, d1, d2, d3).\n  traversal_order:[int];\n  // For an n-dimensional tensor with a k-dimensional block (0 <= k <= n),\n  // stores how a block dimension in (dn, ..., dn+k-1) maps to the original\n  // tensor dimension in (d0, ..., dn).\n  // It's stored in the order of (dn, ..., dn+k-1).\n  // If not block-sparse, this field is NULL.\n  block_map:[int];\n  // In the traversal order defined above, the metadata needed for\n  // each dimension to locate the non-zero values in the original dense tensor.\n  // The size of the dim_metadata array = the size of the traversal_order array\n  // = n + k.\n  dim_metadata:[DimensionMetadata];\n}\n\ntable Tensor {\n  // The tensor shape. 
The meaning of each entry is operator-specific but\n  // builtin ops use: [batch size, height, width, number of channels] (That's\n  // Tensorflow's NHWC).\n  shape:[int];\n  type:TensorType;\n  // An index that refers to the buffers table at the root of the model. Or,\n  // if there is no data buffer associated (i.e. intermediate results), then\n  // this is 0 (which refers to an always existent empty buffer).\n  //\n  // The data_buffer itself is an opaque container, with the assumption that the\n  // target device is little-endian. In addition, all builtin operators assume\n  // the memory is ordered such that if `shape` is [4, 3, 2], then index\n  // [i, j, k] maps to data_buffer[i*3*2 + j*2 + k].\n  buffer:uint;\n  name:string;  // For debugging and importing back into tensorflow.\n  quantization:QuantizationParameters;  // Optional.\n\n  is_variable:bool = false;\n\n  // Parameters to encode a sparse tensor. See the example in\n  // tensorflow/lite/testdata/sparse_tensor.json.\n  sparsity:SparsityParameters;  // Optional.\n\n  // Encodes `shape` with unknown dimensions. Unknown dimensions are\n  // represented with -1.\n  shape_signature:[int]; // Optional.\n}\n\n// A list of builtin operators. Builtin operators are slightly faster than custom\n// ones, but not by much. 
Moreover, while custom operators accept an opaque\n// object containing configuration parameters, builtins have a predetermined\n// set of acceptable options.\n\nenum BuiltinOperator : byte {\n  ADD = 0,\n  AVERAGE_POOL_2D = 1,\n  CONCATENATION = 2,\n  CONV_2D = 3,\n  DEPTHWISE_CONV_2D = 4,\n  DEPTH_TO_SPACE = 5,\n  DEQUANTIZE = 6,\n  EMBEDDING_LOOKUP = 7,\n  FLOOR = 8,\n  FULLY_CONNECTED = 9,\n  HASHTABLE_LOOKUP = 10,\n  L2_NORMALIZATION = 11,\n  L2_POOL_2D = 12,\n  LOCAL_RESPONSE_NORMALIZATION = 13,\n  LOGISTIC = 14,\n  LSH_PROJECTION = 15,\n  LSTM = 16,\n  MAX_POOL_2D = 17,\n  MUL = 18,\n  RELU = 19,\n  // NOTE(aselle): RELU_N1_TO_1 used to be called RELU1, but it was renamed\n  // since different model developers use RELU1 in different ways. Never\n  // create another op called RELU1.\n  RELU_N1_TO_1 = 20,\n  RELU6 = 21,\n  RESHAPE = 22,\n  RESIZE_BILINEAR = 23,\n  RNN = 24,\n  SOFTMAX = 25,\n  SPACE_TO_DEPTH = 26,\n  SVDF = 27,\n  TANH = 28,\n  // TODO(aselle): Consider rename to CONCATENATE_EMBEDDINGS\n  CONCAT_EMBEDDINGS = 29,\n  SKIP_GRAM = 30,\n  CALL = 31,\n  CUSTOM = 32,\n  EMBEDDING_LOOKUP_SPARSE = 33,\n  PAD = 34,\n  UNIDIRECTIONAL_SEQUENCE_RNN = 35,\n  GATHER = 36,\n  BATCH_TO_SPACE_ND = 37,\n  SPACE_TO_BATCH_ND = 38,\n  TRANSPOSE = 39,\n  MEAN = 40,\n  SUB = 41,\n  DIV = 42,\n  SQUEEZE = 43,\n  UNIDIRECTIONAL_SEQUENCE_LSTM = 44,\n  STRIDED_SLICE = 45,\n  BIDIRECTIONAL_SEQUENCE_RNN = 46,\n  EXP = 47,\n  TOPK_V2 = 48,\n  SPLIT = 49,\n  LOG_SOFTMAX = 50,\n  // DELEGATE is a special op type for the operations which are delegated to\n  // other backends.\n  // WARNING: Experimental interface, subject to change\n  DELEGATE = 51,\n  BIDIRECTIONAL_SEQUENCE_LSTM = 52,\n  CAST = 53,\n  PRELU = 54,\n  MAXIMUM = 55,\n  ARG_MAX = 56,\n  MINIMUM = 57,\n  LESS = 58,\n  NEG = 59,\n  PADV2 = 60,\n  GREATER = 61,\n  GREATER_EQUAL = 62,\n  LESS_EQUAL = 63,\n  SELECT = 64,\n  SLICE = 65,\n  SIN = 66,\n  TRANSPOSE_CONV = 67,\n  SPARSE_TO_DENSE = 68,\n  TILE = 69,\n  
EXPAND_DIMS = 70,\n  EQUAL = 71,\n  NOT_EQUAL = 72,\n  LOG = 73,\n  SUM = 74,\n  SQRT = 75,\n  RSQRT = 76,\n  SHAPE = 77,\n  POW = 78,\n  ARG_MIN = 79,\n  FAKE_QUANT = 80,\n  REDUCE_PROD = 81,\n  REDUCE_MAX = 82,\n  PACK = 83,\n  LOGICAL_OR = 84,\n  ONE_HOT = 85,\n  LOGICAL_AND = 86,\n  LOGICAL_NOT = 87,\n  UNPACK = 88,\n  REDUCE_MIN = 89,\n  FLOOR_DIV = 90,\n  REDUCE_ANY = 91,\n  SQUARE = 92,\n  ZEROS_LIKE = 93,\n  FILL = 94,\n  FLOOR_MOD = 95,\n  RANGE = 96,\n  RESIZE_NEAREST_NEIGHBOR = 97,\n  LEAKY_RELU = 98,\n  SQUARED_DIFFERENCE = 99,\n  MIRROR_PAD = 100,\n  ABS = 101,\n  SPLIT_V = 102,\n  UNIQUE = 103,\n  CEIL = 104,\n  REVERSE_V2 = 105,\n  ADD_N = 106,\n  GATHER_ND = 107,\n  COS = 108,\n  WHERE = 109,\n  RANK = 110,\n  ELU = 111,\n  REVERSE_SEQUENCE = 112,\n  MATRIX_DIAG = 113,\n  QUANTIZE = 114,\n  MATRIX_SET_DIAG = 115,\n  ROUND = 116,\n  HARD_SWISH = 117,\n  IF = 118,\n  WHILE = 119,\n  NON_MAX_SUPPRESSION_V4 = 120,\n  NON_MAX_SUPPRESSION_V5 = 121,\n  SCATTER_ND = 122,\n  SELECT_V2 = 123,\n  DENSIFY = 124,\n  SEGMENT_SUM = 125,\n  BATCH_MATMUL = 126\n}\n\n\n// Options for the builtin operators.\nunion BuiltinOptions {\n  Conv2DOptions,\n  DepthwiseConv2DOptions,\n  ConcatEmbeddingsOptions,\n  LSHProjectionOptions,\n  Pool2DOptions,\n  SVDFOptions,\n  RNNOptions,\n  FullyConnectedOptions,\n  SoftmaxOptions,\n  ConcatenationOptions,\n  AddOptions,\n  L2NormOptions,\n  LocalResponseNormalizationOptions,\n  LSTMOptions,\n  ResizeBilinearOptions,\n  CallOptions,\n  ReshapeOptions,\n  SkipGramOptions,\n  SpaceToDepthOptions,\n  EmbeddingLookupSparseOptions,\n  MulOptions,\n  PadOptions,\n  GatherOptions,\n  BatchToSpaceNDOptions,\n  SpaceToBatchNDOptions,\n  TransposeOptions,\n  ReducerOptions,\n  SubOptions,\n  DivOptions,\n  SqueezeOptions,\n  SequenceRNNOptions,\n  StridedSliceOptions,\n  ExpOptions,\n  TopKV2Options,\n  SplitOptions,\n  LogSoftmaxOptions,\n  CastOptions,\n  DequantizeOptions,\n  MaximumMinimumOptions,\n  ArgMaxOptions,\n  LessOptions,\n  
NegOptions,\n  PadV2Options,\n  GreaterOptions,\n  GreaterEqualOptions,\n  LessEqualOptions,\n  SelectOptions,\n  SliceOptions,\n  TransposeConvOptions,\n  SparseToDenseOptions,\n  TileOptions,\n  ExpandDimsOptions,\n  EqualOptions,\n  NotEqualOptions,\n  ShapeOptions,\n  PowOptions,\n  ArgMinOptions,\n  FakeQuantOptions,\n  PackOptions,\n  LogicalOrOptions,\n  OneHotOptions,\n  LogicalAndOptions,\n  LogicalNotOptions,\n  UnpackOptions,\n  FloorDivOptions,\n  SquareOptions,\n  ZerosLikeOptions,\n  FillOptions,\n  BidirectionalSequenceLSTMOptions,\n  BidirectionalSequenceRNNOptions,\n  UnidirectionalSequenceLSTMOptions,\n  FloorModOptions,\n  RangeOptions,\n  ResizeNearestNeighborOptions,\n  LeakyReluOptions,\n  SquaredDifferenceOptions,\n  MirrorPadOptions,\n  AbsOptions,\n  SplitVOptions,\n  UniqueOptions,\n  ReverseV2Options,\n  AddNOptions,\n  GatherNdOptions,\n  CosOptions,\n  WhereOptions,\n  RankOptions,\n  ReverseSequenceOptions,\n  MatrixDiagOptions,\n  QuantizeOptions,\n  MatrixSetDiagOptions,\n  HardSwishOptions,\n  IfOptions,\n  WhileOptions,\n  DepthToSpaceOptions,\n  NonMaxSuppressionV4Options,\n  NonMaxSuppressionV5Options,\n  ScatterNdOptions,\n  SelectV2Options,\n  DensifyOptions,\n  SegmentSumOptions,\n  BatchMatMulOptions\n}\n\nenum Padding : byte { SAME, VALID }\n\nenum ActivationFunctionType : byte {\n  NONE = 0,\n  RELU = 1,\n  RELU_N1_TO_1 = 2,\n  RELU6 = 3,\n  TANH = 4,\n  SIGN_BIT = 5,\n}\n\ntable Conv2DOptions {\n  padding:Padding;\n  stride_w:int;\n  stride_h:int;\n  fused_activation_function:ActivationFunctionType;\n  dilation_w_factor:int = 1;\n  dilation_h_factor:int = 1;\n}\n\ntable Pool2DOptions {\n  padding:Padding;\n  stride_w:int;\n  stride_h:int;\n  filter_width:int;\n  filter_height:int;\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable DepthwiseConv2DOptions {\n  // Parameters for DepthwiseConv version 1 or above.\n  padding:Padding;\n  stride_w:int;\n  stride_h:int;\n  // `depth_multiplier` is redundant. 
It's used by CPU kernels in\n  // TensorFlow 2.0 or below, but ignored in versions above.\n  // See comments in lite/c/builtin_op_data.h for more details.\n  depth_multiplier:int;\n  fused_activation_function:ActivationFunctionType;\n  // Parameters for DepthwiseConv version 2 or above.\n  dilation_w_factor:int = 1;\n  dilation_h_factor:int = 1;\n}\n\ntable ConcatEmbeddingsOptions {\n  num_channels:int;\n  num_columns_per_channel:[int];\n  embedding_dim_per_channel:[int]; // This could be inferred from parameters.\n}\n\nenum LSHProjectionType: byte {\n  UNKNOWN = 0,\n  SPARSE = 1,\n  DENSE = 2,\n}\n\ntable LSHProjectionOptions {\n  type: LSHProjectionType;\n}\n\ntable SVDFOptions {\n  rank:int;\n  fused_activation_function:ActivationFunctionType;\n  // For weights-only quantization, use asymmetric quantization for non\n  // constant inputs at evaluation time.\n  asymmetric_quantize_inputs:bool;\n}\n\n// An implementation of TensorFlow RNNCell.\ntable RNNOptions {\n  fused_activation_function:ActivationFunctionType;\n  asymmetric_quantize_inputs:bool;\n}\n\n// An implementation of TensorFlow dynamic_rnn with RNNCell.\ntable SequenceRNNOptions {\n  time_major:bool;\n  fused_activation_function:ActivationFunctionType;\n  asymmetric_quantize_inputs:bool;\n}\n\n// An implementation of TensorFlow bidirectional_dynamic_rnn with RNNCell.\ntable BidirectionalSequenceRNNOptions {\n  time_major:bool;\n  fused_activation_function:ActivationFunctionType;\n  merge_outputs: bool;\n  asymmetric_quantize_inputs:bool;\n}\n\nenum FullyConnectedOptionsWeightsFormat: byte {\n  DEFAULT = 0,\n  SHUFFLED4x16INT8 = 1,\n}\n\n// An implementation of TensorFlow fully_connected (a.k.a Dense) layer.\ntable FullyConnectedOptions {\n  // Parameters for FullyConnected version 1 or above.\n  fused_activation_function:ActivationFunctionType;\n\n  // Parameters for FullyConnected version 2 or above.\n  weights_format:FullyConnectedOptionsWeightsFormat = DEFAULT;\n\n  // Parameters for FullyConnected 
version 5 or above.\n  // If set to true, then the number of dimension is preserved. Furthermore,\n  // all but the last dimension of the input and output shapes will be equal.\n  keep_num_dims: bool;\n\n  // Parameters for FullyConnected version 7 or above.\n  // If set to true, then weights-only op will use asymmetric quantization for\n  // inputs.\n  asymmetric_quantize_inputs: bool;\n}\n\ntable SoftmaxOptions {\n  beta: float;\n}\n\n// An implementation of TensorFlow concat.\ntable ConcatenationOptions {\n  axis:int;\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable AddOptions {\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable MulOptions {\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable L2NormOptions {\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable LocalResponseNormalizationOptions {\n  radius:int;\n  bias:float;\n  alpha:float;\n  beta:float;\n}\n\nenum LSTMKernelType : byte {\n  // Full LSTM kernel which supports peephole and projection.\n  FULL = 0,\n  // Basic LSTM kernels. 
Equivalent to TensorFlow BasicLSTMCell.\n  BASIC = 1,\n}\n\n// An implementation of TensorFlow LSTMCell and CoupledInputForgetGateLSTMCell\ntable LSTMOptions {\n  // Parameters for LSTM version 1 or above.\n  fused_activation_function:ActivationFunctionType;\n  cell_clip: float; // Optional, 0.0 means no clipping\n  proj_clip: float; // Optional, 0.0 means no clipping\n\n  // Parameters for LSTM version 2 or above.\n  // Basic kernel is only supported in version 2 or above.\n  kernel_type: LSTMKernelType = FULL;\n\n  // Parameters for LSTM version 4 or above.\n  asymmetric_quantize_inputs: bool;\n}\n\n// An implementation of TensorFlow dynamic_rnn with LSTMCell.\ntable UnidirectionalSequenceLSTMOptions {\n  fused_activation_function:ActivationFunctionType;\n  cell_clip: float; // Optional, 0.0 means no clipping\n  proj_clip: float; // Optional, 0.0 means no clipping\n\n  // If true then first dimension is sequence, otherwise batch.\n  time_major:bool;\n\n  // Parameter for Unidirectional Sequence LSTM version 4.\n  asymmetric_quantize_inputs:bool;\n}\n\ntable BidirectionalSequenceLSTMOptions {\n  // Parameters supported by version 1:\n  fused_activation_function:ActivationFunctionType;\n  cell_clip: float; // Optional, 0.0 means no clipping\n  proj_clip: float; // Optional, 0.0 means no clipping\n\n  // If true, store the outputs of both directions into the first output.\n  merge_outputs: bool;\n\n  // Parameters supported by version 2:\n  // If true then first dimension is sequence, otherwise batch.\n  // Version 1 implementations assumed time_major to be true, so this default\n  // value should never change.\n  time_major: bool = true;\n\n  // Parameters for version 3 or above.\n  asymmetric_quantize_inputs:bool;\n}\n\ntable ResizeBilinearOptions {\n  new_height: int (deprecated);\n  new_width: int (deprecated);\n  align_corners: bool;\n  half_pixel_centers: bool;\n}\n\ntable ResizeNearestNeighborOptions {\n  align_corners: bool;\n  half_pixel_centers: 
bool;\n}\n\n// A call operation options\ntable CallOptions {\n  // The subgraph index that needs to be called.\n  subgraph:uint;\n}\n\ntable PadOptions {\n}\n\ntable PadV2Options {\n}\n\ntable ReshapeOptions {\n  new_shape:[int];\n}\n\ntable SpaceToBatchNDOptions {\n}\n\ntable BatchToSpaceNDOptions {\n}\n\ntable SkipGramOptions {\n  ngram_size: int;\n  max_skip_size: int;\n  include_all_ngrams: bool;\n}\n\ntable SpaceToDepthOptions {\n  block_size: int;\n}\n\ntable DepthToSpaceOptions {\n  block_size: int;\n}\n\ntable SubOptions {\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable DivOptions {\n  fused_activation_function:ActivationFunctionType;\n}\n\ntable TopKV2Options {\n}\n\nenum CombinerType : byte {\n  SUM = 0,\n  MEAN = 1,\n  SQRTN = 2,\n}\n\ntable EmbeddingLookupSparseOptions {\n  combiner:CombinerType;\n}\n\ntable GatherOptions {\n  axis: int;\n}\n\ntable TransposeOptions {\n}\n\ntable ExpOptions {\n}\n\ntable CosOptions {\n}\n\ntable ReducerOptions {\n  keep_dims: bool;\n}\n\ntable SqueezeOptions {\n  squeeze_dims:[int];\n}\n\ntable SplitOptions {\n  num_splits: int;\n}\n\ntable SplitVOptions {\n  num_splits: int;\n}\n\ntable StridedSliceOptions {\n  begin_mask: int;\n  end_mask: int;\n  ellipsis_mask: int;\n  new_axis_mask: int;\n  shrink_axis_mask: int;\n}\n\ntable LogSoftmaxOptions {\n}\n\ntable CastOptions {\n  in_data_type: TensorType;\n  out_data_type: TensorType;\n}\n\ntable DequantizeOptions {\n}\n\ntable MaximumMinimumOptions {\n}\n\ntable TileOptions {\n}\n\ntable ArgMaxOptions {\n  output_type : TensorType;\n}\n\ntable ArgMinOptions {\n  output_type : TensorType;\n}\n\ntable GreaterOptions {\n}\n\ntable GreaterEqualOptions {\n}\n\ntable LessOptions {\n}\n\ntable LessEqualOptions {\n}\n\ntable NegOptions {\n}\n\ntable SelectOptions {\n}\n\ntable SliceOptions {\n}\n\ntable TransposeConvOptions {\n  padding:Padding;\n  stride_w:int;\n  stride_h:int;\n}\n\ntable ExpandDimsOptions {\n}\n\ntable SparseToDenseOptions {\n  
validate_indices:bool;\n}\n\ntable EqualOptions {\n}\n\ntable NotEqualOptions {\n}\n\ntable ShapeOptions {\n  // Optional output type of the operation (int32 or int64). Defaults to int32.\n  out_type : TensorType;\n}\n\ntable RankOptions {\n}\n\ntable PowOptions {\n}\n\ntable FakeQuantOptions {\n  // Parameters supported by version 1:\n  min:float;\n  max:float;\n  num_bits:int;\n\n  // Parameters supported by version 2:\n  narrow_range:bool;\n}\n\ntable PackOptions {\n  values_count:int;\n  axis:int;\n}\n\ntable LogicalOrOptions {\n}\n\ntable OneHotOptions {\n  axis:int;\n}\n\ntable AbsOptions {\n}\n\n\ntable HardSwishOptions {\n}\n\ntable LogicalAndOptions {\n}\n\ntable LogicalNotOptions {\n}\n\ntable UnpackOptions {\n  num:int;\n  axis:int;\n}\n\ntable FloorDivOptions {\n}\n\ntable SquareOptions {\n}\n\ntable ZerosLikeOptions {\n}\n\ntable FillOptions {\n}\n\ntable FloorModOptions {\n}\n\ntable RangeOptions {\n}\n\ntable LeakyReluOptions {\n  alpha:float;\n}\n\ntable SquaredDifferenceOptions {\n}\n\nenum MirrorPadMode : byte {\n  // Doesn't include borders.\n  REFLECT = 0,\n  // Includes borders.\n  SYMMETRIC = 1,\n}\n\ntable MirrorPadOptions {\n  mode:MirrorPadMode;\n}\n\ntable UniqueOptions {\n  idx_out_type:TensorType = INT32;\n}\n\ntable ReverseV2Options {\n}\n\ntable AddNOptions {\n}\n\ntable GatherNdOptions {\n}\n\ntable WhereOptions {\n}\n\ntable ReverseSequenceOptions {\n  seq_dim:int;\n  batch_dim:int = 0;\n}\n\ntable MatrixDiagOptions {\n}\n\ntable QuantizeOptions {\n}\n\ntable MatrixSetDiagOptions {\n}\n\ntable IfOptions {\n  then_subgraph_index:int;\n  else_subgraph_index:int;\n}\n\ntable WhileOptions {\n  cond_subgraph_index:int;\n  body_subgraph_index:int;\n}\n\ntable NonMaxSuppressionV4Options {\n}\n\ntable NonMaxSuppressionV5Options {\n}\n\ntable ScatterNdOptions {\n}\n\ntable SelectV2Options {\n}\n\ntable DensifyOptions {\n}\n\ntable SegmentSumOptions {\n}\n\ntable BatchMatMulOptions {\n  adj_x:bool;\n  adj_y:bool;\n}\n\n// An OperatorCode can 
be an enum value (BuiltinOperator) if the operator is a\n// builtin, or a string if the operator is custom.\ntable OperatorCode {\n  builtin_code:BuiltinOperator;\n  custom_code:string;\n\n  // The version of the operator. The version need to be bumped whenever new\n  // parameters are introduced into an op.\n  version:int = 1;\n}\n\nenum CustomOptionsFormat : byte {\n  FLEXBUFFERS = 0,\n}\n\n// An operator takes tensors as inputs and outputs. The type of operation being\n// performed is determined by an index into the list of valid OperatorCodes,\n// while the specifics of each operations is configured using builtin_options\n// or custom_options.\ntable Operator {\n  // Index into the operator_codes array. Using an integer here avoids\n  // complicate map lookups.\n  opcode_index:uint;\n\n  // Optional input are indicated by -1.\n  inputs:[int];\n  outputs:[int];\n\n  builtin_options:BuiltinOptions;\n  custom_options:[ubyte];\n  custom_options_format:CustomOptionsFormat;\n\n  // A list of booleans indicating the input tensors which are being mutated by\n  // this operator.(e.g. used by RNN and LSTM).\n  // For example, if the \"inputs\" array refers to 5 tensors and the second and\n  // fifth are mutable variables, then this list will contain\n  // [false, true, false, false, true].\n  //\n  // If the list is empty, no variable is mutated in this operator.\n  // The list either has the same length as `inputs`, or is empty.\n  mutating_variable_inputs:[bool];\n\n  // A list of indices to the subgraph's \"tensors\" that are internal to an Op.\n  // Internal tensors are those that do not flow in or out of the operation,\n  // but instead are part of internal computation. As such, the operation's\n  // implementation may manage its memory more efficiently. They are needed\n  // however (i.e. 
not just an implementation detail) since they are part of the\n  // computation, which may require relevant metadata such as quantization\n  // parameters.\n  intermediates:[int];\n}\n\n// The root type, defining a subgraph, which typically represents an entire\n// model.\ntable SubGraph {\n  // A list of all tensors used in this subgraph.\n  tensors:[Tensor];\n\n  // Indices of the tensors that are inputs into this subgraph. Note this is\n  // the list of non-static tensors that feed into the subgraph for inference.\n  inputs:[int];\n\n  // Indices of the tensors that are outputs out of this subgraph. Note this is\n  // the list of output tensors that are considered the product of the\n  // subgraph's inference.\n  outputs:[int];\n\n  // All operators, in execution order.\n  operators:[Operator];\n\n  // Name of this subgraph (used for debugging).\n  name:string;\n}\n\n// Table of raw data buffers (used for constant tensors). Referenced by tensors\n// by index. The generous alignment accommodates mmap-friendly data structures.\ntable Buffer {\n  data:[ubyte] (force_align: 16);\n}\n\ntable Metadata {\n  // A human readable string to uniquely identify a Metadata.\n  name:string;\n  // An index to the buffers table.\n  buffer:uint;\n}\n\ntable Model {\n  // Version of the schema.\n  version:uint;\n\n  // A list of all operator codes used in this model. This is\n  // kept in order because operators carry an index into this\n  // vector.\n  operator_codes:[OperatorCode];\n\n  // All the subgraphs of the model. The 0th is assumed to be the main\n  // model.\n  subgraphs:[SubGraph];\n\n  // A description of the model.\n  description:string;\n\n  // Buffers of the model.\n  // Note the 0th entry of this array must be an empty buffer (sentinel).\n  // This is a convention so that tensors without a buffer can provide 0 as\n  // their buffer.\n  buffers:[Buffer];\n\n  // Metadata about the model. 
Indirects into the existing buffers list.\n  // Deprecated, prefer to use metadata field.\n\n  // Metadata about the model.\n  metadata:[Metadata];\n}\n\nroot_type Model;"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/helpers.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom . import flatbuffers as fb\nimport numpy as np\nimport sys\nimport re\n\n# See: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs\n\n\nOUTPUT_FILE_IDENTIFIER = \"TFL3\"\nOUTPUT_SCHEMA_VERSION = 3\n\nBuiltinOptionsClasses = [\n    None,\n    fb.Conv2DOptions,\n    fb.DepthwiseConv2DOptions,\n    fb.ConcatEmbeddingsOptions,\n    fb.LSHProjectionOptions,\n    fb.Pool2DOptions,\n    fb.SVDFOptions,\n    fb.RNNOptions,\n    fb.FullyConnectedOptions,\n    fb.SoftmaxOptions,\n    fb.ConcatenationOptions,\n    fb.AddOptions,\n    fb.L2NormOptions,\n    fb.LocalResponseNormalizationOptions,\n    fb.LSTMOptions,\n    fb.ResizeBilinearOptions,\n    fb.CallOptions,\n    fb.ReshapeOptions,\n    fb.SkipGramOptions,\n    fb.SpaceToDepthOptions,\n    fb.EmbeddingLookupSparseOptions,\n    fb.MulOptions,\n    fb.PadOptions,\n    fb.GatherOptions,\n    fb.BatchToSpaceNDOptions,\n    fb.SpaceToBatchNDOptions,\n    fb.TransposeOptions,\n    fb.ReducerOptions,\n    fb.SubOptions,\n    fb.DivOptions,\n    fb.SqueezeOptions,\n    fb.SequenceRNNOptions,\n    fb.StridedSliceOptions,\n    fb.ExpOptions,\n    fb.TopKV2Options,\n    fb.SplitOptions,\n    fb.LogSoftmaxOptions,\n    fb.CastOptions,\n    fb.DequantizeOptions,\n    fb.MaximumMinimumOptions,\n    fb.ArgMaxOptions,\n    fb.LessOptions,\n    fb.NegOptions,\n    fb.PadV2Options,\n 
   fb.GreaterOptions,\n    fb.GreaterEqualOptions,\n    fb.LessEqualOptions,\n    fb.SelectOptions,\n    fb.SliceOptions,\n    fb.TransposeConvOptions,\n    fb.SparseToDenseOptions,\n    fb.TileOptions,\n    fb.ExpandDimsOptions,\n    fb.EqualOptions,\n    fb.NotEqualOptions,\n    fb.ShapeOptions,\n    fb.PowOptions,\n    fb.ArgMinOptions,\n    fb.FakeQuantOptions,\n    fb.PackOptions,\n    fb.LogicalOrOptions,\n    fb.OneHotOptions,\n    fb.LogicalAndOptions,\n    fb.LogicalNotOptions,\n    fb.UnpackOptions,\n    fb.FloorDivOptions,\n    fb.SquareOptions,\n    fb.ZerosLikeOptions,\n    fb.FillOptions,\n    fb.BidirectionalSequenceLSTMOptions,\n    fb.BidirectionalSequenceRNNOptions,\n    fb.UnidirectionalSequenceLSTMOptions,\n    fb.FloorModOptions,\n    fb.RangeOptions,\n    fb.ResizeNearestNeighborOptions,\n    fb.LeakyReluOptions,\n    fb.SquaredDifferenceOptions,\n    fb.MirrorPadOptions,\n    fb.AbsOptions,\n    fb.SplitVOptions,\n    fb.UniqueOptions,\n    fb.ReverseV2Options,\n    fb.AddNOptions,\n    fb.GatherNdOptions,\n    fb.CosOptions,\n    fb.WhereOptions,\n    fb.RankOptions,\n    fb.ReverseSequenceOptions,\n    fb.MatrixDiagOptions,\n    fb.QuantizeOptions,\n    fb.MatrixSetDiagOptions,\n    fb.HardSwishOptions,\n    fb.IfOptions,\n    fb.WhileOptions,\n    fb.DepthToSpaceOptions,\n    fb.NonMaxSuppressionV4Options,\n    fb.NonMaxSuppressionV5Options,\n    fb.ScatterNdOptions,\n    fb.SelectV2Options,\n    fb.DensifyOptions,\n    fb.SegmentSumOptions,\n    fb.BatchMatMulOptions,\n]\n\nBuiltinOptionsByOperator = {\n    fb.BuiltinOperator.ADD: fb.BuiltinOptions.AddOptions,\n    fb.BuiltinOperator.AVERAGE_POOL_2D: fb.BuiltinOptions.Pool2DOptions,\n    fb.BuiltinOperator.CONCATENATION: fb.BuiltinOptions.ConcatenationOptions,\n    fb.BuiltinOperator.CONV_2D: fb.BuiltinOptions.Conv2DOptions,\n    fb.BuiltinOperator.DEPTHWISE_CONV_2D: fb.BuiltinOptions.DepthwiseConv2DOptions,\n    fb.BuiltinOperator.DEQUANTIZE: fb.BuiltinOptions.DequantizeOptions,\n    
fb.BuiltinOperator.EMBEDDING_LOOKUP: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.FLOOR: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.FULLY_CONNECTED: fb.BuiltinOptions.FullyConnectedOptions,\n    fb.BuiltinOperator.HASHTABLE_LOOKUP: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.L2_NORMALIZATION: fb.BuiltinOptions.L2NormOptions,\n    fb.BuiltinOperator.L2_POOL_2D: fb.BuiltinOptions.Pool2DOptions,\n    fb.BuiltinOperator.LOCAL_RESPONSE_NORMALIZATION: fb.BuiltinOptions.LocalResponseNormalizationOptions,\n    fb.BuiltinOperator.LOGISTIC: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.LSH_PROJECTION: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.LSTM: fb.BuiltinOptions.LSTMOptions,\n    fb.BuiltinOperator.MAX_POOL_2D: fb.BuiltinOptions.Pool2DOptions,\n    fb.BuiltinOperator.MUL: fb.BuiltinOptions.MulOptions,\n    fb.BuiltinOperator.RELU: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.RELU_N1_TO_1: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.RELU6: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.RESHAPE: fb.BuiltinOptions.ReshapeOptions,\n    fb.BuiltinOperator.RESIZE_BILINEAR: fb.BuiltinOptions.ResizeBilinearOptions,\n    fb.BuiltinOperator.RNN: fb.BuiltinOptions.RNNOptions,\n    fb.BuiltinOperator.SOFTMAX: fb.BuiltinOptions.SoftmaxOptions,\n    fb.BuiltinOperator.SPACE_TO_DEPTH: fb.BuiltinOptions.SpaceToDepthOptions,\n    fb.BuiltinOperator.SVDF: fb.BuiltinOptions.SVDFOptions,\n    fb.BuiltinOperator.TANH: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.CONCAT_EMBEDDINGS: fb.BuiltinOptions.ConcatEmbeddingsOptions,\n    fb.BuiltinOperator.SKIP_GRAM: fb.BuiltinOptions.SkipGramOptions,\n    fb.BuiltinOperator.CALL: fb.BuiltinOptions.CallOptions,\n    fb.BuiltinOperator.CUSTOM: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.EMBEDDING_LOOKUP_SPARSE: fb.BuiltinOptions.EmbeddingLookupSparseOptions,\n    fb.BuiltinOperator.PAD: fb.BuiltinOptions.PadOptions,\n    fb.BuiltinOperator.UNIDIRECTIONAL_SEQUENCE_RNN: fb.BuiltinOptions.NONE,\n    
fb.BuiltinOperator.GATHER: fb.BuiltinOptions.GatherOptions,\n    fb.BuiltinOperator.BATCH_TO_SPACE_ND: fb.BuiltinOptions.BatchToSpaceNDOptions,\n    fb.BuiltinOperator.SPACE_TO_BATCH_ND: fb.BuiltinOptions.SpaceToBatchNDOptions,\n    fb.BuiltinOperator.TRANSPOSE: fb.BuiltinOptions.TransposeOptions,\n    fb.BuiltinOperator.MEAN: fb.BuiltinOptions.ReducerOptions,\n    fb.BuiltinOperator.SUB: fb.BuiltinOptions.SubOptions,\n    fb.BuiltinOperator.DIV: fb.BuiltinOptions.DivOptions,\n    fb.BuiltinOperator.SQUEEZE: fb.BuiltinOptions.SqueezeOptions,\n    fb.BuiltinOperator.UNIDIRECTIONAL_SEQUENCE_LSTM: fb.BuiltinOptions.UnidirectionalSequenceLSTMOptions,\n    fb.BuiltinOperator.STRIDED_SLICE: fb.BuiltinOptions.StridedSliceOptions,\n    fb.BuiltinOperator.BIDIRECTIONAL_SEQUENCE_RNN: fb.BuiltinOptions.BidirectionalSequenceRNNOptions,\n    fb.BuiltinOperator.EXP: fb.BuiltinOptions.ExpOptions,\n    fb.BuiltinOperator.TOPK_V2: fb.BuiltinOptions.TopKV2Options,\n    fb.BuiltinOperator.SPLIT: fb.BuiltinOptions.SplitOptions,\n    fb.BuiltinOperator.LOG_SOFTMAX: fb.BuiltinOptions.LogSoftmaxOptions,\n    fb.BuiltinOperator.DELEGATE: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.BIDIRECTIONAL_SEQUENCE_LSTM: fb.BuiltinOptions.BidirectionalSequenceLSTMOptions,\n    fb.BuiltinOperator.CAST: fb.BuiltinOptions.CastOptions,\n    fb.BuiltinOperator.PRELU: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.MAXIMUM: fb.BuiltinOptions.MaximumMinimumOptions,\n    fb.BuiltinOperator.ARG_MAX: fb.BuiltinOptions.ArgMaxOptions,\n    fb.BuiltinOperator.MINIMUM: fb.BuiltinOptions.MaximumMinimumOptions,\n    fb.BuiltinOperator.LESS: fb.BuiltinOptions.LessOptions,\n    fb.BuiltinOperator.NEG: fb.BuiltinOptions.NegOptions,\n    fb.BuiltinOperator.PADV2: fb.BuiltinOptions.PadV2Options,\n    fb.BuiltinOperator.GREATER: fb.BuiltinOptions.GreaterOptions,\n    fb.BuiltinOperator.GREATER_EQUAL: fb.BuiltinOptions.GreaterEqualOptions,\n    fb.BuiltinOperator.LESS_EQUAL: fb.BuiltinOptions.LessEqualOptions,\n    
fb.BuiltinOperator.SELECT: fb.BuiltinOptions.SelectOptions,\n    fb.BuiltinOperator.SLICE: fb.BuiltinOptions.SliceOptions,\n    fb.BuiltinOperator.SIN: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.TRANSPOSE_CONV: fb.BuiltinOptions.TransposeConvOptions,\n    fb.BuiltinOperator.SPARSE_TO_DENSE: fb.BuiltinOptions.SparseToDenseOptions,\n    fb.BuiltinOperator.TILE: fb.BuiltinOptions.TileOptions,\n    fb.BuiltinOperator.EXPAND_DIMS: fb.BuiltinOptions.ExpandDimsOptions,\n    fb.BuiltinOperator.EQUAL: fb.BuiltinOptions.EqualOptions,\n    fb.BuiltinOperator.NOT_EQUAL: fb.BuiltinOptions.NotEqualOptions,\n    fb.BuiltinOperator.LOG: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.SUM: fb.BuiltinOptions.ReducerOptions,\n    fb.BuiltinOperator.SQRT: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.RSQRT: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.SHAPE: fb.BuiltinOptions.ShapeOptions,\n    fb.BuiltinOperator.POW: fb.BuiltinOptions.PowOptions,\n    fb.BuiltinOperator.ARG_MIN: fb.BuiltinOptions.ArgMinOptions,\n    fb.BuiltinOperator.FAKE_QUANT: fb.BuiltinOptions.FakeQuantOptions,\n    fb.BuiltinOperator.REDUCE_PROD: fb.BuiltinOptions.ReducerOptions,\n    fb.BuiltinOperator.REDUCE_MAX: fb.BuiltinOptions.ReducerOptions,\n    fb.BuiltinOperator.PACK: fb.BuiltinOptions.PackOptions,\n    fb.BuiltinOperator.LOGICAL_OR: fb.BuiltinOptions.LogicalOrOptions,\n    fb.BuiltinOperator.ONE_HOT: fb.BuiltinOptions.OneHotOptions,\n    fb.BuiltinOperator.LOGICAL_AND: fb.BuiltinOptions.LogicalAndOptions,\n    fb.BuiltinOperator.LOGICAL_NOT: fb.BuiltinOptions.LogicalNotOptions,\n    fb.BuiltinOperator.UNPACK: fb.BuiltinOptions.UnpackOptions,\n    fb.BuiltinOperator.REDUCE_MIN: fb.BuiltinOptions.ReducerOptions,\n    fb.BuiltinOperator.FLOOR_DIV: fb.BuiltinOptions.FloorDivOptions,\n    fb.BuiltinOperator.REDUCE_ANY: fb.BuiltinOptions.ReducerOptions,\n    fb.BuiltinOperator.SQUARE: fb.BuiltinOptions.SquareOptions,\n    fb.BuiltinOperator.ZEROS_LIKE: fb.BuiltinOptions.ZerosLikeOptions,\n    
fb.BuiltinOperator.FILL: fb.BuiltinOptions.FillOptions,\n    fb.BuiltinOperator.FLOOR_MOD: fb.BuiltinOptions.FloorModOptions,\n    fb.BuiltinOperator.RANGE: fb.BuiltinOptions.RangeOptions,\n    fb.BuiltinOperator.RESIZE_NEAREST_NEIGHBOR: fb.BuiltinOptions.ResizeNearestNeighborOptions,\n    fb.BuiltinOperator.LEAKY_RELU: fb.BuiltinOptions.LeakyReluOptions,\n    fb.BuiltinOperator.SQUARED_DIFFERENCE: fb.BuiltinOptions.SquaredDifferenceOptions,\n    fb.BuiltinOperator.MIRROR_PAD: fb.BuiltinOptions.MirrorPadOptions,\n    fb.BuiltinOperator.ABS: fb.BuiltinOptions.AbsOptions,\n    fb.BuiltinOperator.SPLIT_V: fb.BuiltinOptions.SplitVOptions,\n    fb.BuiltinOperator.UNIQUE: fb.BuiltinOptions.UniqueOptions,\n    fb.BuiltinOperator.CEIL: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.REVERSE_V2: fb.BuiltinOptions.ReverseV2Options,\n    fb.BuiltinOperator.ADD_N: fb.BuiltinOptions.AddNOptions,\n    fb.BuiltinOperator.GATHER_ND: fb.BuiltinOptions.GatherNdOptions,\n    fb.BuiltinOperator.COS: fb.BuiltinOptions.CosOptions,\n    fb.BuiltinOperator.WHERE: fb.BuiltinOptions.WhereOptions,\n    fb.BuiltinOperator.RANK: fb.BuiltinOptions.RankOptions,\n    fb.BuiltinOperator.ELU: fb.BuiltinOptions.NONE,\n    fb.BuiltinOperator.REVERSE_SEQUENCE: fb.BuiltinOptions.ReverseSequenceOptions,\n    fb.BuiltinOperator.MATRIX_DIAG: fb.BuiltinOptions.MatrixDiagOptions,\n    fb.BuiltinOperator.QUANTIZE: fb.BuiltinOptions.QuantizeOptions,\n    fb.BuiltinOperator.MATRIX_SET_DIAG: fb.BuiltinOptions.MatrixSetDiagOptions,\n    fb.BuiltinOperator.HARD_SWISH: fb.BuiltinOptions.HardSwishOptions,\n    fb.BuiltinOperator.IF: fb.BuiltinOptions.IfOptions,\n    fb.BuiltinOperator.WHILE: fb.BuiltinOptions.WhileOptions,\n    fb.BuiltinOperator.DEPTH_TO_SPACE: fb.BuiltinOptions.DepthToSpaceOptions,\n    fb.BuiltinOperator.NON_MAX_SUPPRESSION_V4: fb.BuiltinOptions.NonMaxSuppressionV4Options,\n    fb.BuiltinOperator.NON_MAX_SUPPRESSION_V5: fb.BuiltinOptions.NonMaxSuppressionV5Options,\n    fb.BuiltinOperator.SCATTER_ND: fb.BuiltinOptions.ScatterNdOptions,\n    fb.BuiltinOperator.ROUND: fb.BuiltinOptions.NONE,\n    
fb.BuiltinOperator.SELECT_V2: fb.BuiltinOptions.SelectV2Options,\n    fb.BuiltinOperator.DENSIFY: fb.BuiltinOptions.DensifyOptions,\n    fb.BuiltinOperator.SEGMENT_SUM: fb.BuiltinOptions.SegmentSumOptions,\n    fb.BuiltinOperator.BATCH_MATMUL: fb.BuiltinOptions.BatchMatMulOptions,\n}\n\nCustomOptionsKey = 'custom_options'\n\nDtypeToNumpy = {\n    fb.TensorType.FLOAT16: np.float16,\n    fb.TensorType.FLOAT32: np.float32,\n    fb.TensorType.INT8: np.int8,\n    fb.TensorType.INT16: np.int16,\n    fb.TensorType.INT32: np.int32,\n    fb.TensorType.INT64: np.int64,\n    fb.TensorType.UINT8: np.uint8,\n    fb.TensorType.STRING: np.str_,\n    fb.TensorType.BOOL: np.bool_,\n    fb.TensorType.COMPLEX64: np.complex64,\n}\n\nDtypeFromNumpy = {\n    np.float16: fb.TensorType.FLOAT16,\n    np.float32: fb.TensorType.FLOAT32,\n    np.int8: fb.TensorType.INT8,\n    np.int16: fb.TensorType.INT16,\n    np.int32: fb.TensorType.INT32,\n    np.int64: fb.TensorType.INT64,\n    np.uint8: fb.TensorType.UINT8,\n    np.str_: fb.TensorType.STRING,\n    np.bool_: fb.TensorType.BOOL,\n    np.complex64: fb.TensorType.COMPLEX64,\n}\n\n\n_regex1 = re.compile('(.)([A-Z][a-z]+)')\n_regex2 = re.compile('([a-z0-9])([A-Z])')\n\n\ndef camel_to_snake(s):\n    subbed = _regex1.sub(r'\\1_\\2', s)\n    return _regex2.sub(r'\\1_\\2', subbed).lower()\n\n\ndef snake_to_camel(s):\n    return ''.join(c for c in s.title() if c != '_')\n\n\ndef substitute_enum_value_with_name(key, value, optionsClass):\n    cls, map = _OptionEnumNameByValueMaps.get(key, (None, None))\n    return map[value] if map is not None and (cls is None or cls == optionsClass) else value\n\n\ndef substitute_enum_name_with_value(key, name, optionsClass):\n    cls, map = _OptionEnumValueByNameMaps.get(key, (None, None))\n    return map[name] if map is not None and (cls is None or cls == optionsClass) else name\n\n\ndef _generate_enum_value_by_name(enumClass):\n    return {name: value for name, value in enumClass.__dict__.items() if not 
name.startswith('_')}\n\n\ndef _generate_enum_name_by_value(enumClass):\n    return {value: name for name, value in enumClass.__dict__.items() if not name.startswith('_')}\n\n\ndef enumerate_options_getters(optionsClass):\n    return {camel_to_snake(name): func for name, func in optionsClass.__dict__.items()\n            if not name.startswith('_')\n            and name != 'Init' and not name.startswith('GetRootAs')\n            and not name.endswith('AsNumpy') and not name.endswith('Length')\n            and not isinstance(func, classmethod)}\n\n\ndef enumerate_options_length_getters(optionsClass):\n    return {camel_to_snake(name[:-6]): func for name, func in optionsClass.__dict__.items()\n            if not name.startswith('_')\n            and not name.startswith('GetRootAs') and name.endswith('Length')}\n\n\ndef enumerate_options_adders(optionsClass):\n    className = optionsClass.__name__\n    prefix = className + 'Add'\n    optionsModule = sys.modules[optionsClass.__module__]\n    return {camel_to_snake(name[len(prefix):]): func for name, func in optionsModule.__dict__.items()\n            if name.startswith(prefix)}\n\n\ndef enumerate_options_vector_starters(optionsClass):\n    className = optionsClass.__name__\n    prefix, suffix = className + 'Start', 'Vector'\n    optionsModule = sys.modules[optionsClass.__module__]\n    return {camel_to_snake(name[len(prefix):-len(suffix)]): func for name, func in optionsModule.__dict__.items()\n            if name.startswith(prefix) and name.endswith(suffix)}\n\n\ndef get_options_starter_ender(optionsClass):\n    className = optionsClass.__name__\n    optionsModule = sys.modules[optionsClass.__module__]\n    moduleDict = optionsModule.__dict__\n    return moduleDict[className + 'Start'], moduleDict[className + 'End']\n\n\n_OptionEnumNameByValueMaps = {\n    'padding': (None, _generate_enum_name_by_value(fb.Padding)),\n    'fused_activation_function': (None, _generate_enum_name_by_value(fb.ActivationFunctionType)),\n    
'weights_format': (\n        fb.FullyConnectedOptions, _generate_enum_name_by_value(fb.FullyConnectedOptionsWeightsFormat)),\n    'type': (fb.LSHProjectionOptions, _generate_enum_name_by_value(fb.LSHProjectionType)),\n    'kernel_type': (fb.LSTMOptions, _generate_enum_name_by_value(fb.LSTMKernelType)),\n    'combiner': (fb.EmbeddingLookupSparseOptions, _generate_enum_name_by_value(fb.CombinerType)),\n}\n\n_OptionEnumValueByNameMaps = {\n    'padding': (None, _generate_enum_value_by_name(fb.Padding)),\n    'fused_activation_function': (None, _generate_enum_value_by_name(fb.ActivationFunctionType)),\n    'weights_format': (\n        fb.FullyConnectedOptions, _generate_enum_value_by_name(fb.FullyConnectedOptionsWeightsFormat)),\n    'type': (fb.LSHProjectionOptions, _generate_enum_value_by_name(fb.LSHProjectionType)),\n    'kernel_type': (fb.LSTMOptions, _generate_enum_value_by_name(fb.LSTMKernelType)),\n    'combiner': (fb.EmbeddingLookupSparseOptions, _generate_enum_value_by_name(fb.CombinerType)),\n}\n\nBuiltinOperatorTypeByValue = _generate_enum_name_by_value(fb.BuiltinOperator)\nBuiltinOperatorValueByType = _generate_enum_value_by_name(fb.BuiltinOperator)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/reader.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom .helpers import *\nfrom ....model import *\ntry:\n    from flatbuffers import flexbuffers\n    has_flexbuffers = True\nexcept ImportError:\n    has_flexbuffers = False\nimport sys\nimport six\n\n\ndef _get_quantization(tensor):\n    quant = tensor.Quantization()\n    if quant is None:\n        return None\n\n    if quant.MinLength() == 0:\n        min = None\n    elif quant.MinLength() == 1:\n        min = float(quant.Min(0))\n    else:\n        min = quant.MinAsNumpy()\n\n    if quant.MaxLength() == 0:\n        max = None\n    elif quant.MaxLength() == 1:\n        max = float(quant.Max(0))\n    else:\n        max = quant.MaxAsNumpy()\n\n    if quant.ScaleLength() == 0:\n        scale = None\n    elif quant.ScaleLength() == 1:\n        scale = float(quant.Scale(0))\n    else:\n        scale = quant.ScaleAsNumpy()\n\n    if quant.ZeroPointLength() == 0:\n        zero_point = None\n    elif quant.ZeroPointLength() == 1:\n        zero_point = int(quant.ZeroPoint(0))\n    else:\n        zero_point = quant.ZeroPointAsNumpy()\n\n    if all(x is None for x in [min, max, scale, zero_point]):\n        return None\n    else:\n        return dict(min=min, max=max, zero_point=zero_point, scale=scale)\n\n\ndef _get_data_as_ndarray(buffer, dtype, shape):\n    return 
buffer.DataAsNumpy().view(dtype).reshape(shape) if buffer.DataLength() != 0 else None\n\n\ndef _get_options_starter_ender(optionsClass):\n    className = optionsClass.__name__\n    optionsModule = sys.modules[optionsClass.__module__]\n    moduleDict = optionsModule.__dict__\n    return moduleDict[className + 'Start'], moduleDict[className + 'End']\n\n\ndef _enumerate_attributes(optionsClass, optionsObject):\n    getters = enumerate_options_getters(optionsClass)\n    length_getters = enumerate_options_length_getters(optionsClass)\n\n    attribs = {}\n    for name, getter in getters.items():\n        length_getter = length_getters.get(name)\n\n        value = getter(optionsObject) if length_getter is None else \\\n            [getter(optionsObject, i) for i in range(length_getter(optionsObject))]\n\n        attribs[name] = substitute_enum_value_with_name(name, value, optionsClass)\n\n    return attribs\n\n\ndef _decode_custom_options(bytes):\n    root = flexbuffers.GetRoot(bytes)\n    assert root.IsMap\n    return {key: value for key, value in six.iteritems(root.AsMap.Value) if not key.startswith('_')}\n\n\ndef read_flatbuffers(filename):\n    with open(filename, 'rb') as file:\n        bytes = bytearray(file.read())\n\n    model = fb.Model.GetRootAsModel(bytes, 0)\n\n    if model.SubgraphsLength() != 1:\n        raise NotImplementedError('graphs with multiple sub-graphs are not supported')\n\n    subgraph = model.Subgraphs(0)\n    name = subgraph.Name()\n\n    graph = Graph(name.decode() if name is not None else None)\n\n    tensors = []\n    for i in range(subgraph.TensorsLength()):\n        tensor = subgraph.Tensors(i)\n        name = tensor.Name().decode()\n        shape = tuple(tensor.Shape(i) for i in range(tensor.ShapeLength()))\n        dtype = DtypeToNumpy[tensor.Type()]\n        buffer = model.Buffers(tensor.Buffer())\n        data = _get_data_as_ndarray(buffer, dtype, shape)\n        quant = _get_quantization(tensor)\n        tensors.append(Tensor(graph, 
name, shape, dtype, data, quant))\n\n    for i in range(subgraph.OperatorsLength()):\n        operator = subgraph.Operators(i)\n        operatorCode = model.OperatorCodes(operator.OpcodeIndex())\n        builtinCode = operatorCode.BuiltinCode()\n        opType = BuiltinOperatorTypeByValue[builtinCode] if builtinCode != fb.BuiltinOperator.CUSTOM else \\\n            operatorCode.CustomCode().decode('ascii')\n        custom = builtinCode == fb.BuiltinOperator.CUSTOM\n\n        options = operator.BuiltinOptions()\n        optionsClass = BuiltinOptionsClasses[operator.BuiltinOptionsType()]\n\n        inputs = [tensors[operator.Inputs(i)] for i in range(operator.InputsLength()) if operator.Inputs(i) != -1]\n        outputs = [tensors[operator.Outputs(i)] for i in range(operator.OutputsLength()) if operator.Outputs(i) != -1]\n\n        if options is not None and optionsClass is not None:\n            optionsObject = optionsClass()\n            optionsObject.Init(options.Bytes, options.Pos)\n            attribs = _enumerate_attributes(optionsClass, optionsObject)\n        elif custom:\n            bytes = operator.CustomOptionsAsNumpy().tobytes()\n            attribs = _decode_custom_options(bytes) if has_flexbuffers else {CustomOptionsKey: bytes}\n        else:\n            attribs = {}\n\n        Operation(graph, type=opType, custom=custom, attribs=attribs, inputs=tuple(inputs), outputs=tuple(outputs))\n\n    inputs = []\n    for i in range(subgraph.InputsLength()):\n        tensor_index = subgraph.Inputs(i)\n        inputs.append(tensors[tensor_index])\n\n    outputs = []\n    for i in range(subgraph.OutputsLength()):\n        tensor_index = subgraph.Outputs(i)\n        outputs.append(tensors[tensor_index])\n\n    graph.inputs = inputs\n    graph.outputs = outputs\n\n    return graph\n\n\nclass Reader(object):\n\n    def __call__(self, filename):\n        return read_flatbuffers(filename)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/io/tf/lite/writer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom .helpers import *\nimport flatbuffers\ntry:\n    from flatbuffers import flexbuffers\n    has_flexbuffers = True\nexcept ImportError:\n    has_flexbuffers = False\nimport numpy as np\n\n\ndef _CreateNumpyVector(builder, x):\n    if not isinstance(x, np.ndarray):\n        raise TypeError(\"non-numpy-ndarray passed to CreateNumpyVector\")\n\n    if x.dtype.kind not in ['b', 'i', 'u', 'f']:\n        raise TypeError(\"numpy-ndarray holds elements of unsupported datatype\")\n\n    if x.ndim > 1:\n        raise TypeError(\"multidimensional-ndarray passed to CreateNumpyVector\")\n\n    builder.StartVector(x.itemsize, x.size, x.dtype.alignment)\n\n    # Ensure little endian byte ordering\n    if x.dtype.str[0] != \"<\":\n        x = x.byteswap()\n\n    length = x.itemsize * x.size\n    builder.head -= length\n    builder.Bytes[builder.head: builder.head + length] = x.tobytes()\n\n    return builder.EndVector(x.size)\n\n\ndef _build_buffer(builder, bytes):\n    data = _CreateNumpyVector(builder, bytes)\n    fb.BufferStart(builder)\n    fb.BufferAddData(builder, data)\n    return fb.BufferEnd(builder)\n\n\ndef _build_tensor(builder, tensor, buffer_index):\n    name = builder.CreateString(tensor.name)\n    type = DtypeFromNumpy[tensor.data.dtype.type if isinstance(tensor.data, np.ndarray) 
else tensor.dtype]\n\n    fb.TensorStartShapeVector(builder, len(tensor.shape))\n    for s in reversed(tensor.shape):\n        builder.PrependInt32(s)\n    shape = builder.EndVector(len(tensor.shape))\n\n    buffer = buffer_index if tensor.data is not None else 0\n\n    quant = _build_quantization(builder, tensor.quant, type)\n\n    fb.TensorStart(builder)\n    fb.TensorAddName(builder, name)\n    fb.TensorAddShape(builder, shape)\n    fb.TensorAddType(builder, type)\n    fb.TensorAddBuffer(builder, buffer)\n    if quant is not None:\n        fb.TensorAddQuantization(builder, quant)\n    return fb.TensorEnd(builder)\n\n\ndef _ensure_numpy_array(x, dtype):\n    if isinstance(x, np.ndarray):\n        assert x.dtype == dtype\n        return x\n    else:\n        return np.array(x, dtype=dtype)\n\n\ndef _build_quantization(builder, quant, dtype):\n    if quant is None:\n        return None\n\n    min = quant.get('min')\n    max = quant.get('max')\n    zero_point = quant.get('zero_point')\n    scale = quant.get('scale')\n\n    if all(item is None or item == 0 for item in [min, max, zero_point, scale]):\n        return None\n\n    min = _CreateNumpyVector(builder, _ensure_numpy_array(min, dtype=np.float32)) if min is not None else None\n    max = _CreateNumpyVector(builder, _ensure_numpy_array(max, dtype=np.float32)) if max is not None else None\n    scale = _CreateNumpyVector(builder, _ensure_numpy_array(scale, dtype=np.float32)) if scale is not None else None\n    zero_point = _CreateNumpyVector(builder, _ensure_numpy_array(zero_point, dtype=np.int64)) if zero_point is not None else None\n\n    fb.QuantizationParametersStart(builder)\n    if dtype != fb.TensorType.INT32:\n        if min is not None:\n            fb.QuantizationParametersAddMin(builder, min)\n        if max is not None:\n            fb.QuantizationParametersAddMax(builder, max)\n    if scale is not None:\n        fb.QuantizationParametersAddScale(builder, scale)\n    if zero_point is not None:\n        
fb.QuantizationParametersAddZeroPoint(builder, zero_point)\n    return fb.QuantizationParametersEnd(builder)\n\n\ndef _build_operator_code(builder, operation):\n    builtinCode = BuiltinOperatorValueByType[operation.type] if not operation.custom else fb.BuiltinOperator.CUSTOM\n    customCode = builder.CreateString(operation.type) if operation.custom else None\n\n    fb.OperatorCodeStart(builder)\n    fb.OperatorCodeAddBuiltinCode(builder, builtinCode)\n    if customCode:\n        fb.OperatorCodeAddCustomCode(builder, customCode)\n    return fb.OperatorCodeEnd(builder)\n\n\ndef _build_operator_options(builder, attribs, optionsClass):\n    starter, ender = get_options_starter_ender(optionsClass)\n    adders = enumerate_options_adders(optionsClass)\n    vector_starters = enumerate_options_vector_starters(optionsClass)\n\n    vector_values = {}\n    for name, vector_starter in vector_starters.items():\n        value = attribs[name]\n        assert isinstance(value, (list, tuple)) and (len(value) == 0 or isinstance(value[0], int))\n        vector_starter(builder, len(value))\n        for i in reversed(value):\n            builder.PrependInt32(i)\n        vector_values[name] = builder.EndVector(len(value))\n\n    starter(builder)\n    for name, adder in adders.items():\n        if name == 'fused_activation_function' and name not in attribs:\n            value = 'NONE'\n        else:\n            value = attribs[name]\n            if isinstance(value, type):\n                value = DtypeFromNumpy[value]\n\n        value = vector_values.get(name, value)\n        value = substitute_enum_name_with_value(name, value, optionsClass)\n\n        adder(builder, value)\n\n    return ender(builder)\n\n\ndef _encode_custom_options(attribs):\n    builder = flexbuffers.Builder()\n    builder.MapFromElements(attribs)\n    return builder.Finish()\n\n\ndef _build_operator_custom_options(builder, attribs):\n    value = _encode_custom_options(attribs) if has_flexbuffers else 
attribs[CustomOptionsKey]\n\n    fb.OperatorStartCustomOptionsVector(builder, len(value))\n    for b in reversed(value):\n        builder.PrependUint8(b)\n    return builder.EndVector(len(value))\n\n\ndef _build_operator(builder, operation, op_code_index, tensor_index):\n    inputs = [tensor_index[tensor] for tensor in operation.inputs]\n    fb.OperatorStartInputsVector(builder, len(inputs))\n    for input in reversed(inputs):\n        builder.PrependInt32(input)\n    inputs = builder.EndVector(len(inputs))\n\n    outputs = [tensor_index[tensor] for tensor in operation.outputs]\n    fb.OperatorStartOutputsVector(builder, len(outputs))\n    for output in reversed(outputs):\n        builder.PrependInt32(output)\n    outputs = builder.EndVector(len(outputs))\n\n    attribs = {name: value for name, value in operation.attribs.items()}\n\n    optionsType = BuiltinOptionsByOperator[BuiltinOperatorValueByType.get(operation.type, fb.BuiltinOperator.CUSTOM)]\n    optionsClass = BuiltinOptionsClasses[optionsType]\n    options = _build_operator_options(builder, attribs, optionsClass) if optionsClass is not None else None\n    custom_options = _build_operator_custom_options(builder, attribs) if operation.custom else None\n\n    fb.OperatorStart(builder)\n    fb.OperatorAddOpcodeIndex(builder, op_code_index[operation.type])\n    fb.OperatorAddInputs(builder, inputs)\n    fb.OperatorAddOutputs(builder, outputs)\n    fb.OperatorAddBuiltinOptionsType(builder, optionsType)\n\n    if options:\n        fb.OperatorAddBuiltinOptions(builder, options)\n\n    if custom_options:\n        fb.OperatorAddCustomOptions(builder, custom_options)\n\n    return fb.OperatorEnd(builder)\n\n\n# https://github.com/google/flatbuffers/issues/4814\ndef FinishWithFileIdentifier(builder, rootTable, fid):\n    from flatbuffers import number_types as N\n    from flatbuffers import encode\n\n    if fid is None or len(fid) != 4:\n        raise Exception('fid must be 4 chars')\n\n    flags = N.Uint8Flags\n    
prepSize = 4\n    builder.Prep(builder.minalign, prepSize + len(fid))\n    for i in range(3, -1, -1):\n        builder.head = builder.head - flags.bytewidth\n        encode.Write(flags.packer_type, builder.Bytes, builder.Head(), ord(fid[i]))\n\n    return builder.Finish(rootTable)\n\n\ndef write_flatbuffers(graph, filename):\n    graph.sort()\n    builder = flatbuffers.Builder(0)\n\n    fb.BufferStartDataVector(builder, 0)\n    data = builder.EndVector(0)\n    fb.BufferStart(builder)\n    fb.BufferAddData(builder, data)\n    buffer = fb.BufferEnd(builder)\n\n    buffers = [buffer]\n    for tensor in graph.tensors:\n        if tensor.data is not None:\n            tensor_data = tensor.data\n            if not isinstance(tensor_data, np.ndarray):\n                tensor_data = np.array(tensor_data, dtype=tensor.dtype)\n            bytes = tensor_data.reshape([-1]).view(np.uint8)\n            buffers.append(_build_buffer(builder, bytes))\n\n    fb.ModelStartBuffersVector(builder, len(buffers))\n    for buffer in reversed(buffers):\n        builder.PrependUOffsetTRelative(buffer)\n    buffers = builder.EndVector(len(buffers))\n\n    buffer_index = 1\n\n    tensors = []\n    tensor_index = {}\n    for tensor in graph.tensors:\n        tensor_index[tensor] = len(tensors)\n        tensors.append(_build_tensor(builder, tensor, buffer_index))\n        if tensor.data is not None:\n            buffer_index += 1\n\n    fb.SubGraphStartTensorsVector(builder, len(tensors))\n    for tensor in reversed(tensors):\n        builder.PrependUOffsetTRelative(tensor)\n    tensors = builder.EndVector(len(tensors))\n\n    op_codes = []\n    op_code_index = {}\n    for operation in graph.operations:\n        if operation.type not in op_code_index:\n            op_code_index[operation.type] = len(op_codes)\n            op_codes.append(_build_operator_code(builder, operation))\n\n    fb.ModelStartOperatorCodesVector(builder, len(op_codes))\n    for op_code in reversed(op_codes):\n        
builder.PrependUOffsetTRelative(op_code)\n    op_codes = builder.EndVector(len(op_codes))\n\n    operators = []\n    for operation in graph.operations:\n        operators.append(_build_operator(builder, operation, op_code_index, tensor_index))\n\n    fb.SubGraphStartOperatorsVector(builder, len(operators))\n    for operator in reversed(operators):\n        builder.PrependUOffsetTRelative(operator)\n    operators = builder.EndVector(len(operators))\n\n    name = builder.CreateString(graph.name) if graph.name is not None else None\n\n    inputs = graph.inputs\n    fb.SubGraphStartInputsVector(builder, len(inputs))\n    for input in reversed(inputs):\n        builder.PrependInt32(tensor_index[input])\n    inputs = builder.EndVector(len(inputs))\n\n    outputs = graph.outputs\n    fb.SubGraphStartOutputsVector(builder, len(outputs))\n    for output in reversed(outputs):\n        builder.PrependInt32(tensor_index[output])\n    outputs = builder.EndVector(len(outputs))\n\n    fb.SubGraphStart(builder)\n    if name is not None:\n        fb.SubGraphAddName(builder, name)\n    fb.SubGraphAddTensors(builder, tensors)\n    fb.SubGraphAddOperators(builder, operators)\n    fb.SubGraphAddInputs(builder, inputs)\n    fb.SubGraphAddOutputs(builder, outputs)\n    subgraph = fb.SubGraphEnd(builder)\n\n    fb.ModelStartSubgraphsVector(builder, 1)\n    builder.PrependUOffsetTRelative(subgraph)\n    subgraphs = builder.EndVector(1)\n\n    fb.ModelStart(builder)\n    fb.ModelAddVersion(builder, OUTPUT_SCHEMA_VERSION)\n    fb.ModelAddBuffers(builder, buffers)\n    fb.ModelAddOperatorCodes(builder, op_codes)\n    fb.ModelAddSubgraphs(builder, subgraphs)\n    model = fb.ModelEnd(builder)\n\n    FinishWithFileIdentifier(builder, model, OUTPUT_FILE_IDENTIFIER)\n\n    with open(filename, 'wb') as file:\n        file.write(builder.Output())\n\n\nclass Writer(object):\n\n    def __call__(self, graph, filename):\n        write_flatbuffers(graph, filename)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/model/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .graph import Tensor, Operation, Graph\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/model/graph.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import division, print_function, absolute_import\n\nfrom collections.abc import Sequence\nfrom functools import reduce\n\nimport six\nimport typing\nimport numpy as np\n\n\n# noinspection PyProtectedMember\nclass Tensor:\n\n    def __init__(self,\n                 graph,             # type: Graph,\n                 name=None,         # type: typing.Optional[str]\n                 shape=None,        # type: typing.Optional[typing.Tuple[int, ...]]\n                 dtype=None,        # type: typing.Optional[type]\n                 data=None,         # type: typing.Union[None, np.ndarray, typing.Any]\n                 quant=None         # type: typing.Optional[typing.Dict[str, typing.Any]]\n                 ):\n        # type: (...)->None\n        self._graph = graph\n        self._producers = []\n        self._consumers = []\n\n        self.name = name            # type: typing.Optional[str]\n        self.shape = shape          # type: typing.Optional[typing.Tuple[int, ...]]\n        self.dtype = dtype          # type: typing.Optional[type]\n        self.data = data            # type: typing.Union[None, np.ndarray, typing.Any]\n        self.quant = quant or {}    # type: typing.Optional[typing.Dict[str, typing.Any]]\n\n        assert isinstance(graph, Graph)\n        graph._tensors.append(self)\n\n    def copy_with(self, graph=None, 
name=None, dtype=None, shape=None, data=None, quant=None):\n        return Tensor(graph=graph if graph is not None else self.graph,\n                      name=name if name is not None else self.name,\n                      dtype=dtype if dtype is not None else self.dtype,\n                      shape=shape if shape is not None else self.shape,\n                      data=data if data is not None else self.data,\n                      quant=quant if quant is not None else self.quant)\n\n    @property\n    def graph(self):\n        # type: ()->typing.Optional[Graph]\n        return self._graph\n\n    @property\n    def has_producer(self):\n        return len(self._producers) != 0\n\n    @property\n    def producers(self):\n        # type: ()->typing.List[Operation]\n        return self._producers\n\n    @property\n    def producer(self):\n        # type: ()->typing.Optional[Operation]\n        assert len(self._producers) <= 1\n        return self._producers[0] if len(self._producers) == 1 else None\n\n    @property\n    def has_consumer(self):\n        return len(self._consumers) != 0\n\n    @property\n    def consumers(self):\n        # type: ()->typing.List[Operation]\n        return self._consumers\n\n    @property\n    def consumer(self):\n        # type: ()->typing.Optional[Operation]\n        return self._consumers[0] if len(self._consumers) == 1 else None\n\n    @property\n    def name(self):\n        return self._name\n\n    @name.setter\n    def name(self, name):\n        assert name is None or isinstance(name, str)\n        self._name = name\n\n    @property\n    def shape(self):\n        return self._shape\n\n    @shape.setter\n    def shape(self, shape):\n        assert shape is None or isinstance(shape, (list, tuple))\n        assert shape is None or all(s is None or isinstance(s, int) for s in shape)\n        self._shape = tuple(shape) if shape is not None else None\n\n    @property\n    def dtype(self):\n        return self._dtype\n\n    
@dtype.setter\n    def dtype(self, dtype):\n        assert dtype is None or isinstance(dtype, type)\n        self._dtype = dtype\n\n    @property\n    def rank(self):\n        # type: ()->typing.Optional[int]\n        return len(self.shape) if self.shape is not None else None\n\n    @property\n    def volume(self):\n        # type: ()->typing.Optional[int]\n        return reduce((lambda x, y: x * y), self.shape) if self.shape is not None and \\\n                                                           all(s is not None for s in self.shape) else None\n\n    @property\n    def is_constant(self):\n        # type: ()->bool\n        return self.data is not None\n\n    def __repr__(self):\n        return self.name if self.name is not None else _hex_id(self)\n\n    def __str__(self):\n        return '{name}: {dtype}[{shape}]'.format(\n            name=self.name if self.name is not None else _hex_id(self),\n            dtype=self.dtype.__name__,\n            shape=', '.join(str(s) for s in self.shape) if self.shape else '...')\n\n\n_TensorListOrTupleT = typing.Union[typing.List[Tensor], typing.Tuple[Tensor, ...]]\n\n\n# noinspection PyProtectedMember\nclass Operation:\n\n    def __init__(self,\n                 graph,             # type: Graph\n                 type=None,         # type: typing.Optional[str]\n                 name=None,         # type: typing.Optional[str]\n                 attribs=None,      # type: typing.Dict[str, typing.Any]\n                 inputs=None,       # type: typing.Union[None, Tensor, _TensorListOrTuple]\n                 outputs=None,      # type: typing.Union[None, Tensor, _TensorListOrTuple]\n                 custom=False,      # type: bool\n                 ):\n        # type:(...)->None\n        self._graph = graph\n        self._inputs = tuple()\n        self._outputs = tuple()\n\n        assert name is None or isinstance(name, str)\n        if attribs is not None:\n            assert isinstance(attribs, dict)\n            assert 
all(isinstance(key, str) for key in six.iterkeys(attribs))\n            assert all(not isinstance(value, Tensor) for value in six.itervalues(attribs))\n\n        self.type = type                # type: typing.Optional[str]\n        self.name = name                # type: typing.Optional[str]\n        self.attribs = attribs or {}    # type: typing.Dict[str, typing.Any]\n        self.custom = custom            # type: bool\n\n        assert isinstance(graph, Graph)\n        graph._operations.append(self)\n\n        if inputs is not None:\n            self.inputs = inputs\n        if outputs is not None:\n            self.outputs = outputs\n\n    def copy_with(self, graph=None, type=None, name=None, attribs=None, inputs=None, outputs=None, custom=None):\n        return Operation(graph=graph if graph is not None else self.graph,\n                         type=type if type is not None else self.type,\n                         name=name if name is not None else self.name,\n                         attribs=attribs if attribs is not None else self.attribs,\n                         inputs=inputs if inputs is not None else self.inputs,\n                         outputs=outputs if outputs is not None else self.outputs,\n                         custom=custom if custom is not None else self.custom)\n\n    @property\n    def graph(self):\n        # type: ()->typing.Optional[Graph]\n        return self._graph\n\n    @property\n    def inputs(self):\n        # type: ()->_TensorListOrTupleT\n        return self._inputs\n\n    @property\n    def input(self):\n        # type: ()->Tensor\n        assert len(self._inputs) == 1\n        return self._inputs[0]\n\n    @inputs.setter\n    def inputs(self, tensors):\n        # type: (typing.Union[Tensor, _TensorListOrTupleT])->None\n        if isinstance(tensors, Tensor):\n            tensors = (tensors,)\n\n        for tensor in self._inputs:\n            assert self in tensor._consumers\n        for tensor in self._inputs:\n            
if self in tensor._consumers:\n                tensor._consumers.remove(self)\n\n        self._inputs = _ListView(tensors) if isinstance(tensors, list) else tensors\n        for tensor in tensors:\n            assert isinstance(tensor, Tensor), \"got {}\".format(type(tensor))\n            if self not in tensor._consumers:\n                tensor._consumers.append(self)\n\n    @property\n    def outputs(self):\n        # type: ()->_TensorListOrTupleT\n        return self._outputs\n\n    @property\n    def output(self):\n        # type: ()->Tensor\n        assert len(self._outputs) == 1\n        return self._outputs[0]\n\n    @outputs.setter\n    def outputs(self, tensors):\n        # type: (typing.Union[Tensor, _TensorListOrTupleT])->None\n\n        if isinstance(tensors, Tensor):\n            tensors = (tensors,)\n\n        for tensor in self._outputs:\n            assert self in tensor._producers\n            tensor._producers.remove(self)\n\n        self._outputs = _ListView(tensors) if isinstance(tensors, list) else tensors\n        for tensor in tensors:\n            assert isinstance(tensor, Tensor), \"got {}\".format(type(tensor))\n            assert self not in tensor._producers\n            tensor._producers.append(self)\n\n    @property\n    def type(self):\n        return self._type\n\n    @type.setter\n    def type(self, type):\n        assert type is None or isinstance(type, str), \"got '{}'\".format(type)\n        self._type = type\n\n    @property\n    def name(self):\n        return self._name\n\n    @name.setter\n    def name(self, name):\n        assert name is None or isinstance(name, str), \"got '{}'\".format(name)\n        self._name = name\n\n    def __repr__(self):\n        return self.type if self.type is not None else _hex_id(self)\n\n    def __str__(self):\n        return '{outputs} = {op}{{{attribs}}}({inputs})'.format(\n            op=self.type if self.type is not None else _hex_id(self),\n            inputs=', '.join(repr(tensor) for 
tensor in self._inputs),\n            outputs=', '.join(str(tensor) for tensor in self._outputs),\n            attribs=', '.join('{}={}'.format(key, value) for key, value in self.attribs.items()))\n\n\n# noinspection PyProtectedMember\nclass Graph:\n\n    def __init__(self, name=None):\n        # type:(typing.Optional[str])->None\n        self._operations = []\n        self._tensors = []\n        self._inputs = []\n        self._outputs = []\n        self._name = name\n\n    @property\n    def name(self):\n        return self._name\n\n    @name.setter\n    def name(self, name):\n        assert name is None or isinstance(name, str)\n        self._name = name\n\n    @property\n    def operations(self):\n        # type: ()->typing.Sequence[Operation]\n        return _ListView(self._operations)\n\n    @property\n    def tensors(self):\n        # type: ()->typing.Sequence[Tensor]\n        return _ListView(self._tensors)\n\n    @property\n    def inputs(self):\n        # type: ()->typing.Sequence[Tensor]\n        return _ListView(self._inputs)\n\n    @inputs.setter\n    def inputs(self, tensors):\n        # type: (_TensorListOrTupleT)->None\n        assert isinstance(tensors, (list, tuple))\n\n        self._inputs = tensors\n\n        for tensor in self._inputs:\n            assert isinstance(tensor, Tensor)\n\n    @property\n    def outputs(self):\n        # type: ()->typing.Sequence[Tensor]\n        return _ListView(self._outputs)\n\n    @outputs.setter\n    def outputs(self, tensors):\n        # type: (_TensorListOrTupleT)->None\n        assert isinstance(tensors, (list, tuple))\n\n        self._outputs = tensors\n\n        for tensor in self._outputs:\n            assert isinstance(tensor, Tensor)\n\n    def remove_tensor(self, tensor):\n        # type: (Tensor)->None\n        assert len(tensor.producers) == 0\n        assert len(tensor.consumers) == 0\n        assert tensor not in self._inputs\n        assert tensor not in self._outputs\n        
self._tensors.remove(tensor)\n        tensor._graph = None\n\n    def remove_tensors(self, tensors):\n        # type: (typing.Iterable[Tensor])->None\n        for tensor in tensors:\n            assert len(tensor.producers) == 0\n            assert len(tensor.consumers) == 0\n            assert tensor not in self._inputs\n            assert tensor not in self._outputs\n        self._tensors = [tensor for tensor in self._tensors if tensor not in tensors]\n        for tensor in tensors:\n            tensor._graph = None\n\n    def remove_operation(self, operation, unlink=False):\n        # type: (Operation, bool)->None\n        if unlink:\n            operation.inputs = []\n            operation.outputs = []\n        else:\n            assert len(operation.inputs) == 0\n            assert len(operation.outputs) == 0\n        self._operations.remove(operation)\n        operation._graph = None\n\n    def remove_operations(self, operations, unlink=False):\n        # type: (typing.Iterable[Operation], bool)->None\n        operations = operations if isinstance(operations, set) else set(operations)\n        for operation in operations:\n            if unlink:\n                operation.inputs = []\n                operation.outputs = []\n            else:\n                assert len(operation.inputs) == 0\n                assert len(operation.outputs) == 0\n        self._operations = [op for op in self._operations if op not in operations]\n        for operation in operations:\n            operation._graph = None\n\n    def is_unique(self):\n        return all(len(t.producers) <= 1 for t in self.tensors)\n\n    def is_sorted(self):\n        seen = set()\n        for op in self.operations:\n            for tensor in op.inputs:\n                for producer in tensor.producers:\n                    if producer not in seen:\n                        return False\n            seen.add(op)\n        return True\n\n    def sort(self, offset=0):\n        count = 
len(self._operations)\n        sorted = {op: False for op in self._operations[offset:]}\n        for idx in range(offset, count):\n            i = idx\n            while i < count and not all(sorted.get(tensor.producer, True) for tensor in self._operations[i].inputs):\n                i += 1\n            if i == count:  # the graph contains a loop\n                return False\n            while i > idx:\n                self._operations[i-1], self._operations[i] = self._operations[i], self._operations[i-1]\n                i -= 1\n            sorted[self._operations[i]] = True\n        return True\n\n    def move_operation(self, at_idx, to_idx):\n        self._operations.insert(to_idx, self._operations.pop(at_idx))\n\n    def reverse(self, offset=0):\n        self._operations[offset:] = reversed(self._operations[offset:])\n\n    def __repr__(self):\n        return self.name if self.name is not None else _hex_id(self)\n\n    def __str__(self):\n        return \"graph {name}({inputs}) -> ({outputs})\".format(\n            name=repr(self),\n            inputs=', '.join(repr(input) for input in self.inputs),\n            outputs=', '.join(repr(input) for input in self.outputs),\n        )\n\n    def print(self, file=None):\n        print(f'graph {repr(self)} {{', file=file)\n\n        print(f'\\tinputs {{', file=file)\n        for tensor in self.inputs:\n            print('\\t\\t' + str(tensor) + ',', file=file)\n        print(f'\\t}}', file=file)\n\n        print(f'\\toutputs {{', file=file)\n        for tensor in self.outputs:\n            print('\\t\\t' + str(tensor) + ',', file=file)\n        print(f'\\t}}', file=file)\n\n        print(f'\\tparams {{', file=file)\n        for tensor in self.tensors:\n            if tensor.producer is None and tensor.data is not None:\n                print('\\t\\t' + str(tensor) + ',', file=file)\n        print(f'\\t}}', file=file)\n\n        print(f'\\toperators {{', file=file)\n        for operation in self._operations:\n        
    print('\\t\\t' + str(operation) + ',', file=file)\n    print(f'\\t}}', file=file)\n\n    print(f'}}', file=file)\n\n    def assert_consistent(self):\n        assert len(self.tensors) == len(set(self.tensors))\n        assert len(self.operations) == len(set(self.operations))\n        for t in self.tensors:\n            assert t._graph == self\n            assert all(t in consumer.inputs for consumer in t.consumers)\n            assert all(t in producer.outputs for producer in t.producers)\n            assert all(consumer in self.operations for consumer in t.consumers)\n            assert all(producer in self.operations for producer in t.producers)\n        for op in self.operations:\n            assert op._graph == self\n            assert all(op in t.consumers for t in op.inputs)\n            assert all(op in t.producers for t in op.outputs)\n        for t in self.inputs:\n            assert t in self.tensors\n        for t in self.outputs:\n            assert t in self.tensors\n\n\nclass _ListView(Sequence):\n\n    def __init__(self, lst):\n        self._list = lst\n\n    def __len__(self):\n        return self._list.__len__()\n\n    def __getitem__(self, item):\n        return self._list.__getitem__(item)\n\n    def __iter__(self):\n        return self._list.__iter__()\n\n    def __repr__(self):\n        return self._list.__repr__()\n\n    def __str__(self):\n        return self._list.__str__()\n\n    def __contains__(self, item):\n        return self._list.__contains__(item)\n\n    def __reversed__(self):\n        return reversed(self._list)\n\n\ndef _hex_id(obj):\n    return '@' + hex(id(obj))[2:]\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/model/utils.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections.abc import Iterable\n\n\ndef _split_counter_from_name(str):\n    if len(str) > 0 and not str[-1].isdigit():\n        return str, None\n\n    i = len(str)\n    while i > 0:\n        if not str[i-1].isdigit():\n            return str[:i], int(str[i:])\n        i -= 1\n    return None, int(str)\n\n\ndef generate_tensor_names_from_op_type(graph, keep_io_names=False):\n    used_names = set()\n    if keep_io_names:\n        used_names.update(tensor.name for tensor in graph.inputs if tensor.name is not None)\n        used_names.update(tensor.name for tensor in graph.outputs if tensor.name is not None)\n\n    op_counts = {}\n\n    for op in graph.operations:\n        for tensor in op.outputs:\n            if keep_io_names and tensor.name is not None and (tensor in graph.inputs or tensor in graph.outputs):\n                continue\n\n            idx = op_counts.get(op.type, 0) + 1\n            while op.type + str(idx) in used_names:\n                idx += 1\n\n            op_counts[op.type] = idx\n            tensor.name = op.type + str(idx)\n\n    for tensor in graph.tensors:\n        if tensor.producer is None:\n            tensor.name = None\n\n\ndef generate_missing_tensor_names_from_op_type(graph):\n    counters = {}\n    for tensor in graph.tensors:\n        if tensor.name is not None:\n            name, count = 
_split_counter_from_name(tensor.name)\n            if name is not None and count is not None:\n                counters[name] = max(counters.get(name, 0), count)\n\n    for tensor in graph.tensors:\n        if tensor.name is None and tensor.producer is not None:\n            op = tensor.producer\n            idx = counters.get(op.type, 0) + 1\n            counters[op.type] = idx\n            tensor.name = op.type + str(idx)\n\n\ndef generate_op_names_from_op_type(graph):\n    op_counts = {}\n    for op in graph.operations:\n        idx = op_counts.get(op.type, 0) + 1\n        op_counts[op.type] = idx\n        op.name = op.type + str(idx)\n\n\ndef replace_tensor_in_graph_inputs(graph, old_tensor, new_tensor):\n    graph.inputs = [new_tensor if t is old_tensor else t for t in graph.inputs]\n\n\ndef replace_tensor_in_graph_outputs(graph, old_tensor, new_tensor):\n    graph.outputs = [new_tensor if t is old_tensor else t for t in graph.outputs]\n\n\ndef replace_tensor_in_consumers(graph, old_tensor, new_tensor):\n    for consumer in list(old_tensor.consumers):     # copy list to avoid changes during iteration\n        sequence = tuple if isinstance(consumer.inputs, tuple) else list\n        consumer.inputs = sequence(new_tensor if t is old_tensor else t for t in consumer.inputs)\n\n    replace_tensor_in_graph_outputs(graph, old_tensor, new_tensor)\n\n\ndef replace_tensor_in_producers(graph, old_tensor, new_tensor):\n    for producer in list(old_tensor.producers):     # copy list to avoid changes during iteration\n        sequence = tuple if isinstance(producer.outputs, tuple) else list\n        producer.outputs = sequence(new_tensor if t is old_tensor else t for t in producer.outputs)\n\n    replace_tensor_in_graph_inputs(graph, old_tensor, new_tensor)\n\n\ndef bypass_and_remove(graph, op, remove_input_not_output=False):\n    assert len(op.outputs) == 1 and len(op.inputs) == 1\n\n    op_input = op.input\n    op_output = op.output\n\n    graph.remove_operation(op, 
unlink=True)\n\n    if remove_input_not_output:\n        replace_tensor_in_consumers(graph, op_input, op_output)\n        replace_tensor_in_producers(graph, op_input, op_output)\n        graph.remove_tensor(op_input)\n    else:\n        replace_tensor_in_consumers(graph, op_output, op_input)\n        replace_tensor_in_producers(graph, op_output, op_input)\n        graph.remove_tensor(op_output)\n\n\ndef replace_chain(graph, types, func, allow_forks=False):\n    def _match_type(type, template):\n        return type == template if isinstance(template, str) else\\\n            type in template if isinstance(template, Iterable) else False\n\n    def _match_link(op, template, is_last):\n        return _match_type(op.type, template) and (len(op.outputs) == 1 or is_last)\n\n    def _match_chain(op, types, allow_forks):\n        if not _match_link(op, types[0], is_last=len(types) == 1):\n            return None\n\n        chain = [op]\n        tensor = op.output\n        for idx, type in enumerate(types[1:]):\n            is_last = idx + 1 == len(types) - 1\n\n            if not allow_forks and len(tensor.consumers) > 1:\n                return None\n\n            op = next((consumer for consumer in tensor.consumers if _match_link(consumer, type, is_last)), None)\n            if op is None:\n                return None\n\n            chain.append(op)\n            if not is_last:\n                tensor = op.output\n\n        return chain\n\n    changed = False\n    i = 0\n    while i < len(graph.operations):\n        count = len(graph.operations)\n        chain = _match_chain(graph.operations[i], types, allow_forks)\n        if chain is not None and func(*chain) is not False:\n            k = i\n            while graph.operations[k] is not chain[-1]:\n                k += 1\n\n            for j in range(count, len(graph.operations)):\n                graph.move_operation(j, k)\n                k += 1\n\n            offs = len(chain) - 1\n            while offs > 0 and 
len(chain[offs - 1].output.consumers) == 1:\n                offs -= 1\n\n            interns = [op.output for op in chain[offs:-1]]\n            graph.remove_operations(chain[offs:], unlink=True)\n            graph.remove_tensors(interns)\n            changed = True\n        else:\n            i += 1\n    return changed\n\n\ndef remove_unreachable(graph):\n    visited = {tensor.producer for tensor in graph.outputs}\n    queue = list(visited)\n\n    k = 0\n    while k < len(queue):\n        op = queue[k]\n        k += 1\n\n        for tensor in op.inputs:\n            if tensor.producer is not None and tensor.producer not in visited and \\\n                    (tensor not in graph.inputs or len(tensor.producer.inputs) == 0):\n                visited.add(tensor.producer)\n                queue.append(tensor.producer)\n\n    graph.remove_operations({op for op in graph.operations if op not in visited}, unlink=True)\n    graph.remove_tensors({tensor for tensor in graph.tensors\n                          if len(tensor.producers) == 0 and len(tensor.consumers) == 0\n                          and tensor not in graph.inputs and tensor not in graph.outputs})\n\n\ndef remove_dynamic(graph):\n    for tensor in graph.inputs:\n        if tensor.shape is None or any(s is None for s in tensor.shape):\n            return False\n\n    dynamic_tensors = {tensor for tensor in graph.tensors if tensor.shape is None or any(s is None for s in tensor.shape)}\n    dynamic_ops = {tensor.producer for tensor in dynamic_tensors}\n\n    queue = list(dynamic_ops)\n\n    k = 0\n    while k < len(queue):\n        op = queue[k]\n        k += 1\n\n        for tensor in op.outputs:\n            dynamic_tensors.add(tensor)\n            for op in tensor.consumers:\n                if op not in dynamic_ops:\n                    dynamic_ops.add(op)\n                    queue.append(op)\n\n    kept_outputs = [tensor for tensor in graph.outputs if tensor not in dynamic_tensors]\n    new_outputs = 
kept_outputs + [tensor for tensor in graph.tensors\n                                  if all(op in dynamic_ops for op in tensor.consumers) and tensor not in dynamic_tensors]\n\n    graph.outputs = kept_outputs\n    graph.remove_operations(dynamic_ops, unlink=True)\n    graph.remove_tensors(dynamic_tensors)\n    graph.outputs = new_outputs\n\n    return True\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/operation_mapping.md",
    "content": "# TensorFlow\n\nThe following table lists the correspondence between operations in TensorFlow and NNEF.\n\n| TensorFlow | NNEF\n| --- | ---\n| tf.Variable | variable\n| tf.get_variable | variable\n| tf.placeholder | external\n| tf.constant | constant\n| tf.zeros | constant\n| tf.ones | constant\n| tf.zeros_like | constant\n| tf.ones_like | constant\n| tf.concat | concat\n| tf.split | split\n| tf.stack | stack\n| tf.unstack | unstack\n| tf.reshape | reshape\n| tf.squeeze | squeeze\n| tf.expand_dims | unsqueeze\n| tf.transpose | transpose\n| tf.add | add\n| tf.subtract | sub\n| tf.multiply | mul\n| tf.divide | div\n| tf.pow | pow\n| tf.logical_and | and\n| tf.logical_or | or\n| tf.logical_not | not\n| tf.negative | neg\n| tf.identity | copy\n| tf.abs | abs\n| tf.sign | sign\n| tf.exp | exp\n| tf.log | log\n| tf.sqrt | sqrt\n| tf.rsqrt | rsqrt\n| tf.square | sqr\n| tf.floor | floor\n| tf.ceil | ceil\n| tf.round | round\n| tf.where | select\n| tf.greater | gt\n| tf.greater_equal | ge\n| tf.less | lt\n| tf.less_equal | le\n| tf.equal | eq\n| tf.not_equal | ne\n| tf.minimum | min\n| tf.maximum | max\n| tf.assign | update\n| tf.reduce_sum | sum_reduce\n| tf.reduce_mean | mean_reduce\n| tf.reduce_max | max_reduce\n| tf.argmax | argmax_reduce\n| tf.matmul | matmul\n| tf.add_n | add_n\n| tf.sigmoid | sigmoid\n| tf.nn.sigmoid | sigmoid\n| tf.tanh | tanh\n| tf.nn.tanh | tanh\n| tf.nn.elu | elu\n| tf.nn.relu | relu\n| tf.nn.softsign | softsign\n| tf.nn.softplus | softplus\n| tf.nn.conv1d | conv\n| tf.nn.conv2d | conv\n| tf.nn.conv3d | conv\n| tf.nn.convolution | conv\n| tf.nn.conv2d_transpose | deconv\n| tf.nn.conv3d_transpose | deconv\n| tf.nn.depthwise_conv2d | conv\n| tf.nn.depthwise_conv2d_native | conv\n| tf.nn.separable_conv2d | separable_conv\n| tf.nn.max_pool | max_pool\n| tf.nn.max_pool_with_argmax | max_pool_with_index\n| tf.nn.avg_pool | avg_pool\n| tf.nn.bias_add | add\n| tf.nn.lrn | local_response_normalization\n| tf.nn.local_response_normalization 
| local_response_normalization\n| tf.nn.batch_normalization | batch_normalization\n| tf.nn.fused_batch_norm | batch_normalization\n| tf.nn.l2_normalize | l2_normalization\n| tf.nn.softmax | softmax\n| tf.nn.moments | moments\n| tf.image.resize_images | multilinear_upsample\n|                        | nearest_upsample\n|                        | nearest_downsample\n|                        | area_downsample\n| tf.image.resize_bilinear | multilinear_upsample\n| tf.image.resize_nearest_neighbor | nearest_upsample\n|                                  | nearest_downsample\n| tf.image.resize_area | area_downsample\n| tf.sin | sin\n| tf.cos | cos\n| tf.pad | pad\n| tf.tile | tile\n| tf.reduce_any | any_reduce\n| tf.reduce_all | all_reduce\n\n# Caffe\nThe following table lists the correspondence between operations in Caffe and NNEF.\n\n| Caffe | NNEF | Notes\n| --- | --- | ---\n| Input | external\n| Convolution | conv\n| Pooling | max_pool | if pool == MAX and not global_pooling\n|         | avg_pool | if pool == AVE and not global_pooling\n|         | max_reduce | if pool == MAX and global_pooling\n|         | mean_reduce | if pool == AVE and global_pooling\n| Crop | slice\n| Deconvolution | multilinear_upsample | if weight_filler.type == 'bilinear' and depth-wise\n|               | deconv | otherwise\n| InnerProduct | linear | if bias_term and not transpose and axis == 1\n|              | add(matmul) | if bias_term and (transpose or axis != 1)\n|              | matmul | if not bias_term\n|              | | + reshape if axis != input-rank - 1 <br> + unsqueeze if axis == 0\n| Dropout | | skipped in inference\n| LRN | local_response_normalization\n| MVN | local_contrast_normalization | if normalize_variance \n|     | local_mean_normalization | if not normalize_variance\n| BatchNorm | batch_normalization | scale factor merged into mean and variance if not 1 <br> merged with following scale layer if any\n| ReLU | relu | if negative_slope == 0\n|      | leaky_relu | if 
negative_slope != 0\n| PReLU | prelu\n| ELU | elu | if alpha == 1\n|     | select(x > 0.0, x, alpha * (exp(x) - 1.0)) | if alpha != 1\n| Sigmoid | sigmoid\n| TanH | tanh\n| AbsVal | abs\n| Power(a, b, n) | pow(a * x + b, n) | '*' or '+' omitted if the corresponding parameter is 1 or 0\n| Exp(base, a, b) | exp(a * x + b)       | if base == -1\n|                 | pow(base, a * x + b) | if base != -1\n|                 |                      | '*' or '+' omitted if the corresponding parameter is 1 or 0\n| Log(base, a, b) | log(a * x + b)             | if base == -1\n|                 | log2(a * x + b)            | if base == 2\n|                 | log(a * x + b) / log(base) | otherwise\n|                 |                            | '*' or '+' omitted if the corresponding parameter is 1 or 0\n| BNLL | softplus\n| Threshold(x, t) | select(x > t, 1.0, 0.0)\n| Bias(x) | add(x, weight) | + unsqueeze if axis > 0\n| Scale(x) | mul(x + bias, weight) | '+' omitted if no bias_term <br> + unsqueeze if axis > 0  \n| Flatten | reshape\n| Reshape | reshape\n| Split | copy_n\n| Concat | concat\n| Slice | split\n| Eltwise | x_1 * ... * x_n | if operation == PROD\n|         | x_1 + ... + x_n | if operation == SUM and coeff == []\n|         | coeff_1 * x_1 + ... + coeff_n * x_n | if operation == SUM and coeff != []\n|         | max(x_1, ... , x_n) | if operation == MAX\n| Reduction | squeeze(sum_reduce) * coeff | if operation == SUM\n|           | squeeze(sum_reduce(abs)) * coeff | if operation == ASUM\n|           | squeeze(sum_reduce(sqr)) * coeff | if operation == SUMSQ\n|           | squeeze(mean_reduce) * coeff | if operation == MEAN\n|           |                              | '*' omitted if coeff == 1\n| Silence | | skipped in inference\n| ArgMax | argmax_reduce | if top_k == 1 and out_max_val == false\n| Softmax | softmax\n\n# Caffe2\n\nThe following tables show the correspondence between operations in Caffe2 and NNEF.\n\nAll operations without outputs (e.g. 
Assert) are stripped from the graph.\n\nOnly the NCHW versions of the operations are supported (as opposed to NHWC).\n\n**Normal operations:**\n\n| Caffe2 | NNEF | Notes\n| --- | --- | ---\n| Abs | abs\n| Add | add | + unsqueeze if axis != 0\n| And | and | + unsqueeze if axis != 0\n| ArgMax | argmax_reduce | + squeeze if not keepdims\n| ArgMin | argmin_reduce | + squeeze if not keepdims\n| AveragePool <br> AveragePool1D <br> AveragePool2D <br> AveragePool3D | avg_pool | if not global_pooling\n| | mean_reduce | if global_pooling\n| BatchMatMul | matmul | + reshape if input ranks are not equal or less than 2\n| Cast | select | logical to scalar or integer\n|      | ne | scalar to logical\n|      | copy | cast to same type, may be optimized away\n| Ceil | ceil\n| ChannelShuffle | reshape(transpose(reshape))\n| Clip | clamp | if both min and max are given\n| | min | if only max is given\n| | max | if only min is given\n| | copy | if neither min nor max is given, may be optimized away\n| Concat <br> DepthConcat <br> Append | concat\n| Conditional | select\n| Conv <br> Conv1D <br> Conv2D <br> Conv3D | conv\n| ConvTranspose | deconv\n| Copy <br> CopyFromCPUInput <br> CopyOnDeviceLike <br> EnsureCPUOutput <br> StopGradient | copy | may be optimized away\n| Cos | cos\n| Div | div | + unsqueeze if axis != 0\n| DotProduct | mul | + sum_reduce + squeeze if input-rank > 1\n| Dropout | copy | may be optimized away\n| ElementwiseLinear(x, w, b) | x * w + b | + reshape if X.rank != 2 or axis != 1\n| EQ | eq | + unsqueeze if axis != 0\n| FC | linear | + reshape if X.rank != 2 or W.rank != 2 or axis != 1 or axis_w != 1\n| FCTransposed | add(matmul) | + reshape if X.rank != 2 or W.rank != 2 or axis != 1 or axis_w != 1\n| Flatten | reshape\n| FlattenToVec | reshape\n| Floor | floor\n| GE | ge | + unsqueeze if axis != 0\n| GT | gt | + unsqueeze if axis != 0\n| L1Distance(a, b) | abs(a-b) | + sum_reduce + squeeze if input-rank > 1\n| LE | le | + unsqueeze if axis != 0\n| LT | lt | + 
unsqueeze if axis != 0\n| LayerNorm(x) | mean_, std_ = moments(x);<br> y = (x - mean_) / sqrt(std_ + epsilon);<br> mean=squeeze(mean_);<br> std=squeeze(sqrt(std_ + epsilon))\n| LeakyRelu | leaky_relu\n| Log | log\n| Logit(x) | x_ = clamp(x, eps, 1.0-eps);<br> y = log(x_ / (1 - x_))\n| LpNorm | sum_reduce(abs) | if p = 1 and not average\n| | mean_reduce(abs) | if p = 1 and average\n| | sum_reduce(sqr) | if p = 2 and not average\n| | mean_reduce(sqr) | if p = 2 and average\n| | | + reshape if input-rank != 1\n| LpPool(x) | pow(box(pow(abs(x), p)), 1.0/p)\n| MatMul | matmul | + reshape if A.rank != 2 or B.rank != 2 or axis_a != 1 or axis_b != 1\n| Max | max, \\[max, ...\\] | if input-count >= 2\n| | copy | if input-count == 1, may be optimized away\n| MaxPool <br> MaxPool1D <br> MaxPool2D <br> MaxPool3D | max_pool | if not global_pooling\n| | max_reduce | if global_pooling\n| MaxPoolWithIndex | max_pool_with_index | Caffe2 supports it only on GPU\n| Mean | div(add_n) | if input-count >= 3\n| | div(add) | if input-count == 2\n| | copy | if input-count == 1, may be optimized away\n| MergeDim | reshape\n| Min | min, \\[min, ...\\] | if input-count >= 2\n| | copy | if input-count == 1, may be optimized away\n| Mul | mul | + unsqueeze if axis != 0\n| NE | ne | + unsqueeze if axis != 0\n| Negative | neg\n| Normalize | l2_normalization\n| NormalizeL1 | l1_normalization\n| Not | not\n| Or | or | + unsqueeze if axis != 0\n| PadImage | pad\n| PRelu | prelu\n| Pow | pow | + unsqueeze if axis != 0\n| PrependDim | reshape\n| ReduceMin | min_reduce | + squeeze if not keepdims\n| ReduceMax <br> ReduceFrontMax <br> ReduceBackMax <br> ColwiseMax <br> RowwiseMax | max_reduce | + squeeze if not keepdims\n| ReduceSum <br> ReduceFrontSum <br> ReduceBackSum <br> ReduceTailSum <br> SumElements | sum_reduce | + squeeze if not keepdims\n| ReduceMean <br> ReduceFrontMean <br> ReduceBackMean | mean_reduce | + squeeze if not keepdims\n| ReduceL1 | sum_reduce(abs) | + squeeze if not keepdims\n| 
ReduceL2 | sqrt(sum_reduce(sqr)) | + squeeze if not keepdims\n| Relu | relu\n| Reshape | reshape | if shape parameter is constant or the result of Shape or the 2nd result of Reshape (old_shape)\n| ResizeLike | reshape\n| ResizeNearest | nearest_upsample | if upsample (or same size) in both dimensions\n| | nearest_downsample | if downsample (or same size) in both dimensions\n| | nearest_upsample(nearest_downsample) | if downsample in one dimension and upsample in the other\n| | copy | if output-shape = input-shape, may be optimized away\n| Scale | mul\n| RowMul(x, w) | mul | + reshape if w.rank != 1\n| Selu(x, alpha, scale) | select(x > 0, x, exp(x) * alpha - alpha) * scale \n| Sigmoid | sigmoid\n| Sign | sign\n| Sin | sin\n| Slice | slice\n| Softsign(x) | x / (abs(x) + 1)\n| Split | split | if split parameter is constant or the 2nd result of Concat (split_info)\n| Sqr | sqr\n| Sqrt | sqrt\n| SquaredL2Distance | (x - y) ^ 2 / 2 | + sum_reduce + squeeze if input-rank > 1\n| Squeeze | squeeze\n| StumpFunc(x, threshold, low_value, high_value) | select(x > threshold, high_value, low_value)\n| Sub | sub | + unsqueeze if axis != 0\n| Sum | add_n | if input-count >= 3\n| | add | if input-count == 2\n| | copy | if input-count == 1, may be optimized away\n| SumSqrElements | squeeze(sum_reduce(sqr)) | if not average\n| | squeeze(mean_reduce(sqr)) | if average\n| SumReduceLike | sum_reduce and/or squeeze | if output-shape != input-shape\n| | copy | if output-shape = input-shape, may be optimized away\n| Summarize(x) | min_ = min_reduce(x);<br> max_ = max_reduce(x);<br> mean_, std_ = moments(x);<br>min = reshape(min_);<br> max = reshape(max_);<br>mean = reshape(mean_);<br> std = reshape(std_);<br>y = concat(\\[min, max, mean, sqrt(std * N / (N - 1))\\]) | where N = count of x\n| Swish(x) | x / (1 + exp(-x))\n| Tanh | tanh\n| ThresholdedRelu | select(x > alpha, x, 0.0)\n| Tile | tile\n| Transpose <br> NCHW2NHWC <br> NHWC2NCHW | transpose\n| Where | select\n| Xor(x, y) | 
or(and(x, not(y)), and(y, not(x))) | + unsqueeze if axis != 0\n\n**Constants:** These operations/tensors are converted to constants.\n\n| Caffe2 | NNEF | Notes\n| --- | --- | ---\n| Shape | constant\\<integer\\> | Can be used as Reshape's 2nd input (shape)\n| Size | constant\\<integer\\>\n| Reshape's 2nd output (old_shape) | constant\\<integer\\> | Can be used as Reshape's 2nd input (shape) \n| Concat's 2nd output (split_info) | constant\\<integer\\> | Can be used as Split's 2nd input (split)\n| Range | constant\\<scalar\\>\n\n**Variables:** These operations (in the param initializer network) are converted to variable tensors.\n\n| Caffe2 | NNEF | Representation\n| --- | --- | ---\n| GivenTensorFill | variable\\<scalar\\> | float32\n| GivenTensorDoubleFill | variable\\<scalar\\> | float64\n| GivenTensorInt16Fill | variable\\<integer\\> | int16\n| GivenTensorIntFill | variable\\<integer\\> | int32\n| GivenTensorInt64Fill | variable\\<integer\\> | int64\n| GivenTensorBoolFill | variable\\<logical\\> | bool\n\n# ONNX\n\nThe following table lists the correspondence between operations in ONNX and NNEF.\n\n| ONNX | NNEF | Notes\n| --- | --- | ---\n| Abs | abs\n| Acos | acos\n| Acosh | acosh\n| Add | add\n| And | and\n| ArgMax | argmax_reduce\n| ArgMin | argmin_reduce\n| Asin | asin\n| Asinh | asinh\n| Atan | atan\n| Atanh | atanh\n| AveragePool | avg_pool\n| BatchNormalization | batch_normalization\n| Cast | select | logical to scalar or integer\n|      | ne | scalar to logical\n|      | copy | to same type\n| Ceil | ceil\n| Clip | clamp\n| Compress | -\n| Concat | concat\n| Constant | constant\n| ConstantOfShape | constant\n| Conv | conv\n| ConvTranspose | deconv\n| Cos | cos\n| Cosh | cosh\n| DepthToSpace | reshape(transpose(reshape))\n| Div | div\n| Dropout | copy\n| Elu | elu | + arithmetic when alpha != 1.0\n| Equal | eq\n| Erf | erf\n| Exp | exp\n| Expand | tile\n| EyeLike | -\n| Flatten | reshape\n| Floor | floor\n| GRU | -\n| Gather | gather\n| Gemm | matmul\n| 
GlobalAveragePool | mean_reduce\n| GlobalLpPool | sum_reduce(abs) | if p == 1\n|              | sqrt(sum_reduce(sqr)) | if p == 2\n| GlobalMaxPool | max_reduce\n| Greater | gt\n| HardSigmoid | clamp(add(mul))\n| HardMax | -\n| Identity | copy\n| If | -\n| InstanceNormalization | div(moments) | + further arithmetic\n| IsNan | -\n| LRN | local_response_normalization\n| LSTM | -\n| LeakyRelu | leaky_relu\n| Less | lt\n| Log | log\n| LogSoftmax | log(softmax)\n| Loop | -\n| LpNormalization | l1_normalization | if p == 1\n|                 | l2_normalization | if p == 2\n| LpPool | avg_pool(abs) | if p == 1\n|        | sqrt(avg_pool(sqr)) | if p == 2\n| MatMul | matmul\n| Max | max\n| MaxPool | max_pool\n| MaxRoiPool | max_roi_pool\n| MaxUnpool | desample\n| Mean | div(add)\n| Min | min\n| Mul | mul\n| Multinomial | -\n| Neg | neg\n| Not | not\n| OneHot | -\n| Or | or\n| PRelu | prelu\n| Pad | pad\n| Pow | pow\n| RNN | -\n| RandomNormal | -\n| RandomNormalLike | -\n| RandomUniform | -\n| RandomUniformLike | -\n| Reciprocal | rcp\n| ReduceL1 | sum_reduce(abs)\n| ReduceL2 | sqrt(sum_reduce(sqr))\n| ReduceLogSum | log(sum_reduce)\n| ReduceLogSumExp | log(sum_reduce(exp))\n| ReduceMax | max_reduce\n| ReduceMean | mean_reduce\n| ReduceMin | min_reduce\n| ReduceProd | -\n| ReduceSum | sum_reduce\n| ReduceSumSquare | sum_reduce(sqr)\n| Relu | relu\n| Reshape | reshape\n| Scan | -\n| Scatter | -\n| Selu | selu\n| Shape | constant | if can be evaluated\n| Shrink | -\n| Sigmoid | sigmoid\n| Sign | sign\n| Sin | sin\n| Sinh | sinh\n| Size | constant | if can be evaluated\n| Slice | slice\n| Softmax | softmax\n| Softplus | softplus\n| Softsign | div(x,1+abs(x))\n| SpaceToDepth | -\n| Split | split\n| Sqrt | sqrt\n| Squeeze | squeeze\n| Sub | sub\n| Sum | add\n| Tan | tan\n| Tanh | tanh\n| Tile | tile\n| TopK | -\n| Transpose | transpose\n| Unsqueeze | unsqueeze\n| Upsample | multilinear_upsample\n|          | nearest_upsample\n| Where | select\n| Xor | 
or(and(x,not(y)),and(y,not(x)))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/optimization/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/optimization/nnef_optimizer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom ..model.utils import bypass_and_remove, replace_chain\nfrom ..model.utils import generate_tensor_names_from_op_type, generate_missing_tensor_names_from_op_type\nfrom ..model.graph import *\n\n\nclass Optimizer:\n\n    def __init__(self, keep_tensor_names=True, custom_optimizers=None, dequantize=False):\n        self._keep_tensor_names = keep_tensor_names\n        self._custom_optimizers = custom_optimizers or {}\n        self._dequantize = dequantize\n\n    def __call__(self, graph, only_required=False):\n        self._fix_inputs_as_output(graph)\n        self._fix_inputs_without_producer(graph)\n\n        if not only_required:\n            changed = True\n            while changed:\n                changed = False\n\n                changed |= self._remove_identity_ops(graph, 'copy', lambda op: True)\n                changed |= self._remove_identity_ops(graph, 'transpose', lambda op:\n                    self._is_sorted(op.attribs['axes']))\n                changed |= self._remove_identity_ops(graph, 'reshape', lambda op:\n                    op.output.shape == op.input.shape)\n                changed |= self._remove_identity_ops(graph, 'squeeze', lambda op:\n                    op.attribs['axes'] == [])\n                changed |= self._remove_identity_ops(graph, 'unsqueeze', lambda op:\n                    op.attribs['axes'] == [])\n            
    changed |= self._remove_identity_ops(graph, 'mul', lambda op:\n                    self._is_constant(op.inputs[0], 1.0) or self._is_constant(op.inputs[1], 1.0))\n                changed |= self._remove_identity_ops(graph, 'add', lambda op:\n                    self._is_constant(op.inputs[0], 0.0) or self._is_constant(op.inputs[1], 0.0))\n                changed |= self._remove_identity_ops(graph, ('box', 'debox', 'avg_pool', 'max_pool'), lambda op:\n                    self._is_uniform(op.attribs['size'], 1) and\n                    self._is_uniform(op.attribs['stride'], 1) and\n                    self._is_uniform(op.attribs['dilation'], 1) and\n                    self._is_uniform(op.attribs['padding'], 0))\n                changed |= self._remove_identity_ops(graph,\n                    ('nearest_downsample', 'area_downsample', 'nearest_upsample', 'multilinear_upsample'), lambda op:\n                    self._is_uniform(op.attribs['factor'], 1))\n\n                changed |= self._remove_inverse_ops(graph, 'squeeze', 'unsqueeze', lambda op1, op2:\n                    op1.attribs['axes'] == op2.attribs['axes'])\n                changed |= self._remove_inverse_ops(graph, 'unsqueeze', 'squeeze', lambda op1, op2:\n                    op1.attribs['axes'] == op2.attribs['axes'])\n                changed |= self._remove_inverse_ops(graph, 'transpose', 'transpose', lambda op1, op2:\n                    self._is_sorted(Optimizer._permute(op1.attribs['axes'], op2.attribs['axes'])))\n\n                changed |= self._merge_op_into_variables_and_constants(graph, 'transpose', lambda data, attribs:\n                    data.transpose(attribs['axes']))\n                changed |= self._merge_op_into_variables_and_constants(graph, 'reshape', lambda data, attribs:\n                    data.reshape(self._get_reshape_shape(data, attribs)))\n                changed |= self._merge_op_into_variables_and_constants(graph, 'squeeze', lambda data, attribs:\n                    
data.squeeze(tuple(attribs['axes'])))\n                changed |= self._merge_op_into_variables_and_constants(graph, 'unsqueeze', lambda data, attribs:\n                    data.reshape(self._unsqueeze_shape(data.shape, attribs['axes'])))\n\n                changed |= self._merge_reshape_sequence(graph)\n\n                changed |= replace_chain(graph, ['pad', {'conv', 'deconv', 'max_pool', 'avg_pool'}],\n                                         self._merge_pad_with_sliding)\n                changed |= replace_chain(graph, [{'mul', 'div'}, {'conv', 'deconv', 'linear'}],\n                                         self._merge_mul_linear, allow_forks=True)\n                changed |= replace_chain(graph, [{'conv', 'deconv', 'linear'}, {'add', 'sub'}],\n                                         self._merge_linear_add)\n                changed |= replace_chain(graph, [{'conv', 'deconv', 'linear'}, {'mul', 'div'}],\n                                         self._merge_linear_mul)\n                changed |= replace_chain(graph, ['matmul', {'add', 'sub'}],\n                                         self._merge_matmul_bias)\n                changed |= replace_chain(graph, [{'conv', 'deconv'}, 'batch_normalization'],\n                                         self._merge_conv_batch_norm)\n                changed |= replace_chain(graph, ['batch_normalization'], self._merge_batch_norm)\n                changed |= replace_chain(graph, ['transpose', 'squeeze'], self._merge_transpose_squeeze)\n                changed |= replace_chain(graph, ['reshape'], self._substitute_squeeze)\n\n                for chain, replacer in six.iteritems(self._custom_optimizers):\n                    changed |= replace_chain(graph, chain, replacer)\n\n                changed |= self._remove_unused_variables_and_constants(graph)\n\n        if self._keep_tensor_names:\n            generate_missing_tensor_names_from_op_type(graph)\n        else:\n            generate_tensor_names_from_op_type(graph)\n\n   
     if self._dequantize:\n            Optimizer._dequantize_variables(graph)\n            Optimizer._remove_quantization_attribs(graph)\n\n        return graph\n\n    @staticmethod\n    def _fix_inputs_without_producer(graph):\n        idx = 0\n        for tensor in graph.inputs:\n            if tensor.producer is None:\n                cnt = len(graph.operations)\n                Operation(tensor.graph, type='external', outputs=tensor,\n                          attribs={'shape': list(tensor.shape), 'dtype': tensor.dtype})\n                graph.move_operation(cnt, idx)\n                idx += 1\n        return idx > 0\n\n    @staticmethod\n    def _fix_inputs_as_output(graph):\n        graph.outputs = [Optimizer._insert_copy(tensor) if tensor in graph.inputs else tensor\n                         for tensor in graph.outputs]\n\n    @staticmethod\n    def _insert_copy(tensor, copy=None):\n        if copy is None:\n            copy = Tensor(tensor.graph, name=tensor.name + '_copy', dtype=tensor.dtype, shape=tensor.shape,\n                          data=tensor.data, quant=tensor.quant)\n        Operation(tensor.graph, type='copy', inputs=tensor, outputs=copy)\n        return copy\n\n    @staticmethod\n    def _match_op_type(type, types):\n        return type in types if isinstance(types, tuple) else type == types\n\n    def _remove_identity_ops(self, graph, type, cond):\n        changed = False\n        for op in graph.operations:\n            if self._match_op_type(op.type, type) and cond(op) and op.input.quant == op.output.quant:\n                changed |= self._bypass_and_remove(graph, op)\n\n        return changed\n\n    def _merge_op_into_variables_and_constants(self, graph, type, func):\n        changed = False\n        for op in graph.operations:\n            if (op.type == 'variable' or op.type == 'constant') and len(op.output.consumers) > 0:\n                if self._all_consumers_same(op.output, type):\n                    data = op.output.data if 
op.output.data is not None else np.zeros(op.output.shape)\n                    attribs = op.output.consumers[0].attribs\n                    data = func(data, attribs)\n                    if op.output.data is not None:\n                        op.output.data = data\n                    op.output.shape = data.shape\n                    op.attribs['shape'] = list(data.shape)\n                    for consumer in list(op.output.consumers):  # copy the list before removals!\n                        changed |= self._bypass_and_remove(graph, consumer)\n\n        return changed\n\n    def _remove_inverse_ops(self, graph, type1, type2, cond):\n        changed = False\n        for op in graph.operations:\n            if op.type == type1 and len(op.output.consumers) == 1:\n                consumer = op.output.consumer\n                if consumer.type == type2 and cond(op, consumer):\n                    changed |= self._bypass_and_remove(graph, op)\n                    changed |= self._bypass_and_remove(graph, consumer)\n\n        return changed\n\n    def _merge_reshape_sequence(self, graph):\n        changed = False\n        for op in graph.operations:\n            if op.type == 'reshape' and len(op.output.consumers) == 1:\n                consumer = op.output.consumer\n                if consumer.type == 'reshape':\n                    new_shape = self._get_reshape_shape(consumer.input, consumer.attribs)\n                    if any(s == 0 for s in new_shape):\n                        old_shape = self._get_reshape_shape(op.input, op.attribs)\n                        new_shape = [old_shape[i] if s == 0 else s for i, s in enumerate(new_shape)]\n\n                    consumer.attribs['shape'] = new_shape\n                    del consumer.attribs['axis_start']\n                    del consumer.attribs['axis_count']\n\n                    changed |= self._bypass_and_remove(graph, op)\n\n        return changed\n\n    def _get_reshape_shape(self, input, attribs):\n        start 
= attribs.get('axis_start', 0)\n        count = attribs.get('axis_count', len(input.shape) - start)\n        shape = attribs['shape']\n        return input.shape[:start] + tuple(shape) + input.shape[start + count:]\n\n    def _bypass_and_remove(self, graph, op):\n        if op.output in graph.outputs and (op.input in graph.inputs or op.input in graph.outputs):\n            self._insert_copy(op.input, op.output)\n            graph.remove_operation(op, unlink=True)\n            return False\n        else:\n            bypass_and_remove(graph, op, remove_input_not_output=op.output in graph.outputs)\n            return True\n\n    @staticmethod\n    def _is_channelwise_shape(shape):\n        return len(shape) <= 1 or all(s == 1 or i == 1 for i, s in enumerate(shape))\n\n    @staticmethod\n    def _merge_linear_add(linear, add, type=None):\n        bias = add.inputs[1] if add.inputs[0] == linear.output else add.inputs[0]\n        if bias.data is None or not Optimizer._is_channelwise_shape(bias.shape):\n            return False\n\n        if len(linear.inputs) > 2 and linear.inputs[2].data is None:\n            return None\n\n        if len(bias.shape) == 0:\n            bias.data = np.expand_dims(bias.data, axis=0)\n        elif len(bias.shape) >= 2:\n            bias.data = Optimizer._squeeze_batch_and_spatial_dims(bias.data)\n\n        bias.shape = bias.data.shape\n\n        if add.type == 'sub':\n            bias.data = -bias.data\n\n        if len(linear.inputs) == 2:\n            new_shape = (1, 1) if len(bias.shape) == 0 else (1, *bias.shape) if len(bias.shape) == 1 else None\n            if new_shape is not None:\n                bias.data = np.reshape(bias.data, newshape=new_shape)\n                bias.shape = new_shape\n        else:\n            bias.data = linear.inputs[2].data + bias.data\n            bias.shape = bias.data.shape\n\n        Optimizer._ensure_variable_producer(bias, label=linear.output.name + '_bias')\n\n        linear.copy_with(type=type or 
linear.type,\n                         attribs=linear.attribs if type != 'linear' else {},\n                         inputs=(linear.inputs[0], linear.inputs[1], bias),\n                         outputs=add.output)\n\n    @staticmethod\n    def _merge_matmul_bias(matmul, add):\n        bias = add.inputs[1] if add.inputs[0] == matmul.output else add.inputs[0]\n        if not Optimizer._is_channelwise_shape(bias.shape):\n            return False\n\n        transposeA = matmul.attribs.get('transposeA') or False\n        transposeB = matmul.attribs.get('transposeB') or False\n\n        if transposeA:\n            return False\n\n        if not transposeB:\n            producer = matmul.inputs[1].producer\n            data = matmul.inputs[1].data\n            if data is None or producer.type != 'variable':\n                return False\n\n            rank = len(data.shape)\n            data = np.transpose(data, axes=list(range(rank - 2)) + [rank - 1, rank - 2])\n            matmul.inputs[1].data = data\n            producer.attribs['shape'] = list(data.shape)\n            matmul.attribs['transposeB'] = True\n\n        return Optimizer._merge_linear_add(matmul, add, type='linear')\n\n    @staticmethod\n    def _is_sorted(array):\n        return all(array[i] <= array[i + 1] for i in range(len(array) - 1))\n\n    @staticmethod\n    def _all_consumers_same(tensor, type):\n        attribs = tensor.consumers[0].attribs\n        return all(consumer.type == type and consumer.attribs == attribs for consumer in tensor.consumers)\n\n    @staticmethod\n    def _unsqueeze_shape(shape, axes):\n        for axis in axes:\n            shape = shape[:axis] + (1,) + shape[axis:]\n        return shape\n\n    @staticmethod\n    def _permute(items, perm):\n        permuted = list(items)\n        for i in range(len(perm)):\n            permuted[i] = items[perm[i]]\n        return type(items)(permuted)\n\n    @staticmethod\n    def _add_variable(graph, data, name, label=None):\n        output = 
Tensor(graph, name=name, shape=data.shape, dtype=data.dtype.type, data=data)\n        Operation(graph, type='variable', outputs=output, attribs={'shape': list(data.shape), 'label': label or name})\n        return output\n\n    @staticmethod\n    def _ensure_variable_producer(tensor, label):\n        if tensor.producer is None and len(tensor.shape) != 0:\n            Operation(tensor.graph, type='variable', outputs=tensor,\n                      attribs={'shape': list(tensor.shape), 'label': label})\n        elif tensor.producer is not None:\n            tensor.producer.attribs['shape'] = list(tensor.shape)\n\n    @staticmethod\n    def _merged_conv_batch_norm_params(weights, bias, mean, variance, offset, scale, epsilon, axis):\n        std = np.sqrt(variance + epsilon)\n        factor = scale / std\n        new_weights = weights * np.reshape(factor, newshape=(1,) * axis + factor.shape + (1,) * (len(weights.shape) - axis - 1))\n        new_bias = (bias - mean) * factor + offset\n        return new_weights, new_bias\n\n    @staticmethod\n    def _merge_conv_batch_norm(conv, bn):\n        if any(tensor.quant for tensor in conv.inputs) or any(tensor.quant for tensor in bn.inputs):\n            return False\n\n        if conv.inputs[1].data is None:\n            return False\n\n        weights, bias = Optimizer._merged_conv_batch_norm_params(conv.inputs[1].data,\n                                    np.squeeze(conv.inputs[2].data if len(conv.inputs) > 2 else 0, axis=0),\n                                    np.squeeze(bn.inputs[1].data, axis=0),\n                                    np.squeeze(bn.inputs[2].data, axis=0),\n                                    np.squeeze(bn.inputs[3].data if len(bn.inputs) > 3 else 0, axis=0),\n                                    np.squeeze(bn.inputs[4].data if len(bn.inputs) > 4 else 1, axis=0),\n                                    bn.attribs['epsilon'],\n                                    axis=1 if conv.type == 'deconv' else 0)\n\n        
bias = np.expand_dims(bias, axis=0)\n\n        conv.inputs[1].data = weights\n\n        if len(conv.inputs) > 2:\n            conv.inputs[2].data = bias\n            conv.inputs[2].shape = bias.shape\n            Optimizer._ensure_variable_producer(conv.inputs[2], label=conv.output.name + '_bias')\n            conv.copy_with(outputs=bn.output)\n        else:\n            bias = Optimizer._add_variable(conv.graph, data=bias, name=conv.output.name + '_bias')\n            conv.copy_with(inputs=(*conv.inputs[:2], bias), outputs=bn.output)\n\n    @staticmethod\n    def _merged_batch_norm_params(mean, variance, offset, scale, epsilon):\n        std = np.sqrt(variance + epsilon)\n        factor = scale / std\n        return factor, offset - factor * mean\n\n    @staticmethod\n    def _merge_batch_norm(bn):\n        if any(tensor.quant for tensor in bn.inputs):\n            return False\n\n        scale, offset = Optimizer._merged_batch_norm_params(\n                                    bn.inputs[1].data,\n                                    bn.inputs[2].data,\n                                    bn.inputs[3].data if len(bn.inputs) > 3 else 0,\n                                    bn.inputs[4].data if len(bn.inputs) > 4 else 1,\n                                    bn.attribs['epsilon'])\n\n        scale = Optimizer._add_variable(bn.graph, data=scale, name=bn.output.name + '_scale')\n        offset = Optimizer._add_variable(bn.graph, data=offset, name=bn.output.name + '_offset')\n\n        scaled = Tensor(graph=bn.graph, name=bn.output.name + '_scaled', shape=bn.output.shape, dtype=bn.output.dtype)\n\n        Operation(graph=bn.graph, type='mul', inputs=(bn.inputs[0], scale), outputs=scaled)\n        Operation(graph=bn.graph, type='add', inputs=(scaled, offset), outputs=bn.output)\n\n    @staticmethod\n    def _merge_mul_linear(mul, linear):\n        which = 0 if mul.inputs[0].data is not None else 1\n        other = 1 - which\n\n        variable = mul.inputs[which]\n        
if variable.data is None or not Optimizer._is_channelwise_shape(variable.shape):\n            return False\n\n        if len(variable.shape) == 0:\n            scale = np.expand_dims(variable.data, axis=0)\n        elif len(variable.shape) >= 2:\n            scale = Optimizer._squeeze_batch_and_spatial_dims(variable.data)\n        else:\n            scale = variable.data\n\n        weights = linear.inputs[1]\n        if weights.data is None:\n            return False\n\n        rank = len(weights.shape)\n        shape = scale.shape + (1,) * (rank - 1) if linear.type == 'deconv' else (1,) + scale.shape + (1,) * (rank - 2)\n        scale = np.reshape(scale, newshape=shape)\n\n        weights.data = weights.data * scale if mul.type != 'div' else weights.data / scale\n\n        linear.copy_with(inputs=(mul.inputs[other], weights, *linear.inputs[2:]), outputs=linear.output)\n\n    @staticmethod\n    def _merge_linear_mul(linear, mul):\n        variable = mul.inputs[1] if mul.inputs[0] == linear.output else mul.inputs[0]\n        if variable.data is None or not Optimizer._is_channelwise_shape(variable.shape):\n            return False\n\n        if len(variable.shape) == 0:\n            scale = np.expand_dims(variable.data, axis=0)\n        elif len(variable.shape) >= 2:\n            scale = Optimizer._squeeze_batch_and_spatial_dims(variable.data)\n        else:\n            scale = variable.data\n\n        negate = mul.type == 'div'\n\n        weights = linear.inputs[1]\n        if weights.data is None:\n            return False\n\n        if len(linear.inputs) > 2:\n            bias = linear.inputs[2]\n            if bias.data is None:\n                return False\n\n            bias.data = bias.data * scale if not negate else bias.data / scale\n            bias.shape = bias.data.shape\n\n            Optimizer._ensure_variable_producer(bias, label=linear.output.name + '_bias')\n\n        rank = len(weights.shape)\n        shape = (1,) + scale.shape + (1,) * (rank - 2) if linear.type == 'deconv' else scale.shape + (1,) * (rank - 1)\n        scale = np.reshape(scale, 
newshape=shape)\n\n        weights.data = weights.data * scale if not negate else weights.data / scale\n\n        linear.copy_with(inputs=(linear.inputs[0], weights, *linear.inputs[2:]), outputs=mul.output)\n\n    @staticmethod\n    def _remove_unused_variables_and_constants(graph):\n        ops = {op for op in graph.operations\n               if (op.type == 'variable' or op.type == 'constant') and len(op.output.consumers) == 0}\n        tensors = {op.output for op in ops}\n        graph.remove_operations(ops, unlink=True)\n        graph.remove_tensors(tensors)\n        return len(ops) != 0\n\n    @staticmethod\n    def _merge_pad_with_sliding(pad, sliding):\n        offset = 2 if sliding.type == 'conv' or sliding.type == 'deconv' else 0\n        padding = pad.attribs['padding']\n\n        if not all(p == 0 and q == 0 for p, q in sliding.attribs['padding']) or \\\n                len(padding) < offset or not all(p == 0 and q == 0 for p, q in padding[:offset]):\n            return False\n\n        attribs = dict(sliding.attribs)\n        attribs['padding'] = pad.attribs['padding'][offset:]\n        attribs['border'] = pad.attribs['border']\n\n        sliding.copy_with(inputs=(pad.input, *sliding.inputs[1:]), attribs=attribs)\n\n    @staticmethod\n    def _squeeze_batch_and_spatial_dims(data):\n        return np.squeeze(data, axis=(0,) + tuple(i for i in range(2, len(data.shape))))\n\n    @staticmethod\n    def _is_constant(tensor, value):\n        if tensor.producer is not None and tensor.producer.type == 'constant':\n            data = tensor.producer.attribs['value']\n        else:\n            data = tensor.data\n\n        return (not isinstance(data, np.ndarray) or data.shape == ()) and data == value\n\n    @staticmethod\n    def _is_uniform(array, value):\n        return all(item == value for item in array)\n\n    @staticmethod\n    def _merge_transpose_squeeze(transpose, squeeze):\n        transpose_axes = transpose.attribs['axes']\n        squeeze_axes = 
squeeze.attribs['axes']\n\n        squeezed = [x for i, x in enumerate(transpose_axes) if i not in squeeze_axes]\n        is_identity = squeezed == list(range(len(squeezed)))\n\n        if not is_identity:\n            return False\n\n        attribs = dict(squeeze.attribs)\n        attribs['axes'] = [transpose_axes[x] for x in squeeze_axes]\n\n        squeeze.copy_with(inputs=transpose.input, attribs=attribs)\n\n    @staticmethod\n    def _substitute_squeeze(reshape):\n        input_shape = reshape.input.shape\n        output_shape = reshape.output.shape\n\n        if not len(output_shape) < len(input_shape):\n            return False\n\n        k = 0\n        axes = []\n        for i in range(len(input_shape)):\n            if k < len(output_shape) and input_shape[i] == output_shape[k]:\n                k += 1\n            elif input_shape[i] == 1:\n                axes.append(i)\n            else:\n                return False\n\n        attribs = {'axes': axes}\n        dtype = reshape.attribs.get('dtype')\n        if dtype is not None:\n            attribs['dtype'] = dtype\n\n        Operation(reshape.graph, type='squeeze', name=reshape.name, inputs=reshape.input, outputs=reshape.output,\n                  attribs=attribs)\n\n    @staticmethod\n    def _dequantize_variables(graph):\n        for tensor in graph.tensors:\n            if tensor.quant and tensor.data is not None:\n                rank = len(tensor.data.shape)\n                scale = Optimizer._ensure_quant_param_rank(tensor.quant.get('scale'), rank)\n                zero_point = Optimizer._ensure_quant_param_rank(tensor.quant.get('zero_point'), rank)\n                if isinstance(zero_point, np.ndarray):\n                    assert Optimizer._broadcastable(zero_point.shape, tensor.shape), \\\n                        f\"zero-point shape {zero_point.shape} cannot be broadcast to tensor shape {tensor.shape} \" \\\n                        f\"for tensor '{tensor.name}'\"\n                if 
isinstance(scale, np.ndarray):\n                    assert Optimizer._broadcastable(scale.shape, tensor.shape), \\\n                        f\"scale shape {scale.shape} cannot be broadcast to tensor shape {tensor.shape} \" \\\n                        f\"for tensor '{tensor.name}'\"\n                if scale is not None and not Optimizer._is_zero(scale):\n                    dequantized = (tensor.data - zero_point) * scale\n                    tensor.data = dequantized.astype(np.float32)\n                    tensor.quant = None\n\n    @staticmethod\n    def _remove_quantization_attribs(graph):\n        for tensor in graph.tensors:\n            tensor.quant = None\n\n    @staticmethod\n    def _ensure_quant_param_rank(param, rank, offset=0):\n        return np.reshape(param, newshape=(1,) * offset + param.shape + (1,) * (rank - 1 - offset)) \\\n            if isinstance(param, np.ndarray) and len(param.shape) == 1 else param\n\n    @staticmethod\n    def _broadcastable(x, y):\n        return all(xi == yi or xi == 1 for xi, yi in zip(x, y))\n\n    @staticmethod\n    def _is_zero(value):\n        return np.all(value == 0) if isinstance(value, np.ndarray) else value == 0\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/optimization/onnx_optimizer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom ..model.utils import replace_chain\nimport six\n\n\nclass Optimizer:\n\n    def __init__(self, custom_optimizers=None):\n        self._custom_optimizers = custom_optimizers or {}\n\n    def __call__(self, graph, only_required=False):\n        self._fix_batchnorm_spatial(graph)\n        for chain, replacer in six.iteritems(self._custom_optimizers):\n            replace_chain(graph, chain, replacer)\n        return graph\n\n    @staticmethod\n    def _fix_batchnorm_spatial(graph):\n        for op in graph.operations:\n            if op.type == 'BatchNormalization':\n                spatial = op.attribs.get('spatial')\n                if spatial == 0 and op.inputs[1].rank == 1:\n                    del op.attribs['spatial']\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/optimization/tf_optimizer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom ..model.utils import replace_chain\nfrom ..model.graph import *\nfrom ..utils.types import from_numpy\nimport numpy as np\nimport six\n\n\nclass Optimizer:\n\n    def __init__(self, custom_optimizers=None):\n        self._custom_optimizers = custom_optimizers or {}\n\n    def __call__(self, graph, only_required=False):\n        self._fix_inputs_without_producer(graph)\n        replace_chain(graph, ['SpaceToBatchND', {'Conv2D', 'DepthwiseConv2dNative'}, 'BatchToSpaceND'], self._replace_dilated_conv)\n        replace_chain(graph, ['Cast'], self._replace_bool_cast)\n        for chain, replacer in six.iteritems(self._custom_optimizers):\n            replace_chain(graph, chain, replacer)\n        if not only_required:\n            self._remove_unused_constants(graph)\n        return graph\n\n    @staticmethod\n    def _fix_inputs_without_producer(graph):\n        idx = 0\n        for tensor in graph.inputs:\n            if tensor.producer is None:\n                cnt = len(graph.operations)\n                Operation(tensor.graph, type='Placeholder', name=Optimizer._op_name(tensor.name), outputs=tensor,\n                          attribs={'shape': tensor.shape, 'dtype': tensor.dtype})\n                graph.move_operation(cnt, idx)\n                idx += 1\n        return idx > 0\n\n    @staticmethod\n    def _op_name(tensor_name):\n        idx = tensor_name.find(':')\n        
return tensor_name[:idx] if idx != -1 else tensor_name\n\n    @staticmethod\n    def _remove_unused_constants(graph):\n        ops = [op for op in graph.operations if op.type == 'Const' and len(op.output.consumers) == 0]\n        tensors = [op.output for op in ops]\n        graph.remove_operations(ops, unlink=True)\n        graph.remove_tensors(tensors)\n\n    @staticmethod\n    def _replace_dilated_conv(space_to_batch, conv, batch_to_space):\n        if not Optimizer._is_constant(space_to_batch.inputs[1]) or not Optimizer._is_constant(batch_to_space.inputs[1]):\n            return False\n\n        block_shape1 = Optimizer._read_constant(space_to_batch.inputs[1])\n        block_shape2 = Optimizer._read_constant(batch_to_space.inputs[1])\n\n        if not np.all(block_shape1 == block_shape2):\n            return False\n\n        if conv.attribs['padding'] != 'VALID':\n            return False\n\n        dilations = from_numpy(block_shape1)\n\n        input = space_to_batch.inputs[0]\n        filter = conv.inputs[1]\n        output = batch_to_space.outputs[0]\n\n        is_nxc = Optimizer._is_nxc(conv.attribs['data_format'])\n        same_padding = Optimizer._is_same_padded(input.shape, output.shape, conv.attribs['strides'], is_nxc)\n\n        if not same_padding:\n            return False\n\n        op = conv.copy_with(inputs=(input, filter, *conv.inputs[2:]), outputs=output, attribs=dict(conv.attribs))\n\n        op.attribs['dilations'] = [1] + dilations + [1] if is_nxc else [1, 1] + dilations\n        op.attribs['padding'] = 'SAME'\n        if '_output_shapes' in op.attribs:\n            op.attribs['_output_shapes'] = batch_to_space.attribs['_output_shapes']\n\n    @staticmethod\n    def _replace_bool_cast(cast):\n        if cast.input.dtype == bool and cast.output.dtype != bool:\n            ones = Tensor(cast.graph, name=cast.name + '/ones', dtype=cast.output.dtype, shape=cast.output.shape,\n                          data=np.full(fill_value=1, 
dtype=cast.output.dtype, shape=cast.output.shape))\n            zeros = Tensor(cast.graph, name=cast.name + '/zeros', dtype=cast.output.dtype, shape=cast.output.shape,\n                           data=np.full(fill_value=0, dtype=cast.output.dtype, shape=cast.output.shape))\n            Optimizer._make_constant_producer(ones)\n            Optimizer._make_constant_producer(zeros)\n            Operation(cast.graph, type='Select', name=cast.name, inputs=(cast.input, ones, zeros), outputs=cast.output,\n                      attribs={'T': cast.output.dtype})\n        elif cast.input.dtype != bool and cast.output.dtype == bool:\n            zeros = Tensor(cast.graph, name=cast.name + '/zeros', dtype=cast.input.dtype, shape=(),\n                           data=np.array(0, dtype=cast.input.dtype))\n            Optimizer._make_constant_producer(zeros)\n            Operation(cast.graph, type='NotEqual', name=cast.name, inputs=(cast.input, zeros), outputs=cast.output,\n                      attribs={'T': cast.output.dtype})\n        else:\n            return False\n\n    @staticmethod\n    def _is_constant(tensor):\n        return tensor.producer.type == 'Const' if tensor.producer else tensor.data is not None\n\n    @staticmethod\n    def _read_constant(tensor):\n        return tensor.producer.attribs['value'] if tensor.producer else tensor.data\n\n    @staticmethod\n    def _is_nxc(format):\n        return format[0] == 'N' and format[-1] == 'C' and len(format) > 2\n\n    @staticmethod\n    def _is_same_padded(input, output, stride, is_nxc):\n        rank = len(input)\n        return all(output[i] == (input[i] + stride[i] - 1) // stride[i]\n                   for i in (range(1, rank - 1) if is_nxc else range(2, rank)))\n\n    @staticmethod\n    def _make_constant_producer(tensor):\n        Operation(tensor.graph, type='Const', name=tensor.name, outputs=tensor,\n                  attribs={'dtype': tensor.dtype, 'value': tensor.data})\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/optimization/tflite_optimizer.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom ..model.utils import replace_chain, bypass_and_remove\nfrom ..model.graph import *\nfrom ..utils.types import from_numpy\nimport numpy as np\nimport six\n\n\nclass Optimizer:\n\n    def __init__(self, custom_optimizers=None):\n        self._custom_optimizers = custom_optimizers or {}\n\n    def __call__(self, graph, only_required=False):\n        Optimizer._eliminate_variable_dequantize_ops(graph)\n        replace_chain(graph, ['SPACE_TO_BATCH_ND', {'CONV_2D', 'DEPTHWISE_CONV_2D'}, 'BATCH_TO_SPACE_ND'], self._replace_dilated_conv)\n        replace_chain(graph, ['RESHAPE', 'RESHAPE', 'PACK', 'PACK', 'RESHAPE'], self._replace_resize_nearest)\n        replace_chain(graph, ['SHAPE'], self._replace_const_shape)\n        for chain, replacer in six.iteritems(self._custom_optimizers):\n            replace_chain(graph, chain, replacer)\n        return graph\n\n    @staticmethod\n    def _replace_resize_nearest(reshape1, reshape2, pack1, pack2, reshape3):\n        def _all_inputs_same(op):\n            return len(op.inputs) > 0 and all(tensor is op.inputs[0] for tensor in op.inputs)\n\n        if not (reshape2.output.shape == reshape1.inputs[0].shape and\n                _all_inputs_same(pack1) and _all_inputs_same(pack2) and\n                len(pack1.inputs) == len(pack2.inputs)):\n            return False\n\n        input = reshape1.inputs[0]\n        output = reshape3.output\n        
size = Tensor(input.graph, shape=(len(output.shape) - 2,), dtype=np.int32, data=np.array(output.shape[1:-1]),\n                      name=output.name + '/size')\n        Operation(input.graph,\n                  type='RESIZE_NEAREST_NEIGHBOR',\n                  inputs=(input, size),\n                  outputs=output,\n                  attribs={\n                      'align_corners': False,\n                      'half_pixel_centers': False,\n                  })\n\n    @staticmethod\n    def _replace_dilated_conv(space_to_batch, conv, batch_to_space):\n        if not Optimizer._is_constant(space_to_batch.inputs[1]) or not Optimizer._is_constant(batch_to_space.inputs[1]):\n            return False\n\n        block_shape1 = Optimizer._read_constant(space_to_batch.inputs[1])\n        block_shape2 = Optimizer._read_constant(batch_to_space.inputs[1])\n\n        if not np.all(block_shape1 == block_shape2):\n            return False\n\n        if conv.attribs['padding'] != 'VALID':\n            return False\n\n        strides = [1, conv.attribs['stride_h'], conv.attribs['stride_w'], 1]\n        dilations = from_numpy(block_shape1)\n\n        input = space_to_batch.inputs[0]\n        filter = conv.inputs[1]\n        output = batch_to_space.outputs[0]\n\n        same_padding = Optimizer._is_same_padded(input.shape, output.shape, strides)\n\n        if not same_padding:\n            return False\n\n        op = conv.copy_with(inputs=(input, filter, *conv.inputs[2:]), outputs=output, attribs=dict(conv.attribs))\n\n        op.attribs['dilation_h_factor'] = dilations[0]\n        op.attribs['dilation_w_factor'] = dilations[1]\n        op.attribs['padding'] = 'SAME'\n\n    @staticmethod\n    def _is_constant(tensor):\n        return tensor.producer is None and tensor.data is not None\n\n    @staticmethod\n    def _read_constant(tensor):\n        return tensor.data\n\n    @staticmethod\n    def _is_same_padded(input, output, stride, is_nxc=True):\n        rank = len(input)\n    
    return all(output[i] == (input[i] + stride[i] - 1) // stride[i]\n                   for i in (range(1, rank - 1) if is_nxc else range(2, rank)))\n\n    @staticmethod\n    def _eliminate_variable_dequantize_ops(graph):\n        for op in list(graph.operations):\n            if op.type == 'DEQUANTIZE' and Optimizer._is_constant(op.input):\n                variable = op.input\n\n                if 'zero_point' in variable.quant and 'scale' in variable.quant:\n                    zero_point = variable.quant['zero_point']\n                    scale = variable.quant['scale']\n                    variable.data = (variable.data - zero_point) * scale\n\n                variable.data = variable.data.astype(np.float32)\n                variable.dtype = np.float32\n                variable.quant = None\n\n                bypass_and_remove(graph, op)\n\n    @staticmethod\n    def _replace_const_shape(shape):\n        if shape.input.shape is not None:\n            shape.output.data = np.array(shape.input.shape)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/quantize.py",
    "content": "from nnef_tools.io.nnef.reader import Reader\nfrom nnef_tools.io.nnef.writer import Writer\nimport numpy as np\nimport argparse\nimport json\nimport os\n\n\n_CONV_OPS = ['conv', 'deconv', 'separable_conv', 'separable_deconv']\n\n\ndef make_quantization(min, max, signed, symmetric):\n    if min > 0:\n        min = 0\n    if max < 0:\n        max = 0\n\n    if signed and symmetric:\n        if -max < min:\n            min = -max\n        if -min > max:\n            max = -min\n\n    scale = (max - min) / 255\n    zero_point = int((0 - min) / scale)\n\n    if signed:\n        zero_point -= 127 if symmetric else 128\n\n    return {'op-name': 'zero_point_linear_quantize', 'zero_point': zero_point, 'scale': scale,\n            'signed': signed, 'symmetric': symmetric, 'bits': 8}\n\n\ndef quantize_params(params, zero_point, scale, signed, symmetric):\n    min = (((-127 if symmetric else -128) if signed else 0) - zero_point) * scale\n    max = ((127 if signed else 255) - zero_point) * scale\n\n    params = np.clip(params, min, max)\n    return np.floor(params / scale + zero_point).astype(np.int8 if signed else np.uint8)\n\n\ndef quantize_bias(params, scale):\n    return np.floor(params / scale).astype(np.int32)\n\n\ndef is_conv_param(tensor):\n    return all(op.type in _CONV_OPS for op in tensor.consumers)\n\n\ndef is_conv_bias(tensor):\n    return len(tensor.consumers) == 1 and tensor.consumer.type in _CONV_OPS and len(tensor.consumer.inputs) > 2 and \\\n           tensor is tensor.consumer.inputs[2]\n\n\ndef main(args):\n    reader = Reader(infer_shapes=False)\n    model = reader(args.model)\n\n    stats_path = args.statistics or os.path.join(args.model, 'graph.stats')\n    if not os.path.exists(stats_path):\n        print(\"Could not find statistics file '{}'\".format(stats_path))\n        return -1\n\n    with open(stats_path, 'r') as file:\n        stats = json.load(file)\n\n    for tensor in model.tensors:\n        
stat = stats.get(tensor.name)\n        if stat is not None:\n\n            if args.percentile is not None:\n                lo = max(stat['mean'] - args.percentile * stat['std'], stat['min'])\n                hi = min(stat['mean'] + args.percentile * stat['std'], stat['max'])\n            else:\n                lo = stat['min']\n                hi = stat['max']\n\n            tensor.quant = make_quantization(lo, hi, args.signed, args.symmetric)\n\n            if tensor.data is not None:\n                tensor.data = quantize_params(tensor.data, tensor.quant['zero_point'], tensor.quant['scale'],\n                                              args.signed, args.symmetric)\n\n    if args.wide_bias:\n        for tensor in model.tensors:\n            if tensor.quant and tensor.data is not None and is_conv_bias(tensor):\n                conv = tensor.consumer\n                tensor.quant['bits'] = 32\n                tensor.quant['zero_point'] = 0\n                tensor.quant['scale'] = conv.inputs[0].quant['scale'] * conv.inputs[1].quant['scale']\n                tensor.data = quantize_bias(tensor.data, tensor.quant['scale'])\n\n    writer = Writer()\n    writer(model, args.output)\n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('model', type=str,\n                        help='The model to quantize')\n    parser.add_argument('--statistics', type=str, default=None,\n                        help='The tensor statistics to use for quantization')\n    parser.add_argument('--output', type=str, required=True,\n                        help='The path of the output model')\n    parser.add_argument('--signed', action='store_true',\n                        help='Whether to generate signed int8 quantized values instead of uint8')\n    parser.add_argument('--symmetric', action='store_true',\n                        help='Whether to quantize symmetrically and force zero-point to 0')\n    
parser.add_argument('--wide-bias', action='store_true',\n                        help='Whether to quantize biases into int32 values')\n    parser.add_argument('--percentile', type=float, default=None,\n                        help='Define ranges with approximate normal distribution percentiles; '\n                             'provide the number of standard deviations from the mean to be used')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/random_tensor.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .utils import stdio\nimport numpy as np\nimport argparse\nimport nnef\nimport sys\n\n\ndef _is_lambda(v):\n    LAMBDA = lambda: 0\n    return isinstance(v, type(LAMBDA)) and v.__name__ == LAMBDA.__name__\n\n\ndef uniform(min=0, max=1):\n    return lambda shape: np.random.uniform(min, max, shape)\n\n\ndef normal(mean=0, std=1):\n    return lambda shape: np.random.normal(mean, std, shape)\n\n\ndef bernoulli(prob=0.5):\n    return lambda shape: np.random.uniform(0, 1, shape) < prob\n\n\ndef main(args):\n    if args.output is None:\n        if not stdio.is_stdout_piped():\n            print(\"Output must be piped\", file=sys.stderr)\n            return -1\n        stdio.set_stdout_to_binary()\n\n    try:\n        distribution = eval(args.distribution)\n        if not _is_lambda(distribution):\n            distribution = distribution()\n    except Exception as e:\n        print(\"Could not evaluate distribution: \" + str(e), file=sys.stderr)\n        return -1\n\n    tensor = distribution(args.shape).astype(np.dtype(args.dtype))\n\n    if args.output is not None:\n        with open(args.output, 'wb') as file:\n            nnef.write_tensor(file, tensor)\n    else:\n        nnef.write_tensor(sys.stdout, tensor)\n        \n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('distribution', 
type=str,\n                        help='The distribution to generate values from')\n    parser.add_argument('--shape', type=int, nargs='+', required=True,\n                        help='The dimensions of the tensor to generate')\n    parser.add_argument('--dtype', type=str, default='float32',\n                        help='The data-type of the resulting tensor')\n    parser.add_argument('--output', type=str, default=None,\n                        help='File name to save the result into')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/utils/__init__.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/utils/stdio.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\n\n\ndef set_stdin_to_binary():\n    if sys.version_info >= (3, 0):\n        sys.stdin = sys.stdin.buffer\n    elif sys.platform == 'win32':\n        import os, msvcrt\n        msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)\n\n\ndef set_stdout_to_binary():\n    if sys.version_info >= (3, 0):\n        sys.stdout = sys.stdout.buffer\n    elif sys.platform == 'win32':\n        import os, msvcrt\n        msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)\n\n\ndef is_stdin_piped():\n    return not sys.stdin.isatty()\n\n\ndef is_stdout_piped():\n    return not sys.stdout.isatty()\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/utils/types.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nimport numpy as np\nfrom collections.abc import Sequence\n\n\n# noinspection PyUnresolvedReferences\ndef as_str(s):\n    if sys.version_info[0] >= 3:\n        return s.decode('utf-8') if isinstance(s, bytes) else s\n    else:\n        return s.encode('utf-8') if isinstance(s, unicode) else s\n\n\nPyTypeFromNumpyDtype = {\n    np.float16: float,\n    np.float32: float,\n    np.float64: float,\n    np.int8: int,\n    np.int16: int,\n    np.int32: int,\n    np.int64: int,\n    np.uint8: int,\n    np.uint16: int,\n    np.uint32: int,\n    np.uint64: int,\n    np.bool_: bool,\n    np.str_: str,\n}\n\nPyTypeToNumpyDtype = {\n    int: np.int32,\n    float: np.float32,\n    bool: np.bool_,\n    str: np.str_,\n}\n\n\n_builtin_type = type\n\n\ndef cast(value, type):\n    return _builtin_type(value)(cast(item, type) for item in value) if isinstance(value, Sequence) else type(value)\n\n\ndef from_numpy(array, type=None):\n    if type is None:\n        type = PyTypeFromNumpyDtype[array.dtype.type]\n    return cast(array.tolist(), type)\n\n\ndef to_numpy(value, dtype=None):\n    def _item(value):\n        return _item(value[0]) if isinstance(value, Sequence) else value\n\n    if dtype is None:\n        dtype = PyTypeToNumpyDtype.get(type(_item(value)))\n    return np.array(value, dtype=dtype)\n"
  },
  {
    "path": "nnef_tools-pyproject/nnef_tools/visualize.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nfrom .io.nnef import Reader\nfrom .io.nnef.writer import _DtypeFromNumpy\nfrom graphviz import Digraph\nimport numpy as np\nimport os\n\n\ndef _text_with_size(text, size):\n    return '<FONT POINT-SIZE=\"{}\">{}</FONT>'.format(size, text)\n\n\ndef _format_tensor_label(tensor):\n    return '< {}<BR/> {}>'.format(tensor.name, _text_with_size(_dtype_str(tensor.dtype) + _shape_str(tensor.shape), size=10))\n\n\ndef _dtype_str(dtype):\n    return _DtypeFromNumpy[dtype.type if isinstance(dtype, np.dtype) else dtype]\n\n\ndef _shape_str(shape):\n    return '[' + ','.join(str(s) if s is not None else '?' 
for s in shape) + ']' if shape is not None else ''\n\n\ndef _truncate(text, max_length=32):\n    return text[:max_length - 3] + \"...\" if len(text) > max_length else text\n\n\ndef _attribs_str(attribs, separator):\n    s = []\n    for k, v in sorted(attribs.items(), key=lambda e: e[0]):\n        if k == \"label\":\n            v = \".../\" + v.split('/')[-1]\n        elif k == \"dtype\":\n            v = _dtype_str(v)\n        s.append(\"{}{}{}\".format(k, separator, _truncate(str(v))))\n    return s\n\n\ndef _format_op_details(op):\n    s = [\n        \"inputs: \" + \", \".join(str(tensor.name if tensor.producer is not None else tensor.data) for tensor in op.inputs),\n        \"outputs: \" + \", \".join(str(tensor.name) for tensor in op.outputs),\n    ]\n\n    s.extend(_attribs_str(op.attribs, separator=': '))\n\n    return \"&#13;&#10;\".join(s)\n\n\ndef _format_op_label(op):\n    attrs = (_text_with_size(s, size=10) for s in _attribs_str(op.attribs, separator='='))\n    return '<{}<BR/>{}>'.format(op.type, '<BR/>'.join(attrs)) if len(op.attribs) else op.type\n\n\ndef _generate_digraph(graph, show_variables, verbose):\n    digraph = Digraph()\n\n    for op in graph.operations:\n        if (show_variables or op.type != \"variable\") and op.type != \"external\":\n            digraph.node(str(id(op)), _format_op_label(op) if verbose else op.type, shape=\"box\", tooltip=_format_op_details(op))\n\n    for tensor in graph.tensors:\n        if tensor.producer is not None and (show_variables or tensor.producer.type != \"variable\") and tensor.producer.type != \"external\":\n            for consumer in tensor.consumers:\n                digraph.edge(str(id(tensor.producer)), str(id(consumer)),\n                             label=_format_tensor_label(tensor) if verbose else \"  \" + tensor.name,\n                             labeltooltip=\"{}{}\".format(_dtype_str(tensor.dtype), _shape_str(tensor.shape)))\n\n    for tensor in graph.inputs:\n        
digraph.node(str(id(tensor)), _format_tensor_label(tensor) if verbose else tensor.name, shape=\"ellipse\",\n                     tooltip=\"{}{}\".format(_dtype_str(tensor.dtype), _shape_str(tensor.shape)))\n        for consumer in tensor.consumers:\n            digraph.edge(str(id(tensor)), str(id(consumer)), label=None)\n\n    for tensor in graph.outputs:\n        digraph.node(str(id(tensor)), _format_tensor_label(tensor) if verbose else tensor.name, shape=\"ellipse\",\n                     tooltip=\"{}{}\".format(_dtype_str(tensor.dtype), _shape_str(tensor.shape)))\n        digraph.edge(str(id(tensor.producer)), str(id(tensor)), label=None)\n\n    return digraph\n\n\ndef main(args):\n    reader = Reader(decomposed=args.decompose, infer_shapes=args.infer_shapes)\n    graph = reader(args.model)\n    digraph = _generate_digraph(graph, args.show_variables, args.verbose)\n    digraph.render(args.model + '.gv', format=args.format, cleanup=True)\n    os.rename(args.model + '.gv.' + args.format, args.model + '.' + args.format)\n    return 0\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('model', type=str,\n                        help='The model to visualize')\n    parser.add_argument('--decompose', type=str, nargs='*', default=None,\n                        help='Names of operators to be decomposed by NNEF parser')\n    parser.add_argument('--verbose', action='store_true',\n                        help='Add more info to the nodes and edges')\n    parser.add_argument('--show-variables', action='store_true',\n                        help='Show variables explicitly')\n    parser.add_argument('--infer-shapes', action='store_true',\n                        help='Perform shape inference and show in visualized graph')\n    parser.add_argument('--format', type=str, choices=['svg', 'pdf', 'png', 'dot'], default='svg',\n                        help='The format of the output')\n    exit(main(parser.parse_args()))\n"
  },
  {
    "path": "nnef_tools-pyproject/package_info.md",
    "content": "# NNEF Tools\n\nThis package contains a set of tools for converting and transforming machine learning models.\n\n## Usage\n\nFor basic usage, you have to supply an input format, an output format and an input model. The output model name defaults to the input model name suffixed with the output format, but it can also be supplied explicitly.\n\n```\npython -m nnef_tools.convert --input-format tf --output-format nnef --input-model my_model.pb --output-model my_model.nnef\n```\n\n### Setting input shapes\n\nIf the model has (partially) undefined shapes, the concrete shapes can be supplied with the `--input-shapes` argument. The input shapes must be a Python dict expression, with string keys of input tensor names and tuple values as shapes. It is enough to supply shapes for those inputs that we want to freeze. For example:\n\n```\n--input-shapes \"{'input': (1, 224, 224, 3)}\"\n```\n\n### Transposing inputs and outputs\n\nWhen converting between TF and NNEF, the (default) dimension ordering differs, and the model may be transposed (for example in case of 2D convolutional models). However, the inputs and outputs are not automatically transposed, as the converter cannot reliably decide which input and outputs represent images. Transposing inputs and outputs can be turned on by the `--io-transpose` option. There are two ways to use it: either to transpose all inputs and outputs, or to select the ones to be transposed. All inputs and outputs can be transposed by using `--io-transpose` without any further arguments, while selecting inputs and outputs can be done by providing a list of names:\n\n```\n--io-transpose \"input1\" \"input2\" \"output1\"\n```\n\n### Retaining input/output names\n\nDuring conversion, the converter may generate suitable names for tensors. 
However, the names of input and output tensors can be kept using the `--keep-io-names` option.\n\n\n### Folding constants\n\nThe original model may contain operations that are performed on constant tensors, mainly resulting from shapes that are known at conversion time, or that became known by setting them with the `--input-shapes` option. In this case, it can be useful to fold constant operations, since this simplifies the resulting graph. Furthermore, without constant folding, the graph may not even be convertible due to the presence of non-convertible operations, but constant folding may eliminate them and make the model convertible. To use it, simply turn on the `--fold-constants` option.\n\n### Optimizing the output model\n\nThe resulting model may contain operations or sequences of operations that can be merged or even eliminated as they result in a no-op. To do so, turn on the `--optimize` flag. This works when the output format is NNEF.\n\nThe converter can also be run with the same input and output format. In this case, the tool only reads and writes the model, with an optional optimization phase in between if the `--optimize` flag is set and an optimizer is available for the given format.\n\n### Handling unsupported operations\n\nWhen running into an unsupported operation, the converter stops the conversion process. It is possible to override this behavior by enabling mirror-conversion (one-to-one copying of the operation to the destination format) using the `--mirror-unsupported` flag. This may not result in a valid output model, but may be helpful for debugging.\n\n## Further options\n\nThe following further options can be used when the output format is NNEF:\n* The `--compress` option generates a compressed `tgz` file. 
It can also take a further compression level argument.\n* The `--annotate-shapes` flag generates the graph description with the shapes of tensors annotated in comments.\n* The `--output-names` option takes a list of tensor names, considers those as outputs, and only converts the sub-graph required to compute those outputs.\n* The `--tensor-mapping` option allows saving the mapping of tensor names (mapping from the input model to the output model) into a separate json file.\n\n\n## Conversion from TF Python code\n\nWhen starting from Python code, the first step is to export the graph into a graph-def protobuf (.pb) file, which can then be further converted to a different format. To do so, the package contains some utility functions to freeze the graph and save it. Simply import these utilities and call them in your Python code:\n\n```\nimport nnef_tools.io.tf.graphdef as graphdef\n# define your TF model here\nwith tf.Session() as sess:\n    ...     # initialize variables and train graph\n    graphdef.save_default_graph('path/to/save.pb', session=sess, outputs=...)\n```\n\nIf your model contains dynamic shapes, you can save the graph with concrete shapes by providing the input shapes to the save function. Furthermore, constant operations can also be folded while saving the model:\n\n```\ngraphdef.save_default_graph('path/to/save.pb', session=..., outputs=...,\n                            input_shapes={'input': (1, 224, 224, 3)},\n                            fold_constants=True)\n```\n\nOutputs can be specified as a list of tensors, or alternatively, they can be renamed by mapping tensors to strings as new names.\n\n### Saving composite functions as a single operation\n\nOften, when exporting a graph, it is desirable to convert a subgraph (compound operation) into a single operation. 
This can be done by defining the subgraph in a Python function and annotating it with the `@composite_function` decorator of the `graphdef` module:\n\n```\n@graphdef.composite_function\ndef my_compound_op(x, a, b):\n    return a * x + b\n```\n\nThen `graphdef.save_default_graph` will magically take care of the rest, by converting composite functions into `PyFunc` ops in the graph-def. Note, however, that if you are exporting such graphs repeatedly, you have to call `graphdef.reset_composites()` before the definition of the graph.\n\nHow exactly the signature of the function is converted depends on the invocation of the function: tensor arguments are converted to inputs, while non-tensor arguments are converted to attributes. It does not matter whether positional or keyword arguments are used. Outputs must be tensors:\n\n```\ngraphdef.reset_composites()\n\n# define the graph\nx = tf.placeholder(shape=(2,3), dtype=tf.float32, name='input')\ny = my_compound_op(x, a=4, b=5)   # x is treated as tensor, a and b as attributes\n\nwith tf.Session() as sess:\n    graphdef.save_default_graph('path/to/save.pb', session=sess, outputs={y: 'output'})\n```\n\nWhen exporting models containing composite functions, if the model has dynamic shapes, it is preferable to export it with concrete shapes and fold constants during export. This is because before converting composite functions to a single op, TF can still perform shape inference and constant folding automatically, but after the conversion, it cannot infer shapes or perform the computation of the `PyFunc` operations resulting from the composite functions. If there are no composite functions in the model, then concrete shapes can be provided later as well (during conversion), accompanied by constant folding.\n\nCollapsing composites to a single op when saving the graph can be turned off by `collapse_composites=False`. 
See `custom/composite_export_example.py` for more examples.\n\n\n#### **Important note**\n\nComposite functions **must not** get tensor inputs from sources other than the function arguments (such as global or class member variables). In that case, the code must be reorganized to make the actual composite function be called with explicitly marked tensor arguments. The same practice is also useful for attributes. In general, composite functions should be stateless.\n\n\n## Custom converter plugins\n\nThe coverage of the converter can be extended to custom operations. This is required, for example, when one wants to convert a composite function. Such a function is exported to the protobuf model as a `PyFunc` operation that records the name, attributes, inputs and outputs of the original composite function. However, a converter must be provided for that name. In the actual conversion process, the `PyFunc` node is replaced with an operator of the original name of the composite function, so that it can be referenced.\n\nThe conversion of operations is governed by `nnef_tools.conversion.Transform` instances mapped to operator types. To add a new operator to be converted, one needs to provide a map entry for the operator. This is done by providing a Python module to the converter that contains the mapping for custom operators in a dict with the standard name `CUSTOM_TRANSFORMS`. The module is injected into the converter with the `--custom-converters` option:\n\n```\n--custom-converters my.custom.plugin.module\n```\n\nwhere `my/custom/plugin/module.py` is a Python module accessible to the Python interpreter (either by providing an absolute path or by setting `PYTHONPATH`). 
Its contents may look like the following:\n\n```\nfrom nnef_tools.conversion import Transform\n\ndef my_conversion_helper_func(converter, ...):\n    ...\n\nCUSTOM_TRANSFORMS = {\n    'op_type_to_convert_from':\n        Transform(\n            type='op_type_to_convert_into',\n            name='optional_name_of_resulting_op',\n            inputs=(\n                # one entry for each input\n            ),\n            outputs=(\n                # one entry for each output\n            ),\n            attribs={\n                # one entry for each attribute\n            }\n        ),\n}\n```\n\nEntries are for the resulting operator, and may be constant Python values or expressions to be evaluated by the Python interpreter. Such expressions are written as Python strings that start with the `!` character, for example, `'!a+2'` evaluates the expression `a+2`. The expressions are evaluated in the context of the source operator (the one converted from) and the converter context (that is defined by the input and output formats). This context consists of the following:\n* The type of the source operator is accessed via the identifier `_type_`.\n* The name of the source operator is accessed via the identifier `_name_`.\n* Inputs of the source operator are accessed via the identifier `I`, which is a Python `list`. For example, the expression `'!I[0]'` results in the first input.\n* Outputs of the source operator are accessed via the identifier `O`, which is a Python `list`. For example, the expression `'!len(O)'` results in the number of outputs.\n* Attributes of the source operator are accessed via identifiers that match the names of the attributes. 
For example, if the source operator has attribute `a`, then the expression `'!a'` takes its value.\n* Furthermore, the following can be used in building complex expressions:\n    * All built-in Python operators and functions.\n    * All public member functions (not starting with `_`) defined by the converter in effect.\n    * All public functions (not starting with `_`) defined in the custom module. Such functions must take a converter as their first argument, but otherwise can take arbitrary arguments. The public methods of the converter can be used in their definition.\n\nThe `Transform` can further contain a `using={'id': '!expr', ...}` field, which may define intermediate expressions that are evaluated first and can be used in other expressions for attributes/inputs/outputs. If the dictionary is ordered, the entries may depend on each other.\n\nFurthermore, by adding an optional `cond='!expr'` field to the `Transform`, it is possible to achieve conditional conversion, only when the given expression evaluates to `True`. Otherwise, the converter treats it as if there was no converter provided for the given operator. This is to allow conversion of operations with only certain attribute values.\n\nSee `custom/custom_transforms_example.py` for more details.\n\nSimilarly to the above mechanism, custom shape inference functions and custom operator definitions (fragments) can be plugged into converters that convert from NNEF using the `--custom-shapes` and `--custom-fragments` options. This may be required for custom NNEF operators defined as fragments in the input when such fragments are not decomposed. The fragments and shape inference functions must be defined in Python module(s) supplied after the `--custom-shapes` or `--custom-fragments` option. The module may look like this:\n\n```\ndef my_custom_shape_function(input1_shape, ..., attrib1, ...):\n    ...     # assert validity of input shapes / attribs\n    ...     
# return calculated output shape(s)\n\nCUSTOM_SHAPES = {\n    'my_custom_op': my_custom_shape_function,\n}\n```\n\nor\n\n```\nop_fragment = \"\"\"\n# NNEF fragment declaration/definition goes here\n\"\"\"\n\nCUSTOM_FRAGMENTS = {\n    'op-name': op_fragment,\n}\n```\n\nFurthermore, the `--decompose` option can be used to let the NNEF parser decompose the (composite) operators listed after the option (as separate args).\n\nAdditionally, with a similar mechanism, custom optimization passes can also be injected into the converter. The optimizer can match sequential sub-graphs (chains), and replace them with another sequence of operations. To provide custom optimizer passes, the chains of operations to be replaced must be mapped onto functions that generate the replacement sequence after checking the chain to be replaced for validity:\n\n```\ndef replace_my_chain(a, b, c):   # a, b, c will contain the matched chain of ops in order when this is called\n    ...     # check attributes of the chain a, b, c to see if it should really be replaced;\n            # if not, return False (do not modify the graph before all checks)\n    ...     # create new tensors and operations in the graph that will replace the chain\n    ...     # either return nothing (None), or any non-False value\n\nCUSTOM_OPTIMIZERS = {\n    ('a', 'b', 'c'): replace_my_chain,      # use a tuple as key, since list is not hashable\n}\n```\n\nSee `custom/custom_optimizers_example.py` for more info.\n\n## Executing a model and saving activations\n\nA separate tool (`execute.py`) is available for executing a model. It requires a model and a format to be specified.\n\nThe inputs may be read from the (binary) input stream and outputs may be written to the (binary) output stream. 
Tensor data files can be piped as inputs and outputs:\n\n```\npython -m nnef_tools.execute < input.dat my_model.pb --format tf > output.dat\n```\n\nAlternatively, inputs can be randomly generated, and selected activations may be written to a folder, allowing different names to be specified:\n\n```\npython -m nnef_tools.execute my_model.pb --format tf --random \"uniform(0,1)\" --seed 0 --output-path . --output-names \"{'tensor-name1': 'save-name1', ...}\"\n```\n\nFurther options to the model executor:\n\n* The `--batch-size` option can be used to perform batched execution if the model specifies a batch size of 1 in its inputs, supplying the desired batch size. If the supplied batch size is 0, it means that the (common) batch size of the actual inputs is used. Furthermore, when the supplied batch size equals the one defined by the model, execution will be done one-by-one instead of a single batch, which may be useful for reducing the memory footprint.\n* The `--statistics` flag (followed by an optional output file path) can be used to generate activation statistics and save them in json format.\n* The `--tensor-mapping` option can be used to provide a tensor name mapping obtained from the conversion step to the executor, used in remapping tensor names when generating statistics. This may be useful for comparing executions of the same model in different formats.\n* Inputs and outputs (or activations) may need transposing before feeding into execution or after execution upon saving. This can be achieved with the `--io-transpose` flag. If no further arguments are listed, all tensors are transposed, but the transposed tensors can be controlled by enumerating a list of tensor names (as separate args). 
Inputs read from the input stream are transposed from channels first to last, while the outputs that are written to the output stream or saved are transposed from channels last to first if the format dictates so (TF/Lite).\n* The `--decompose` option can be used to let the NNEF parser decompose the (composite) operators listed after the option (as separate args).\n* The `--custom-operators` option can be used to inject custom operators into the executor by supplying a Python module after the option. The contents of the module may look like this:\n\n```\ndef my_custom_op(input1, ..., attrib1, ...):\n    ...     # calculate output using inputs / attribs\n\nCUSTOM_OPERATORS = {\n    'my_custom_op': my_custom_op,\n}\n```\n\nSee `custom/custom_operators_example.py` for more info.\n\nFurther tools are available for generating random tensors (`random_tensor.py`) and converting images to tensors (`image_tensor.py`). These tools write their results to the output stream and can be directed into a file or piped to `execute.py`.\n\n## Visualizing a model\n\nNNEF models can be visualized with the `visualize.py` tool. The tool generates an svg/pdf/png rendering of the NNEF graph:\n\n```\npython -m nnef_tools.visualize my_model.nnef --format svg\n```\n\nBy default, the render only contains the names of operations and tensors. In case of an svg output, _tooltips_ contain more details about nodes (op attributes, tensor dtypes and shapes). The shapes are only calculated if the `--infer-shapes` flag is turned on. To include those details in the render itself, use the `--verbose` flag.\n\n## GMAC calculation\n\nThe script `gmac.py` can be used to calculate the GMACs required to execute a model. 
By default, it only calculates linear operations (convolutions, matrix multiplies), but it is possible to add other groups of operations (pooling, normalization, reduction, up-sampling) into the calculation:\n\n```\npython -m nnef_tools.gmac my_model.nnef --include-pooling\n```\n\nThe calculation requires shape inference, so in case of custom operators, the `--custom-shapes` option should be used (same as for `convert.py`).\n\n## Troubleshooting\n\nSeveral things can go wrong during various stages of conversion, and sometimes it's hard to find where exactly it happened. Here are a few tips on how to get started:\n* If the export process starts from Python code in a framework such as TensorFlow or PyTorch, the first step is saving the model into a framework-specific format, such as TensorFlow protobuf or ONNX in case of PyTorch.\n    * Check the resulting model to see if it accurately reflects the framework code. TensorBoard or Netron viewer can be used for this purpose.\n    * If there is an error in this step, try to turn off certain flags during saving. For example, in `nnef_tools.io.tf.graphdef.save_default_graph`, try turning off the `fold_constants` and `collapse_composites` flags. The first merges operations on constant tensors, the second one merges composite operators into a single operation. By turning them off, errors in these transformation steps can be excluded.\n* If the conversion from any model format to NNEF fails, typical reasons are as follows:\n    * Conversion of some operator is not implemented. In this case, adding a custom converter using the `--custom-converters` option can solve the problem.\n    * There is a bug in the converter; for example, it does not support some parameter/version of an operator. 
In this case, file a bug for `nnef_tools`.\n* After the conversion to NNEF succeeds, check the converted model by executing it (`nnef_tools.execute`) on some (maybe random) inputs.\n    * Execution may itself fail if there are custom operators in the model, in which case custom executors can be injected with the `--custom-operators` option.\n    * If executed on non-random inputs, the outputs can be compared to results obtained from executing the same model in the original framework, or after saving it and executing the saved model (`nnef_tools.execute`). By comparing the results of those three stages, it is possible to tell in which stage something goes wrong. However, make sure to feed the same inputs to all stages, and beware that NNEF dimension order (channels first) is different from TensorFlow dimension order (channels last).\n    * If the failing stage is the saving step, see above for turning off certain options to see if those are the culprits.\n    * If the failing stage is the conversion step, first make sure to isolate optimizations by not using the `--optimize` option. The same goes for the `--fold-constants` option, to see if that causes problems.\n    * If conversion fails even without optimization and constant folding, it is usually due to the conversion of one of the operations, which must be found. Ideally, one would compare the intermediate tensors after each operation in a sequence, but exact comparison is hard to do automatically due to non one-to-one mappings during the conversion. However, generating statistics (`nnef_tools.execute --statistics`) for the same input for both models allows comparison of how execution proceeds in the two models and finding where the first difference occurs.\n* When in doubt about some of the tools, and this documentation does not provide enough information, check the help of the command-line tool itself (`-h` or `--help` option).\n"
  },
  {
    "path": "nnef_tools-pyproject/pyproject.toml",
    "content": "[project]\nname = \"nnef_tools\"\nversion = \"1.0.10\"\ndescription = \"A package for managing NNEF files\"\nrequires-python = \">=3.7\"\n\nclassifiers = [\n    'Intended Audience :: Developers',\n    'License :: OSI Approved :: Apache Software License',\n    'Programming Language :: Python :: 3',\n]\ndynamic = [\"readme\"]\nlicense = { file = \"LICENSE\" }\nkeywords = [\"nnef\"]\n\nauthors = [\n    { name = \"Viktor Gyenes\", email = \"viktor.gyenes@aimotive.com\" },\n    { name = \"Tamas Danyluk\", email = \"9149812+tdanyluk@users.noreply.github.com\" },\n]\nmaintainers = [{ name = \"Viktor Gyenes\", email = \"viktor.gyenes@aimotive.com\" }]\n\ndependencies = [\"future\", \"numpy\", \"six\", \"nnef\"]\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project.optional-dependencies]\ncaffe = [\"protobuf\", \"torch\"]\nonnx = [\"protobuf\", \"onnx\", \"onnx-simplifier\", \"onnxruntime\"]\ntensorflow-lite = [\"nnef_tools[tensorflow-protobuf]\", \"flatbuffers\"]\ntensorflow-protobuf = [\"tensorflow\"]\nvisualization = [\"graphviz\"]\nfull = [\"nnef_tools[tensorflow-lite,onnx,caffe,visualization]\"]\n\n[project.urls]\n\"Homepage\" = \"https://www.khronos.org/nnef\"\n\"Repository\" = \"https://github.com/KhronosGroup/NNEF-Tools\"\n\n[tool.setuptools.dynamic]\nreadme = { file = [\"package_info.md\"], content-type = \"text/markdown\" }\n\n[tool.setuptools]\npackage-dir = {\"nnef_tools\" = \"nnef_tools\"}\n"
  },
  {
    "path": "nnef_tools-pyproject/tests/conversion/graphdef_test.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nfrom nnef_tools.io.tf.graphdef.protobuf import GraphDef\nimport nnef_tools.io.nnef as nnef_io\nimport nnef_tools.io.tf.graphdef as graphdef\nimport nnef_tools.conversion.tf_to_nnef as tf_to_nnef\nimport nnef_tools.conversion.nnef_to_tf as nnef_to_tf\nimport nnef_tools.optimization.nnef_optimizer as nnef_opt\nimport nnef_tools.optimization.tf_optimizer as tf_opt\nimport unittest\nimport tempfile\nimport os\ntry:\n    import tensorflow.compat.v1 as tf\n    tf.disable_v2_behavior()\nexcept ImportError:\n    import tensorflow as tf\n\n\nUNITTEST_FOLDER = os.environ.get('UNITTEST_FOLDER')\n\n\nclass TestEnv(unittest.TestCase):\n\n    _network_folder = os.path.join(UNITTEST_FOLDER, 'tf/nets/') if UNITTEST_FOLDER else None\n    _output_folder = os.path.join(UNITTEST_FOLDER, 'tf/ops/') if UNITTEST_FOLDER else None\n    _io_transpose = True\n    _optimize = True\n\n    def setUp(self) -> None:\n        self._tf_reader = graphdef.Reader(fold_constants=True)\n        self._tf_writer = graphdef.Writer()\n        self._tf_optimizer = tf_opt.Optimizer()\n        self._nnef_optimizer = nnef_opt.Optimizer()\n        self._tf_to_nnef_converter = tf_to_nnef.Converter(io_transpose=self._io_transpose)\n        self._nnef_to_tf_converter = nnef_to_tf.Converter(io_transpose=self._io_transpose)\n        self._nnef_reader = 
nnef_io.Reader(custom_shapes=self._nnef_to_tf_converter.defined_shapes(),\n                                           decomposed=self._nnef_to_tf_converter.decomposed_operations())\n        self._nnef_writer = nnef_io.Writer(fragments=self._tf_to_nnef_converter.defined_operations())\n\n    def tearDown(self) -> None:\n        tf.reset_default_graph()\n\n    @staticmethod\n    def _save_graph_def(graph_def, filename):\n        with open(filename, 'wb') as file:\n            file.write(graph_def.SerializeToString())\n\n    @staticmethod\n    def _load_graph_def(filename):\n        graph_def = GraphDef()\n        with open(filename, 'rb') as file:\n            graph_def.ParseFromString(file.read())\n        return graph_def\n\n    @staticmethod\n    def _numpy_dtype(tensor):\n        return np.bool_ if tensor.dtype.is_bool else tensor.dtype.as_numpy_dtype()\n\n    @staticmethod\n    def _exec_graph_def(graph_def, only_first_output=False):\n        np.random.seed(0)\n\n        tf.reset_default_graph()\n        tf.import_graph_def(graph_def, name='')\n        ops = tf.get_default_graph().get_operations()\n\n        consumed = {tensor for op in ops for tensor in op.inputs}\n        inputs = [op.outputs[0] for op in ops if op.type == 'Placeholder']\n        if only_first_output:\n            outputs = [op.outputs[0] for op in ops if len(op.inputs) and op.outputs[0] not in consumed]\n        else:\n            outputs = [tensor for op in ops if len(op.inputs) for tensor in op.outputs if tensor not in consumed]\n        feed_dict = {tensor: TestEnv._random_data(TestEnv._numpy_dtype(tensor), tensor.shape.as_list())\n                     for tensor in inputs}\n\n        with tf.Session() as sess:\n            return sess.run(outputs, feed_dict=feed_dict)\n\n    @staticmethod\n    def _random_data(dtype, shape):\n        if dtype == bool:\n            return np.random.random(shape) > 0.5\n        else:\n            return np.random.random(shape).astype(dtype)\n\n    def 
_convert_to_nnef(self, filename, input_shapes=None):\n        tf_graph = self._tf_reader(filename, input_shapes=input_shapes)\n        tf_graph = self._tf_optimizer(tf_graph)\n        nnef_graph = self._tf_to_nnef_converter(tf_graph)\n        if self._optimize:\n            nnef_graph = self._nnef_optimizer(nnef_graph)\n        self._nnef_writer(nnef_graph, filename + '.nnef')\n\n    def _convert_from_nnef(self, filename):\n        nnef_graph = self._nnef_reader(filename)\n        tf_graph = self._nnef_to_tf_converter(nnef_graph)\n        self._tf_writer(tf_graph, filename + '.pb')\n\n    def _test_conversion(self, name, only_first_output=True, epsilon=1e-5):\n        filename = tempfile.mktemp() if self._output_folder is None else TestEnv._output_folder + name + '.pb'\n\n        graph_def = tf.get_default_graph().as_graph_def(add_shapes=True)\n        self._save_graph_def(graph_def, filename)\n\n        self._test_conversion_from_file(filename, only_first_output=only_first_output, epsilon=epsilon)\n\n    def _test_conversion_from_file(self, filename, only_first_output=True, input_shapes=None, epsilon=1e-5):\n        self._convert_to_nnef(filename, input_shapes)\n        self._convert_from_nnef(filename + '.nnef')\n\n        original_graph_def = self._load_graph_def(filename)\n        converted_graph_def = self._load_graph_def(filename + '.nnef.pb')\n\n        if input_shapes is not None:\n            original_graph_def = graphdef.set_input_shapes(original_graph_def, input_shapes)\n\n        original_outputs = self._exec_graph_def(original_graph_def, only_first_output)\n        converted_outputs = self._exec_graph_def(converted_graph_def, only_first_output)\n\n        self.assertEqual(len(original_outputs), len(converted_outputs))\n        for original, converted in zip(original_outputs, converted_outputs):\n            if original.dtype == bool:\n                self.assertTrue(np.all(original == converted))\n            else:\n                diff = 
np.max(np.abs(original - converted))\n                self.assertLess(diff, epsilon)\n\n\nclass TestCases(TestEnv):\n\n    def test_conv1d(self):\n        input = tf.placeholder(shape=(4, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv1d(input, filter, stride=1, padding='SAME')\n\n        self._test_conversion('conv1d')\n\n    def test_conv2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv2d(input, filter, strides=1, padding='SAME')\n\n        self._test_conversion('conv2d')\n\n    def test_conv3d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv3d(input, filter, strides=[1, 1, 1, 1, 1], padding='SAME')\n\n        self._test_conversion('conv3d')\n\n    def test_conv2d_explicit_padding(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv2d(input, filter, strides=1, padding=[(0, 0), (2, 2), (2, 2), (0, 0)])\n\n        self._test_conversion('conv2d-explicit-padding')\n\n    def test_conv2d_transpose(self):\n        input = tf.placeholder(shape=(4, 32, 32, 16), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv2d_transpose(input, filter, strides=1, padding='SAME', output_shape=(4, 32, 32, 3))\n\n        self._test_conversion('conv2d_transpose')\n\n    def test_depthwise_conv2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 2)), dtype=tf.float32)\n        output = 
tf.nn.depthwise_conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')\n\n        self._test_conversion('depthwise_conv2d')\n\n    def test_depthwise_conv2d_transpose(self):\n        input = tf.placeholder(shape=(4, 32, 32, 6), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 2)), dtype=tf.float32)\n        output = tf.nn.depthwise_conv2d_backprop_input([4, 32, 32, 3], filter, input, strides=[1, 1, 1, 1], padding='SAME')\n\n        self._test_conversion('depthwise_conv2d_transpose')\n\n    def test_conv2d_dilated(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv2d(input, filter, strides=1, dilations=2, padding='SAME')\n\n        self._test_conversion('conv2d_dilated')\n\n    def test_max_pool2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.max_pool2d(input, ksize=3, strides=1, padding='SAME')\n\n        self._test_conversion('max_pool2d')\n\n    def test_max_pool2d_with_index(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        maximum, index = tf.nn.max_pool_with_argmax(input, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')\n\n        self._test_conversion('max_pool2d_with_index')\n\n    def test_avg_pool2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.avg_pool2d(input, ksize=3, strides=1, padding='SAME')\n\n        self._test_conversion('avg_pool2d')\n\n    def test_min_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_min(input, axis=3, keepdims=True)\n\n        self._test_conversion('min_reduce')\n\n    def test_max_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_max(input, axis=3, keepdims=True)\n\n  
      self._test_conversion('max_reduce')\n\n    def test_mean_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_mean(input, axis=3, keepdims=True)\n\n        self._test_conversion('mean_reduce')\n\n    def test_sum_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_sum(input, axis=3, keepdims=True)\n\n        self._test_conversion('sum_reduce')\n\n    def test_any_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.reduce_any(input, axis=3, keepdims=True)\n\n        self._test_conversion('any_reduce')\n\n    def test_all_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.reduce_all(input, axis=3, keepdims=True)\n\n        self._test_conversion('all_reduce')\n\n    def test_argmin_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.argmin(input, axis=-1)\n        output = tf.expand_dims(output, axis=-1)\n\n        self._test_conversion('argmin_reduce')\n\n    def test_argmax_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.argmax(input, axis=-1)\n        output = tf.expand_dims(output, axis=-1)\n\n        self._test_conversion('argmax_reduce')\n\n    def test_concat(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.concat([input1, input2], axis=3)\n\n        self._test_conversion('concat')\n\n    def test_split_sizes(self):\n        input = tf.placeholder(shape=(4, 32, 32, 6), dtype=tf.float32)\n        [output1, output2] = tf.split(input, axis=3, num_or_size_splits=[3, 3])\n\n        self._test_conversion('split-sizes')\n\n    def test_split_num(self):\n        input = tf.placeholder(shape=(4, 32, 32, 
6), dtype=tf.float32)\n        [output1, output2] = tf.split(input, axis=3, num_or_size_splits=2)\n\n        self._test_conversion('split-num')\n\n    def test_reshape(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reshape(input, shape=(4, 32 * 32 * 3))\n\n        self._test_conversion('reshape')\n\n    def test_flatten(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reshape(input, shape=(4, -1))\n\n        self._test_conversion('flatten')\n\n    def test_squeeze(self):\n        input = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        squeezed = tf.squeeze(input, axis=[3])\n        output = tf.expand_dims(squeezed, axis=[3])\n\n        self._test_conversion('squeeze')\n\n    def test_squeeze_all(self):\n        input = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        squeezed = tf.squeeze(input)\n        output = tf.expand_dims(squeezed, axis=[3])\n\n        self._test_conversion('squeeze_all')\n\n    def test_transpose(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        trans = tf.transpose(input, perm=(0, 3, 1, 2))\n        output = tf.transpose(trans, perm=(0, 2, 3, 1))\n\n        self._test_conversion('transpose')\n\n    def test_stack(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        input1 = tf.squeeze(input1, axis=3)\n        input2 = tf.squeeze(input2, axis=3)\n        output = tf.stack([input1, input2], axis=3)\n\n        self._test_conversion('stack')\n\n    def test_unstack(self):\n        input = tf.placeholder(shape=(4, 32, 32, 2), dtype=tf.float32)\n        [output1, output2] = tf.unstack(input, axis=3)\n        output1 = tf.expand_dims(output1, axis=3)\n        output2 = tf.expand_dims(output2, axis=3)\n\n        self._test_conversion('unstack')\n\n    def 
test_add(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.add(input1, input2)\n\n        self._test_conversion('add')\n\n    def test_add_broadcast(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(3,), dtype=tf.float32)\n        output = tf.add(input1, input2)\n\n        self._test_conversion('add-broadcast')\n\n    def test_sub(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.subtract(input1, input2)\n\n        self._test_conversion('sub')\n\n    def test_sub_broadcast(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(3,), dtype=tf.float32)\n        output = tf.subtract(input1, input2)\n\n        self._test_conversion('sub-broadcast')\n\n    def test_mul(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.multiply(input1, input2)\n\n        self._test_conversion('mul')\n\n    def test_mul_broadcast(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(3,), dtype=tf.float32)\n        output = tf.multiply(input1, input2)\n\n        self._test_conversion('mul-broadcast')\n\n    def test_div(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.divide(input1, input2)\n\n        self._test_conversion('div')\n\n    def test_div_broadcast(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(3,), dtype=tf.float32)\n      
  output = tf.divide(input1, input2)\n\n        self._test_conversion('div-broadcast')\n\n    def test_pow(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pow(input1, input2)\n\n        self._test_conversion('pow')\n\n    def test_min(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.minimum(input1, input2)\n\n        self._test_conversion('min')\n\n    def test_max(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.maximum(input1, input2)\n\n        self._test_conversion('max')\n\n    def test_and(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.logical_and(input1, input2)\n\n        self._test_conversion('and')\n\n    def test_or(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.logical_or(input1, input2)\n\n        self._test_conversion('or')\n\n    def test_lt(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.less(input1, input2)\n\n        self._test_conversion('lt')\n\n    def test_gt(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.greater(input1, input2)\n\n        self._test_conversion('gt')\n\n    def test_le(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = 
tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.less_equal(input1, input2)\n\n        self._test_conversion('le')\n\n    def test_ge(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.greater_equal(input1, input2)\n\n        self._test_conversion('ge')\n\n    def test_eq(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.equal(input1, input2)\n\n        self._test_conversion('eq')\n\n    def test_ne(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.not_equal(input1, input2)\n\n        self._test_conversion('ne')\n\n    def test_identity(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.identity(input)\n\n        self._test_conversion('identity')\n\n    def test_relu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.relu(input)\n\n        self._test_conversion('relu')\n\n    def test_elu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.elu(input)\n\n        self._test_conversion('elu')\n\n    def test_selu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.selu(input)\n\n        self._test_conversion('selu')\n\n    def test_relu6(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.relu6(input)\n\n        self._test_conversion('relu6')\n\n    def test_leaky_relu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.leaky_relu(input, alpha=0.1)\n\n        
self._test_conversion('leaky_relu')\n\n    def test_sigmoid(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.sigmoid(input)\n\n        self._test_conversion('sigmoid')\n\n    def test_softplus(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.softplus(input)\n\n        self._test_conversion('softplus')\n\n    def test_exp(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.exp(input)\n\n        self._test_conversion('exp')\n\n    def test_log(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.log(input)\n\n        self._test_conversion('log')\n\n    def test_sin(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.sin(input)\n\n        self._test_conversion('sin')\n\n    def test_cos(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.cos(input)\n\n        self._test_conversion('cos')\n\n    def test_tan(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.tan(input)\n\n        self._test_conversion('tan')\n\n    def test_sinh(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.sinh(input)\n\n        self._test_conversion('sinh')\n\n    def test_cosh(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.cosh(input)\n\n        self._test_conversion('cosh')\n\n    def test_tanh(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.tanh(input)\n\n        self._test_conversion('tanh')\n\n    def test_sign(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.sign(input)\n\n        
self._test_conversion('sign')\n\n    def test_abs(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.abs(input)\n\n        self._test_conversion('abs')\n\n    def test_neg(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.negative(input)\n\n        self._test_conversion('neg')\n\n    def test_rcp(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.reciprocal(input)\n\n        self._test_conversion('rcp')\n\n    def test_floor(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.floor(input)\n\n        self._test_conversion('floor')\n\n    def test_ceil(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.ceil(input)\n\n        self._test_conversion('ceil')\n\n    def test_round(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.round(input)\n\n        self._test_conversion('round')\n\n    def test_sqr(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.square(input)\n\n        self._test_conversion('sqr')\n\n    def test_sqrt(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.sqrt(input)\n\n        self._test_conversion('sqrt')\n\n    def test_rsqrt(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.rsqrt(input)\n\n        self._test_conversion('rsqrt')\n\n    def test_not(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.math.logical_not(input)\n\n        self._test_conversion('not')\n\n    def test_cast(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.int32)\n        output = tf.cast(input, dtype=tf.float32)\n\n        
self._test_conversion('cast')\n\n    def test_cast_ints(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.int32)\n        cast = tf.cast(input, dtype=tf.int8)\n        output = tf.cast(cast, dtype=tf.int32)\n\n        self._test_conversion('cast_ints')\n\n    def test_cast_float_bool(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.cast(input, dtype=tf.bool)\n\n        self._test_conversion('cast_float_bool')\n\n    def test_select(self):\n        cond = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        left = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        right = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.where(cond, left, right)\n\n        self._test_conversion('select')\n\n    def test_clamp(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.clip_by_value(input, 0.2, 0.8)\n\n        self._test_conversion('clamp')\n\n    def test_batch_norm(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        mean = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        variance = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        scale = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        offset = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        outputs = tf.nn.batch_normalization(input, scale=scale, offset=offset, mean=mean, variance=variance,\n                                            variance_epsilon=1e-5)\n\n        self._test_conversion('batch_norm')\n\n    def test_fused_batch_norm(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        mean = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        variance = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        scale = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        offset = 
tf.constant(np.random.random((3,)), dtype=tf.float32)\n        outputs = tf.nn.fused_batch_norm(input, scale=scale, offset=offset, mean=mean, variance=variance,\n                                         is_training=False)\n\n        self._test_conversion('fused_batch_norm')\n\n    def test_bias_add(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        bias = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        output = tf.nn.bias_add(input, bias)\n\n        self._test_conversion('bias_add')\n\n    def test_softmax(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.softmax(input)\n\n        self._test_conversion('softmax')\n\n    def test_conv_bias_relu_pool(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        bias = tf.constant(np.random.random(size=16,), dtype=tf.float32)\n        mean = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        variance = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        scale = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        offset = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        filtered = tf.nn.conv2d(input, filter, strides=1, padding='SAME')\n        biased = tf.nn.bias_add(filtered, bias)\n        normed, _mean, _variance = tf.nn.fused_batch_norm(biased, scale, offset, mean, variance, is_training=False)\n        relu = tf.nn.relu(normed)\n        pooled = tf.nn.max_pool2d(relu, ksize=2, strides=2, padding='SAME')\n\n        self._test_conversion('conv_bias_relu_pool', epsilon=1e-4)\n\n    def test_conv_mul_add(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        bias = tf.constant(np.random.random(size=16, ), 
dtype=tf.float32)\n        scale = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        offset = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        filtered = tf.nn.conv2d(input, filter, strides=1, padding='SAME')\n        biased = tf.nn.bias_add(filtered, bias)\n        scaled = biased * scale\n        output = scaled + offset\n\n        self._test_conversion('conv_mul_add', epsilon=1e-4)\n\n    def test_mul_conv(self):\n        input = tf.placeholder(shape=(4, 32, 32, 8), dtype=tf.float32)\n        filter1 = tf.constant(np.random.random(size=(5, 5, 8, 16)), dtype=tf.float32)\n        filter2 = tf.constant(np.random.random(size=(5, 5, 8, 16)), dtype=tf.float32)\n        bias1 = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        bias2 = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        scale = tf.constant(np.random.random(size=8, ), dtype=tf.float32)\n        scaled = input * scale\n        filtered1 = tf.nn.conv2d(scaled, filter1, strides=1, padding='SAME')\n        filtered2 = tf.nn.conv2d(scaled, filter2, strides=1, padding='SAME')\n        biased1 = tf.nn.bias_add(filtered1, bias1)\n        biased2 = tf.nn.bias_add(filtered2, bias2)\n\n        self._test_conversion('mul_conv', epsilon=1e-4)\n\n    def test_matmul(self):\n        input1 = tf.placeholder(shape=(10, 100), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(100, 20), dtype=tf.float32)\n        output = tf.matmul(input1, input2)\n\n        self._test_conversion('matmul')\n\n    def test_matmul_trans(self):\n        input1 = tf.placeholder(shape=(10, 100), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(20, 100), dtype=tf.float32)\n        output = tf.matmul(input1, input2, transpose_b=True)\n\n        self._test_conversion('matmul-trans')\n\n    def test_pad(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pad(input, paddings=[[0, 0], [1, 2], [1, 2], [0, 0]])\n\n      
  self._test_conversion('pad')\n\n    def test_pad_mirror(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pad(input, paddings=[[0, 0], [1, 2], [1, 2], [0, 0]], mode='REFLECT')\n\n        self._test_conversion('pad_reflect')\n\n    def test_pad_symmetric(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pad(input, paddings=[[0, 0], [1, 2], [1, 2], [0, 0]], mode='SYMMETRIC')\n\n        self._test_conversion('pad_symmetric')\n\n    def test_slice(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.slice(input, begin=[0, 1, 1, 0], size=[4, 30, 30, 3])\n\n        self._test_conversion('slice')\n\n    def test_strided_slice(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = input[:, 1:-1, 1:-1, :]\n\n        self._test_conversion('strided_slice')\n\n    def test_strided_slice_shrink_axis(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = input[:, 1:-1, 1:-1, 1]\n        output = tf.expand_dims(output, axis=3)\n\n        self._test_conversion('strided_slice-shrink_axis')\n\n    def test_strided_slice_new_axis(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = input[:, 1:-1, 1:-1, tf.newaxis, :]\n        output = tf.squeeze(output, axis=3)\n\n        self._test_conversion('strided_slice-new_axis')\n\n    def test_strided_slice_flip(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = input[:, -2:0:-1, -2:0:-1, :]\n\n        self._test_conversion('strided_slice-flip')\n\n    def test_tile(self):\n        input = tf.placeholder(shape=(4, 1, 1, 3), dtype=tf.float32)\n        output = tf.tile(input, multiples=(1, 32, 32, 1))\n\n        self._test_conversion('tile')\n\n    def test_gather(self):\n        input = tf.placeholder(shape=(4, 32, 32, 16), 
dtype=tf.float32)\n        indices = tf.constant(np.random.random_integers(size=(24,), low=0, high=15), dtype=tf.int32)\n        output = tf.gather(input, indices, axis=3)\n\n        self._test_conversion('gather')\n\n    def test_upsample_nearest(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_nearest_neighbor(input, size=(64, 64))\n\n        self._test_conversion('upsample-nearest')\n\n    def test_downsample_nearest(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_nearest_neighbor(input, size=(16, 16))\n\n        self._test_conversion('downsample-nearest')\n\n    def test_downsample_area(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_area(input, size=(16, 16))\n\n        self._test_conversion('downsample-area')\n\n    def test_upsample_linear(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_bilinear(input, size=(64, 64))\n\n        self._test_conversion('upsample-linear')\n\n    def test_lrn(self):\n        input = tf.placeholder(shape=(4, 32, 32, 8), dtype=tf.float32)\n        output = tf.nn.local_response_normalization(input, depth_radius=2)\n\n        self._test_conversion('lrn')\n\n    def test_l2_normalize(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.l2_normalize(input, axis=-1)\n\n        self._test_conversion('l2_normalize')\n\n    def test_add_n(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.add_n([input, input, input])\n\n        self._test_conversion('add_n')\n\n\n@unittest.skipIf(TestEnv._network_folder is None or not os.path.isdir(TestEnv._network_folder),\n                 \"no network test folder provided\")\nclass NetworkTestCases(TestEnv):\n\n    def test_mobilenet_v1(self):\n      
  self._test_conversion_from_file(self._network_folder + 'mobilenet_v1.pb',\n                                        input_shapes={'input': (1, 224, 224, 3)})\n\n    def test_mobilenet_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'mobilenet_v2.pb',\n                                        input_shapes={'input': (1, 224, 224, 3)})\n\n    def test_inception_v3(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v3.pb',\n                                        input_shapes={'input': (1, 299, 299, 3)})\n\n    def test_inception_v4(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v4.pb',\n                                        input_shapes={'input': (1, 299, 299, 3)})\n\n    def test_inception_resnet_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_resnet_v2.pb',\n                                        input_shapes={'input': (1, 299, 299, 3)})\n\n    def test_squeezenet(self):\n        self._test_conversion_from_file(self._network_folder + 'squeezenet.pb',\n                                        input_shapes={'Placeholder': (1, 224, 224, 3)})\n\n    def test_nasnet(self):\n        self._test_conversion_from_file(self._network_folder + 'nasnet_mobile.pb',\n                                        input_shapes={'input': (1, 224, 224, 3)})\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "nnef_tools-pyproject/tests/conversion/onnx_test.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nnef_tools.io.nnef as nnef_io\nimport nnef_tools.io.onnx as onnx_io\nimport nnef_tools.conversion.onnx_to_nnef as onnx_to_nnef\nimport nnef_tools.conversion.nnef_to_onnx as nnef_to_onnx\nimport nnef_tools.optimization.nnef_optimizer as nnef_opt\nimport nnef_tools.optimization.onnx_optimizer as onnx_opt\nimport numpy as np\nimport unittest\nimport tempfile\nimport onnx\nimport sys\nimport os\nfrom onnx import helper, TensorProto\n\n\nUNITTEST_FOLDER = os.environ.get('UNITTEST_FOLDER')\n\n\nclass TestEnv(unittest.TestCase):\n\n    DEFAULT_OPSET_VERSION = 11\n    DEFAULT_IR_VERSION = 6\n\n    _type_to_numpy = {\n        \"tensor(float)\": np.float32,\n        \"tensor(double)\": np.float64,\n        \"tensor(int8)\": np.int8,\n        \"tensor(int16)\": np.int16,\n        \"tensor(int32)\": np.int32,\n        \"tensor(int64)\": np.int64,\n        \"tensor(uint8)\": np.uint8,\n        \"tensor(uint16)\": np.uint16,\n        \"tensor(uint32)\": np.uint32,\n        \"tensor(uint64)\": np.uint64,\n        \"tensor(bool)\": np.bool_,\n    }\n\n    _network_folder = os.path.join(UNITTEST_FOLDER, 'onnx/nets/') if UNITTEST_FOLDER else None\n    _output_folder = os.path.join(UNITTEST_FOLDER, 'onnx/ops/') if UNITTEST_FOLDER else None\n    _infer_shapes = False\n    _optimize = True\n\n    def setUp(self) -> None:\n        self._onnx_reader = 
onnx_io.Reader(simplify=True)\n        self._onnx_writer = onnx_io.Writer()\n        self._nnef_optimizer = nnef_opt.Optimizer()\n        self._onnx_optimizer = onnx_opt.Optimizer()\n        self._onnx_to_nnef_converter = onnx_to_nnef.Converter(infer_shapes=self._infer_shapes)\n        self._nnef_to_onnx_converter = nnef_to_onnx.Converter()\n        self._nnef_reader = nnef_io.Reader(custom_shapes=self._nnef_to_onnx_converter.defined_shapes(),\n                                           decomposed=self._nnef_to_onnx_converter.decomposed_operations())\n        self._nnef_writer = nnef_io.Writer(fragments=self._onnx_to_nnef_converter.defined_operations(),\n                                           fragment_dependencies=self._onnx_to_nnef_converter.defined_operation_dependencies())\n\n    def tearDown(self) -> None:\n        pass\n\n    def _convert_to_nnef(self, filename):\n        onnx_graph = self._onnx_reader(filename)\n        if self._optimize:\n            onnx_graph = self._onnx_optimizer(onnx_graph)\n        nnef_graph = self._onnx_to_nnef_converter(onnx_graph)\n        if self._optimize:\n            nnef_graph = self._nnef_optimizer(nnef_graph)\n        self._nnef_writer(nnef_graph, filename + '.nnef')\n\n    def _convert_from_nnef(self, filename):\n        nnef_graph = self._nnef_reader(filename)\n        onnx_graph = self._nnef_to_onnx_converter(nnef_graph)\n        self._onnx_writer(onnx_graph, filename + '.onnx')\n\n    @staticmethod\n    def _random_data(dtype, shape):\n        if dtype == bool:\n            return np.random.random(shape) > 0.5\n        else:\n            return np.random.random(shape).astype(dtype)\n\n    @staticmethod\n    def _exec_model(filename):\n        import onnxruntime\n        np.random.seed(0)\n\n        options = onnxruntime.SessionOptions()\n        options.inter_op_num_threads = 1\n        options.intra_op_num_threads = 1\n        options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL\n\n  
      session = onnxruntime.InferenceSession(filename, sess_options=options,\n                                               providers=['CPUExecutionProvider'])\n\n        inputs = {input.name: TestEnv._random_data(TestEnv._type_to_numpy[input.type], input.shape)\n                  for input in session.get_inputs()}\n        outputs = session.run([output.name for output in session.get_outputs()], inputs)\n\n        return outputs\n\n    @staticmethod\n    def _create_tensor(value_info, data):\n        name, shape, dtype = onnx_io.reader._get_value_info(value_info)\n        if data is None:\n            data = TestEnv._random_data(dtype, shape)\n        elif not isinstance(data, np.ndarray):\n            data = np.array(data)\n        return helper.make_tensor(name, helper.np_dtype_to_tensor_dtype(np.dtype(dtype)), shape, vals=data.flat)\n\n    @staticmethod\n    def _create_model(name, nodes, inputs, outputs, constants, values, opset_version, ir_version):\n        tensors = [TestEnv._create_tensor(item, values.get(item.name)) for item in constants]\n        graph_def = helper.make_graph(nodes, name, inputs, outputs, value_info=constants, initializer=tensors)\n        model_def = helper.make_model(graph_def, producer_name='nnef-to-onnx-test')\n        model_def.opset_import[0].version = opset_version\n        model_def.ir_version = ir_version\n        onnx.checker.check_model(model_def, full_check=True)\n        return model_def\n\n    @staticmethod\n    def _save_model(model_def, filename):\n        with open(filename, 'wb') as file:\n            file.write(model_def.SerializeToString())\n\n    def _test_conversion(self, name, nodes, inputs, outputs, constants=None, values=None,\n                         opset_version=DEFAULT_OPSET_VERSION, ir_version=DEFAULT_IR_VERSION, epsilon=1e-5):\n        filename = tempfile.mktemp() if self._output_folder is None else TestEnv._output_folder + name + '.onnx'\n        model_def = self._create_model('G', nodes, inputs, outputs, 
constants or [], values or {}, opset_version, ir_version)\n        self._save_model(model_def, filename)\n        self._test_conversion_from_file(filename, epsilon=epsilon)\n\n    def _test_conversion_from_file(self, filename, epsilon=1e-5):\n        self._convert_to_nnef(filename)\n        self._convert_from_nnef(filename + '.nnef')\n\n        original_outputs = self._exec_model(filename)\n        converted_outputs = self._exec_model(filename + '.nnef.onnx')\n\n        self.assertEqual(len(original_outputs), len(converted_outputs))\n        for original, converted in zip(original_outputs, converted_outputs):\n            if original.dtype == bool:\n                self.assertTrue(np.all(original == converted))\n            else:\n                diff = np.max(np.abs(original - converted))\n                self.assertLess(diff, epsilon)\n\n    def _test_unary(self, op_type, dtype=TensorProto.FLOAT,\n                    opset_version=DEFAULT_OPSET_VERSION, ir_version=DEFAULT_IR_VERSION):\n        input = helper.make_tensor_value_info('input', dtype, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', dtype, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type=op_type,\n            inputs=['input'],\n            outputs=['output'],\n        )\n\n        self._test_conversion(op_type.lower(), [node], [input], [output],\n                              opset_version=opset_version, ir_version=ir_version)\n\n    def _test_binary(self, op_type, input_dtype=TensorProto.FLOAT, output_dtype=TensorProto.FLOAT,\n                     opset_version=DEFAULT_OPSET_VERSION, ir_version=DEFAULT_IR_VERSION):\n        input1 = helper.make_tensor_value_info('input1', input_dtype, [1, 3, 32, 32])\n        input2 = helper.make_tensor_value_info('input2', input_dtype, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', output_dtype, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type=op_type,\n            
inputs=['input1', 'input2'],\n            outputs=['output'],\n        )\n\n        self._test_conversion(op_type.lower(), [node], [input1, input2], [output],\n                              opset_version=opset_version, ir_version=ir_version)\n\n    def _test_reduce(self, op_type, keepdims, dtype=TensorProto.FLOAT, p=None,\n                     opset_version=DEFAULT_OPSET_VERSION, ir_version=DEFAULT_IR_VERSION):\n        input = helper.make_tensor_value_info('input', dtype, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', dtype, [1, 1, 32, 32] if keepdims else [1, 32, 32])\n        node = helper.make_node(\n            op_type=op_type,\n            inputs=['input'],\n            outputs=['output'],\n            axes=[1],\n            keepdims=keepdims,\n        )\n\n        self._test_conversion(op_type.lower(), [node], [input], [output],\n                              opset_version=opset_version, ir_version=ir_version)\n\n\nclass TestCases(TestEnv):\n\n    def test_conv1d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [16])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('conv1d', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [16])\n        output = 
helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('conv2d', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d_nobias(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter'],\n            outputs=['output'],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('conv2d-nobias', [node], [input], [output], constants=[filter])\n\n    def test_conv2d_valid(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [16])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 28, 28])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='VALID',\n        )\n\n        self._test_conversion('conv2d-valid', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d_pads(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [16])\n        output = helper.make_tensor_value_info('output', 
TensorProto.FLOAT, [1, 16, 30, 30])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            pads=[1, 1, 1, 1],\n        )\n\n        self._test_conversion('conv2d-pads', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d_same_lower(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [16])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='SAME_LOWER',\n        )\n\n        self._test_conversion('conv2d-same-lower', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d_transpose(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [3])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='ConvTranspose',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('conv2d_transpose', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d_transpose_output_shape(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5])\n        bias = 
helper.make_tensor_value_info('bias', TensorProto.FLOAT, [3])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='ConvTranspose',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='SAME_UPPER',\n            output_shape=[1, 3, 32, 32],\n        )\n\n        self._test_conversion('conv2d_transpose-output_shape', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv2d_transpose_output_padding_strided(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 3, 3])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [3])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 64, 64])\n        node = helper.make_node(\n            op_type='ConvTranspose',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            pads=(1, 1, 1, 1),\n            output_padding=(1, 1),\n            strides=(2, 2),\n        )\n\n        self._test_conversion('conv2d_transpose-output_padding-strided', [node], [input], [output], constants=[filter, bias])\n\n    def test_conv3d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32, 32])\n        filter = helper.make_tensor_value_info('filter', TensorProto.FLOAT, [16, 3, 5, 5, 5])\n        bias = helper.make_tensor_value_info('bias', TensorProto.FLOAT, [16])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32, 32])\n        node = helper.make_node(\n            op_type='Conv',\n            inputs=['input', 'filter', 'bias'],\n            outputs=['output'],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('conv3d', [node], [input], [output], 
constants=[filter, bias])\n\n    def test_max_pool1d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32])\n        node = helper.make_node(\n            op_type='MaxPool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('max_pool1d', [node], [input], [output])\n\n    def test_max_pool2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='MaxPool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3, 3],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('max_pool2d', [node], [input], [output])\n\n    def test_max_pool2d_valid(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 30, 30])\n        node = helper.make_node(\n            op_type='MaxPool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3, 3],\n            auto_pad='VALID',\n        )\n\n        self._test_conversion('max_pool2d-valid', [node], [input], [output])\n\n    def test_max_pool2d_pads(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='MaxPool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3, 3],\n            pads=[1, 1, 1, 1],\n        )\n\n        
self._test_conversion('max_pool2d-pads', [node], [input], [output])\n\n    def test_max_pool3d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32, 32])\n        node = helper.make_node(\n            op_type='MaxPool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3, 3, 3],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('max_pool3d', [node], [input], [output])\n\n    def test_avg_pool2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='AveragePool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3, 3],\n            auto_pad='SAME_UPPER',\n        )\n\n        self._test_conversion('avg_pool2d', [node], [input], [output])\n\n    def test_global_avg_pool2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 1, 1])\n        node = helper.make_node(\n            op_type='GlobalAveragePool',\n            inputs=['input'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('global_avg_pool2d', [node], [input], [output])\n\n    def test_global_max_pool2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 1, 1])\n        node = helper.make_node(\n            op_type='GlobalMaxPool',\n            inputs=['input'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('global_max_pool2d', [node], [input], [output])\n\n  
  def test_lp_pool2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='LpPool',\n            inputs=['input'],\n            outputs=['output'],\n            kernel_shape=[3, 3],\n            auto_pad='SAME_UPPER',\n            p=2,\n        )\n\n        self._test_conversion('lp_pool2d', [node], [input], [output])\n\n    def test_global_lp_pool2d(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 1, 1])\n        node = helper.make_node(\n            op_type='GlobalLpPool',\n            inputs=['input'],\n            outputs=['output'],\n            p=2,\n        )\n\n        self._test_conversion('global_lp_pool2d', [node], [input], [output])\n\n    def test_batch_norm(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        mean = helper.make_tensor_value_info('mean', TensorProto.FLOAT, [3])\n        variance = helper.make_tensor_value_info('variance', TensorProto.FLOAT, [3])\n        offset = helper.make_tensor_value_info('offset', TensorProto.FLOAT, [3])\n        scale = helper.make_tensor_value_info('scale', TensorProto.FLOAT, [3])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='BatchNormalization',\n            inputs=['input', 'scale', 'offset', 'mean', 'variance'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('batch_norm', [node], [input], [output], [mean, variance, offset, scale])\n\n    def test_transpose(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = 
helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 32, 32, 3])\n        node = helper.make_node(\n            op_type='Transpose',\n            inputs=['input'],\n            outputs=['output'],\n            perm=[0, 2, 3, 1],\n        )\n\n        self._test_conversion('transpose', [node], [input], [output])\n\n    def test_reshape(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3 * 32 * 32])\n        shape = helper.make_tensor_value_info('shape', TensorProto.INT64, [2])\n        node = helper.make_node(\n            op_type='Reshape',\n            inputs=['input', 'shape'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('reshape', [node], [input, shape], [output], constants=[shape],\n                              values={'shape': [1, 3 * 32 * 32]})\n\n    def test_flatten(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3 * 32 * 32])\n        node = helper.make_node(\n            op_type='Flatten',\n            inputs=['input'],\n            outputs=['output'],\n            axis=1,\n        )\n\n        self._test_conversion('flatten', [node], [input], [output])\n\n    def test_squeeze(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 1, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 32, 32])\n        node = helper.make_node(\n            op_type='Squeeze',\n            inputs=['input'],\n            outputs=['output'],\n            axes=[1],\n        )\n\n        self._test_conversion('squeeze', [node], [input], [output])\n\n    def test_unsqueeze(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 32, 32])\n        output = 
helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 1, 32, 32])\n        node = helper.make_node(\n            op_type='Unsqueeze',\n            inputs=['input'],\n            outputs=['output'],\n            axes=[1],\n        )\n\n        self._test_conversion('unsqueeze', [node], [input], [output])\n\n    def test_matmul(self):\n        input1 = helper.make_tensor_value_info('input1', TensorProto.FLOAT, [10, 20])\n        input2 = helper.make_tensor_value_info('input2', TensorProto.FLOAT, [20, 30])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [10, 30])\n        node = helper.make_node(\n            op_type='MatMul',\n            inputs=['input1', 'input2'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('matmul', [node], [input1, input2], [output])\n\n    def test_gemm(self):\n        input1 = helper.make_tensor_value_info('input1', TensorProto.FLOAT, [10, 20])\n        input2 = helper.make_tensor_value_info('input2', TensorProto.FLOAT, [20, 30])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [10, 30])\n        node = helper.make_node(\n            op_type='Gemm',\n            inputs=['input1', 'input2'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('gemm', [node], [input1, input2], [output])\n\n    def test_linear(self):\n        input1 = helper.make_tensor_value_info('input1', TensorProto.FLOAT, [10, 20])\n        input2 = helper.make_tensor_value_info('input2', TensorProto.FLOAT, [30, 20])\n        input3 = helper.make_tensor_value_info('input3', TensorProto.FLOAT, [30])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [10, 30])\n        node = helper.make_node(\n            op_type='Gemm',\n            inputs=['input1', 'input2', 'input3'],\n            outputs=['output'],\n            transB=1,\n        )\n\n        self._test_conversion('linear', [node], [input1], [output], constants=[input2, 
input3])\n\n    def test_lrn(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='LRN',\n            inputs=['input'],\n            outputs=['output'],\n            size=5,\n        )\n\n        self._test_conversion('lrn', [node], [input], [output])\n\n    def test_concat(self):\n        input1 = helper.make_tensor_value_info('input1', TensorProto.FLOAT, [1, 3, 32, 32])\n        input2 = helper.make_tensor_value_info('input2', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 6, 32, 32])\n        node = helper.make_node(\n            op_type='Concat',\n            inputs=['input1', 'input2'],\n            outputs=['output'],\n            axis=1,\n        )\n\n        self._test_conversion('concat', [node], [input1, input2], [output])\n\n    def test_split(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 6, 32, 32])\n        output1 = helper.make_tensor_value_info('output1', TensorProto.FLOAT, [1, 3, 32, 32])\n        output2 = helper.make_tensor_value_info('output2', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='Split',\n            inputs=['input'],\n            outputs=['output1', 'output2'],\n            axis=1,\n            split=[3, 3],\n        )\n\n        self._test_conversion('split', [node], [input], [output1, output2])\n\n    def test_split_dynamic(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 6, 32, 32])\n        split = helper.make_tensor_value_info('split', TensorProto.INT64, [2])\n        output1 = helper.make_tensor_value_info('output1', TensorProto.FLOAT, [1, 3, 32, 32])\n        output2 = helper.make_tensor_value_info('output2', TensorProto.FLOAT, [1, 3, 32, 32])\n      
  node = helper.make_node(\n            op_type='Split',\n            inputs=['input', 'split'],\n            outputs=['output1', 'output2'],\n            axis=1,\n        )\n\n        self._test_conversion('split-dynamic', [node], [input, split], [output1, output2],\n                              constants=[split], values={'split': [3, 3]}, opset_version=13)\n\n    def test_sum(self):\n        input1 = helper.make_tensor_value_info('input1', TensorProto.FLOAT, [1, 3, 32, 32])\n        input2 = helper.make_tensor_value_info('input2', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='Sum',\n            inputs=['input1', 'input2'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('sum', [node], [input1, input2], [output])\n\n    def test_softmax(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='Softmax',\n            inputs=['input'],\n            outputs=['output'],\n            axis=1,\n        )\n\n        self._test_conversion('softmax', [node], [input], [output])\n\n    def test_leaky_relu(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='LeakyRelu',\n            inputs=['input'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('leaky_relu', [node], [input], [output])\n\n    def test_prelu(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        alpha = helper.make_tensor_value_info('alpha', TensorProto.FLOAT, [16, 1, 1])\n        output = 
helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='PRelu',\n            inputs=['input', 'alpha'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('prelu', [node], [input, alpha], [output])\n\n    def test_where(self):\n        cond = helper.make_tensor_value_info('cond', TensorProto.BOOL, [1, 1, 32, 32])\n        input1 = helper.make_tensor_value_info('input1', TensorProto.FLOAT, [1, 3, 32, 32])\n        input2 = helper.make_tensor_value_info('input2', TensorProto.FLOAT, [1, 3, 1, 1])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='Where',\n            inputs=['cond', 'input1', 'input2'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('where', [node], [cond, input1, input2], [output])\n\n    def test_clip(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        min = helper.make_tensor_value_info('min', TensorProto.FLOAT, [])\n        max = helper.make_tensor_value_info('max', TensorProto.FLOAT, [])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        node = helper.make_node(\n            op_type='Clip',\n            inputs=['input', 'min', 'max'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('clip', [node], [input, min, max], [output])\n\n    def test_argmin(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.INT64, [1, 1, 32, 32])\n        node = helper.make_node(\n            op_type='ArgMin',\n            inputs=['input'],\n            outputs=['output'],\n            axis=1,\n            keepdims=True,\n        )\n\n        self._test_conversion('argmin_reduce', [node], [input], 
[output])\n\n    def test_argmax(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.INT64, [1, 32, 32])\n        node = helper.make_node(\n            op_type='ArgMax',\n            inputs=['input'],\n            outputs=['output'],\n            axis=1,\n            keepdims=False,\n        )\n\n        self._test_conversion('argmax_reduce', [node], [input], [output])\n\n    def test_pad(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 34, 34])\n        pads = helper.make_tensor_value_info('pads', TensorProto.INT64, [8])\n        node = helper.make_node(\n            op_type='Pad',\n            inputs=['input', 'pads'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('pad', [node], [input], [output], constants=[pads],\n                              values={'pads': [0, 0, 1, 1, 0, 0, 1, 1]})\n\n    def test_tile(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 64, 64])\n        repeats = helper.make_tensor_value_info('repeats', TensorProto.INT64, [4])\n        node = helper.make_node(\n            op_type='Tile',\n            inputs=['input', 'repeats'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('tile', [node], [input], [output], constants=[repeats],\n                              values={'repeats': [1, 1, 2, 2]})\n\n    def test_expand(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [4, 3, 1, 1])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [4, 3, 32, 32])\n        repeats = helper.make_tensor_value_info('shape', TensorProto.INT64, [4])\n        node = 
helper.make_node(\n            op_type='Expand',\n            inputs=['input', 'shape'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('expand', [node], [input], [output], constants=[repeats],\n                              values={'shape': [4, 3, 32, 32]})\n\n    def test_slice(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 30, 30])\n        starts = helper.make_tensor_value_info('starts', TensorProto.INT64, [2])\n        ends = helper.make_tensor_value_info('ends', TensorProto.INT64, [2])\n        axes = helper.make_tensor_value_info('axes', TensorProto.INT64, [2])\n        node = helper.make_node(\n            op_type='Slice',\n            inputs=['input', 'starts', 'ends', 'axes'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('slice', [node], [input], [output], constants=[starts, ends, axes],\n                              values={'starts': [1, 1], 'ends': [-1, -1], 'axes': [2, 3]})\n\n    def test_strided_slice(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 32, 32])\n        starts = helper.make_tensor_value_info('starts', TensorProto.INT64, [2])\n        ends = helper.make_tensor_value_info('ends', TensorProto.INT64, [2])\n        axes = helper.make_tensor_value_info('axes', TensorProto.INT64, [2])\n        steps = helper.make_tensor_value_info('steps', TensorProto.INT64, [2])\n        node = helper.make_node(\n            op_type='Slice',\n            inputs=['input', 'starts', 'ends', 'axes', 'steps'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('strided_slice', [node], [input], [output], constants=[starts, ends, axes, steps],\n                              values={'starts': [-1, -1], 'ends': 
[-sys.maxsize, -sys.maxsize], 'axes': [2, 3], 'steps': [-1, -1]})\n\n    def test_flip(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 30, 30])\n        starts = helper.make_tensor_value_info('starts', TensorProto.INT64, [2])\n        ends = helper.make_tensor_value_info('ends', TensorProto.INT64, [2])\n        axes = helper.make_tensor_value_info('axes', TensorProto.INT64, [2])\n        steps = helper.make_tensor_value_info('steps', TensorProto.INT64, [2])\n        node = helper.make_node(\n            op_type='Slice',\n            inputs=['input', 'starts', 'ends', 'axes', 'steps'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('flip', [node], [input], [output], constants=[starts, ends, axes, steps],\n                              values={'starts': [-2, -2], 'ends': [0, 0], 'axes': [2, 3], 'steps': [-1, -1]})\n\n    def test_l1_normalization(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='LpNormalization',\n            inputs=['input'],\n            outputs=['output'],\n            axis=1,\n            p=1,\n        )\n\n        self._test_conversion('l1_normalization', [node], [input], [output])\n\n    def test_l2_normalization(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='LpNormalization',\n            inputs=['input'],\n            outputs=['output'],\n            axis=1,\n            p=2,\n        )\n\n        self._test_conversion('l2_normalization', [node], [input], [output])\n\n    def 
test_mean_variance_normalization(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='MeanVarianceNormalization',\n            inputs=['input'],\n            outputs=['output'],\n            axes=[0, 2, 3],\n        )\n\n        self._test_conversion('mean_variance_normalization', [node], [input], [output])\n\n    def test_instance_normalization(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        scale = helper.make_tensor_value_info('scale', TensorProto.FLOAT, [16])\n        offset = helper.make_tensor_value_info('offset', TensorProto.FLOAT, [16])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 16, 32, 32])\n        node = helper.make_node(\n            op_type='InstanceNormalization',\n            inputs=['input', 'scale', 'offset'],\n            outputs=['output'],\n        )\n\n        self._test_conversion('instance_normalization', [node], [input], [output], constants=[scale, offset])\n\n    def test_lp_reduce(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 1, 32, 32])\n        node = helper.make_node(\n            op_type='ReduceL1',\n            inputs=['input'],\n            outputs=['output'],\n            axes=[1],\n            keepdims=True,\n        )\n\n        self._test_conversion('lp_reduce', [node], [input], [output])\n\n    def test_nearest_upsample(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 64, 64])\n        node = helper.make_node(\n            op_type='Upsample',\n            inputs=['input'],\n     
       outputs=['output'],\n            scales=[1.0, 1.0, 2.0, 2.0],\n            mode='nearest',\n        )\n\n        self._test_conversion('nearest_upsample', [node], [input], [output], opset_version=8)\n\n    def test_linear_upsample(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 64, 64])\n        node = helper.make_node(\n            op_type='Upsample',\n            inputs=['input'],\n            outputs=['output'],\n            scales=[1.0, 1.0, 2.0, 2.0],\n            mode='linear',\n        )\n\n        self._test_conversion('linear_upsample', [node], [input], [output], opset_version=8)\n\n    def test_resize_nearest_upsample(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        scales = helper.make_tensor_value_info('scales', TensorProto.FLOAT, [4])\n        roi = helper.make_tensor_value_info('roi', TensorProto.FLOAT, [0])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 64, 64])\n        node = helper.make_node(\n            op_type='Resize',\n            inputs=['input', 'roi', 'scales'],\n            outputs=['output'],\n            mode='nearest',\n        )\n\n        self._test_conversion('resize_nearest_upsample', [node], [input], [output], constants=[scales, roi],\n                              values={'scales': [1.0, 1.0, 2.0, 2.0], 'roi': []})\n\n    def test_resize_linear_upsample(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        scales = helper.make_tensor_value_info('scales', TensorProto.FLOAT, [4])\n        roi = helper.make_tensor_value_info('roi', TensorProto.FLOAT, [0])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 64, 64])\n        node = helper.make_node(\n            op_type='Resize',\n            inputs=['input', 'roi', 
'scales'],\n            outputs=['output'],\n            mode='linear',\n        )\n\n        self._test_conversion('resize_linear_upsample', [node], [input], [output], constants=[scales, roi],\n                              values={'scales': [1.0, 1.0, 2.0, 2.0], 'roi': []})\n\n    def test_resize_nearest_downsample(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 3, 32, 32])\n        scales = helper.make_tensor_value_info('scales', TensorProto.FLOAT, [4])\n        roi = helper.make_tensor_value_info('roi', TensorProto.FLOAT, [0])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 3, 16, 16])\n        node = helper.make_node(\n            op_type='Resize',\n            inputs=['input', 'roi', 'scales'],\n            outputs=['output'],\n            mode='nearest',\n        )\n\n        self._test_conversion('resize_nearest_downsample', [node], [input], [output], constants=[scales, roi],\n                              values={'scales': [1.0, 1.0, 0.5, 0.5], 'roi': []})\n\n    def test_cast(self):\n        input = helper.make_tensor_value_info('input', TensorProto.INT32, [1, 4, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 4, 32, 32])\n        node = helper.make_node(\n            op_type='Cast',\n            inputs=['input'],\n            outputs=['output'],\n            to=TensorProto.FLOAT,\n        )\n\n        self._test_conversion('cast', [node], [input], [output])\n\n    def test_gather(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [1, 16, 32, 32])\n        indices = helper.make_tensor_value_info('indices', TensorProto.INT32, [24])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [1, 24, 32, 32])\n        node = helper.make_node(\n            op_type='Gather',\n            inputs=['input', 'indices'],\n            outputs=['output'],\n            axis=1,\n        )\n\n        
self._test_conversion('gather', [node], [input, indices], [output])\n\n    def test_lstm(self):\n        X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [5, 4, 32])\n        W = helper.make_tensor_value_info('W', TensorProto.FLOAT, [1, 256, 32])\n        R = helper.make_tensor_value_info('R', TensorProto.FLOAT, [1, 256, 64])\n        B = helper.make_tensor_value_info('B', TensorProto.FLOAT, [1, 512])\n        h0 = helper.make_tensor_value_info('h0', TensorProto.FLOAT, [1, 4, 64])\n        c0 = helper.make_tensor_value_info('c0', TensorProto.FLOAT, [1, 4, 64])\n        hn = helper.make_tensor_value_info('hn', TensorProto.FLOAT, [1, 4, 64])\n        cn = helper.make_tensor_value_info('cn', TensorProto.FLOAT, [1, 4, 64])\n        node = helper.make_node(\n            op_type='LSTM',\n            inputs=['X', 'W', 'R', 'B', '', 'h0', 'c0'],\n            outputs=['', 'hn', 'cn'],\n            hidden_size=64,\n            direction=\"forward\",\n        )\n\n        self._test_conversion('lstm', [node], [X, h0, c0], [hn, cn], constants=[W, R, B])\n\n    def test_depth_to_space(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [4, 64, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [4, 4, 128, 128])\n        node = helper.make_node(\n            op_type='DepthToSpace',\n            inputs=['input'],\n            outputs=['output'],\n            blocksize=4,\n        )\n\n        self._test_conversion('depth_to_space', [node], [input], [output])\n\n    def test_depth_to_space_CRD(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [4, 64, 32, 32])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [4, 4, 128, 128])\n        node = helper.make_node(\n            op_type='DepthToSpace',\n            inputs=['input'],\n            outputs=['output'],\n            blocksize=4,\n            mode=\"CRD\"\n        )\n\n        
self._test_conversion('depth_to_space_crd', [node], [input], [output])\n\n    def test_space_to_depth(self):\n        input = helper.make_tensor_value_info('input', TensorProto.FLOAT, [4, 4, 128, 128])\n        output = helper.make_tensor_value_info('output', TensorProto.FLOAT, [4, 64, 32, 32])\n        node = helper.make_node(\n            op_type='SpaceToDepth',\n            inputs=['input'],\n            outputs=['output'],\n            blocksize=4,\n        )\n\n        self._test_conversion('space_to_depth', [node], [input], [output])\n\n    def test_min_reduce(self):\n        self._test_reduce('ReduceMin', keepdims=False)\n\n    def test_max_reduce(self):\n        self._test_reduce('ReduceMax', keepdims=False)\n\n    def test_mean_reduce(self):\n        self._test_reduce('ReduceMean', keepdims=False)\n\n    def test_sum_reduce(self):\n        self._test_reduce('ReduceSum', keepdims=False)\n\n    def test_max_reduce_keepdims(self):\n        self._test_reduce('ReduceMax', keepdims=True)\n\n    def test_relu(self):\n        self._test_unary('Relu')\n\n    def test_sigmoid(self):\n        self._test_unary('Sigmoid')\n\n    def test_softplus(self):\n        self._test_unary('Softplus')\n\n    def test_selu(self):\n        self._test_unary('Selu')\n\n    def test_not(self):\n        self._test_unary('Not', dtype=TensorProto.BOOL)\n\n    def test_elu(self):\n        self._test_unary('Elu')\n\n    def test_erf(self):\n        self._test_unary('Erf')\n\n    def test_abs(self):\n        self._test_unary('Abs')\n\n    def test_sign(self):\n        self._test_unary('Sign')\n\n    def test_sin(self):\n        self._test_unary('Sin')\n\n    def test_cos(self):\n        self._test_unary('Cos')\n\n    def test_tan(self):\n        self._test_unary('Tan')\n\n    def test_asin(self):\n        self._test_unary('Asin')\n\n    def test_acos(self):\n        self._test_unary('Acos')\n\n    def test_atan(self):\n        
self._test_unary('Atan')\n\n    def test_sinh(self):\n        self._test_unary('Sinh')\n\n    def test_cosh(self):\n        self._test_unary('Cosh')\n\n    def test_tanh(self):\n        self._test_unary('Tanh')\n\n    def test_exp(self):\n        self._test_unary('Exp')\n\n    def test_log(self):\n        self._test_unary('Log')\n\n    def test_neg(self):\n        self._test_unary('Neg')\n\n    def test_sqrt(self):\n        self._test_unary('Sqrt')\n\n    def test_ceil(self):\n        self._test_unary('Ceil')\n\n    def test_floor(self):\n        self._test_unary('Floor')\n\n    def test_round(self):\n        self._test_unary('Round')\n\n    def test_add(self):\n        self._test_binary('Add')\n\n    def test_sub(self):\n        self._test_binary('Sub')\n\n    def test_mul(self):\n        self._test_binary('Mul')\n\n    def test_div(self):\n        self._test_binary('Div')\n\n    def test_pow(self):\n        self._test_binary('Pow')\n\n    def test_min(self):\n        self._test_binary('Min')\n\n    def test_max(self):\n        self._test_binary('Max')\n\n    def test_and(self):\n        self._test_binary('And', input_dtype=TensorProto.BOOL, output_dtype=TensorProto.BOOL)\n\n    def test_or(self):\n        self._test_binary('Or', input_dtype=TensorProto.BOOL, output_dtype=TensorProto.BOOL)\n\n    def test_equal(self):\n        self._test_binary('Equal', output_dtype=TensorProto.BOOL)\n\n    def test_less(self):\n        self._test_binary('Less', output_dtype=TensorProto.BOOL)\n\n    def test_greater(self):\n        self._test_binary('Greater', output_dtype=TensorProto.BOOL)\n\n\n@unittest.skipIf(TestEnv._network_folder is None or not os.path.isdir(TestEnv._network_folder),\n                 \"no network test folder provided\")\nclass NetworkTestCases(TestEnv):\n\n    def test_alexnet(self):\n        self._test_conversion_from_file(self._network_folder + 'alexnet.onnx')\n\n    def test_googlenet(self):\n        self._test_conversion_from_file(self._network_folder + 
'googlenet.onnx')\n\n    def test_inception_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v1.onnx')\n\n    def test_inception_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v2.onnx')\n\n    def test_mobilenet_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'mobilenet_v1.onnx', epsilon=1e-4)\n\n    def test_mobilenet_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'mobilenet_v2.onnx', epsilon=1e-4)\n\n    def test_resnet50_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'resnet50_v1.onnx')\n\n    def test_resnet50_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'resnet50_v2.onnx')\n\n    def test_squeezenet_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'squeezenet_v1.onnx')\n\n    def test_shufflenet_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'shufflenet_v1.onnx')\n\n    def test_shufflenet_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'shufflenet_v2.onnx', epsilon=1e-4)\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "nnef_tools-pyproject/tests/conversion/tflite_test.py",
    "content": "# Copyright (c) 2020 The Khronos Group Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport nnef_tools.io.nnef as nnef_io\nimport nnef_tools.io.tf.lite as lite_io\nimport nnef_tools.conversion.tflite_to_nnef as tflite_to_nnef\nimport nnef_tools.conversion.nnef_to_tflite as nnef_to_tflite\nimport nnef_tools.optimization.nnef_optimizer as nnef_opt\nimport nnef_tools.optimization.tflite_optimizer as tflite_opt\nimport unittest\nimport tempfile\nimport os\ntry:\n    import tensorflow.compat.v1 as tf\n    tf.disable_v2_behavior()\nexcept ImportError:\n    import tensorflow as tf\n\n\nUNITTEST_FOLDER = os.environ.get('UNITTEST_FOLDER')\n\n\nclass TestEnv(unittest.TestCase):\n\n    _network_folder = os.path.join(UNITTEST_FOLDER, 'tflite/nets/') if UNITTEST_FOLDER else None\n    _output_folder = os.path.join(UNITTEST_FOLDER, 'tflite/ops/') if UNITTEST_FOLDER else None\n    _mirror_unsupported = False\n    _io_transpose = True\n    _optimize = True\n\n    def setUp(self) -> None:\n        self._tflite_reader = lite_io.Reader()\n        self._tflite_writer = lite_io.Writer()\n        self._tflite_to_nnef_converter = tflite_to_nnef.Converter(io_transpose=self._io_transpose,\n                                                                  mirror_unsupported=self._mirror_unsupported)\n        self._nnef_to_tflite_converter = nnef_to_tflite.Converter(io_transpose=self._io_transpose,\n                                             
                     mirror_unsupported=self._mirror_unsupported)\n        self._nnef_reader = nnef_io.Reader(custom_shapes=self._nnef_to_tflite_converter.defined_shapes(),\n                                           decomposed=self._nnef_to_tflite_converter.decomposed_operations())\n        self._nnef_writer = nnef_io.Writer(fragments=self._tflite_to_nnef_converter.defined_operations())\n        self._nnef_optimizer = nnef_opt.Optimizer()\n        self._tflite_optimizer = tflite_opt.Optimizer()\n\n    def tearDown(self) -> None:\n        tf.reset_default_graph()\n\n    def _convert_to_nnef(self, filename):\n        tflite_graph = self._tflite_reader(filename)\n        if self._optimize:\n            tflite_graph = self._tflite_optimizer(tflite_graph)\n        nnef_graph = self._tflite_to_nnef_converter(tflite_graph)\n        if self._optimize:\n            nnef_graph = self._nnef_optimizer(nnef_graph)\n        self._nnef_writer(nnef_graph, filename + '.nnef')\n\n    def _convert_from_nnef(self, filename):\n        nnef_graph = self._nnef_reader(filename)\n        tflite_graph = self._nnef_to_tflite_converter(nnef_graph)\n        self._tflite_writer(tflite_graph, filename + '.tflite')\n\n    def _save_default_graph(self, inputs, outputs, filename):\n        with tf.Session() as sess:\n            sess.run(tf.global_variables_initializer())\n            converter = tf.lite.TFLiteConverter.from_session(sess, inputs, outputs)\n            tflite_model = converter.convert()\n            with open(filename, \"wb\") as file:\n                file.write(tflite_model)\n\n    @staticmethod\n    def _exec_model(model_path):\n        np.random.seed(0)\n\n        interpreter = tf.lite.Interpreter(model_path=model_path,\n                                          experimental_op_resolver_type=tf.lite.experimental.OpResolverType.BUILTIN_WITHOUT_DEFAULT_DELEGATES)\n        interpreter.allocate_tensors()\n\n        for input in interpreter.get_input_details():\n            shape = 
input['shape']\n            dtype = input['dtype']\n            data = TestEnv._random_data(dtype, shape)\n            interpreter.set_tensor(input['index'], data)\n\n        interpreter.invoke()\n\n        return [TestEnv._dequantize(interpreter.get_tensor(output['index']), *output['quantization'])\n                for output in interpreter.get_output_details()]\n\n    @staticmethod\n    def _dequantize(data, scale, zero_point):\n        return scale * (data - zero_point) if scale else data\n\n    @staticmethod\n    def _random_data(dtype, shape):\n        if dtype == bool or dtype == np.bool_:\n            return np.random.random(shape) > 0.5\n        elif np.issubdtype(dtype, np.integer):\n            return np.minimum(np.floor(np.random.random(shape) * 256).astype(dtype), 255)\n        else:\n            return np.random.random(shape).astype(dtype)\n\n    def _test_conversion(self, name, inputs, outputs, epsilon=1e-4):\n        filename = tempfile.mktemp() if self._output_folder is None else self._output_folder + name + '.tflite'\n        self._save_default_graph(inputs, outputs, filename)\n        self._test_conversion_from_file(filename, epsilon=epsilon)\n\n    def _test_conversion_from_file(self, filename, epsilon=1e-4):\n        self._convert_to_nnef(filename)\n        self._convert_from_nnef(filename + '.nnef')\n\n        original_outputs = self._exec_model(filename)\n        converted_outputs = self._exec_model(filename + '.nnef.tflite')\n\n        self.assertEqual(len(original_outputs), len(converted_outputs))\n        for original, converted in zip(original_outputs, converted_outputs):\n            if original.dtype == bool:\n                self.assertTrue(np.all(original == converted))\n            else:\n                diff = np.max(np.abs(original - converted))\n                self.assertLess(diff, epsilon)\n\n\nclass TestCases(TestEnv):\n\n    def test_conv1d(self):\n        input = tf.placeholder(shape=(4, 32, 3), dtype=tf.float32)\n        
filter = tf.constant(np.random.random(size=(5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv1d(input, filter, stride=1, padding='SAME')\n\n        self._test_conversion('conv1d', [input], [output])\n\n    def test_conv2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv2d(input, filter, strides=1, padding='SAME')\n\n        self._test_conversion('conv2d', [input], [output])\n\n    def test_conv2d_transpose(self):\n        input = tf.placeholder(shape=(4, 32, 32, 16), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        output = tf.nn.conv2d_transpose(input, filter, strides=1, padding='SAME', output_shape=(4, 32, 32, 3))\n\n        self._test_conversion('conv2d_transpose', [input], [output])\n\n    def test_depthwise_conv2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 2)), dtype=tf.float32)\n        output = tf.nn.depthwise_conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')\n\n        self._test_conversion('depthwise_conv2d', [input], [output])\n\n    def test_max_pool2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.max_pool2d(input, ksize=3, strides=1, padding='SAME')\n\n        self._test_conversion('max_pool2d', [input], [output])\n\n    def test_avg_pool2d(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.avg_pool2d(input, ksize=3, strides=1, padding='SAME')\n\n        self._test_conversion('avg_pool2d', [input], [output])\n\n    def test_reshape(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reshape(input, shape=(4, 32 * 32 * 3))\n\n        self._test_conversion('reshape', [input], 
[output])\n\n    def test_flatten(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reshape(input, shape=(4, -1))\n\n        self._test_conversion('flatten', [input], [output])\n\n    def test_squeeze(self):\n        input = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        output = tf.squeeze(input, axis=[3])\n\n        self._test_conversion('squeeze', [input], [output])\n\n    def test_unsqueeze(self):\n        input = tf.placeholder(shape=(4, 32, 32), dtype=tf.float32)\n        output = tf.expand_dims(input, axis=3)\n\n        self._test_conversion('unsqueeze', [input], [output])\n\n    def test_transpose(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.transpose(input, perm=(0, 3, 1, 2))\n\n        self._test_conversion('transpose', [input], [output])\n\n    def test_concat(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.concat([input1, input2], axis=3)\n\n        self._test_conversion('concat', [input1, input2], [output])\n\n    def test_split_sizes(self):\n        input = tf.placeholder(shape=(4, 32, 32, 6), dtype=tf.float32)\n        [output1, output2] = tf.split(input, axis=3, num_or_size_splits=[3, 3])\n\n        self._test_conversion('split-sizes', [input], [output1, output2])\n\n    def test_split_num(self):\n        input = tf.placeholder(shape=(4, 32, 32, 6), dtype=tf.float32)\n        [output1, output2] = tf.split(input, axis=3, num_or_size_splits=2)\n\n        self._test_conversion('split-num', [input], [output1, output2])\n\n    def test_pad(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pad(input, paddings=[[0, 0], [1, 2], [1, 2], [0, 0]])\n\n        self._test_conversion('pad', [input], [output])\n\n    def test_pad_reflect(self):\n        input = 
tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pad(input, paddings=[[0, 0], [1, 2], [1, 2], [0, 0]], mode='REFLECT')\n\n        self._test_conversion('pad_reflect', [input], [output])\n\n    def test_pad_symmetric(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pad(input, paddings=[[0, 0], [1, 2], [1, 2], [0, 0]], mode='SYMMETRIC')\n\n        self._test_conversion('pad_symmetric', [input], [output])\n\n    def test_tile(self):\n        input = tf.placeholder(shape=(4, 1, 1, 3), dtype=tf.float32)\n        output = tf.tile(input, multiples=(1, 32, 32, 1))\n\n        self._test_conversion('tile', [input], [output])\n\n    def test_slice(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.slice(input, begin=[0, 1, 1, 0], size=[4, 30, 30, 3])\n\n        self._test_conversion('slice', [input], [output])\n\n    def test_strided_slice(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = input[:, 1:-1, 1:-1, :]\n\n        self._test_conversion('strided_slice', [input], [output])\n\n    def test_strided_slice_flip(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = input[:, -2:0:-1, -2:0:-1, :]\n\n        self._test_conversion('strided_slice_flip', [input], [output])\n\n    def test_gather(self):\n        input = tf.placeholder(shape=(4, 32, 32, 16), dtype=tf.float32)\n        indices = tf.constant(np.random.randint(size=(24,), low=0, high=16), dtype=tf.int32)\n        output = tf.gather(input, indices, axis=3)\n\n        self._test_conversion('gather', [input], [output])\n\n    def test_relu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.relu(input)\n\n        self._test_conversion('relu', [input], [output])\n\n    def test_relu6(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), 
dtype=tf.float32)\n        output = tf.nn.relu6(input)\n\n        self._test_conversion('relu6', [input], [output])\n\n    def test_elu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.elu(input)\n\n        self._test_conversion('elu', [input], [output])\n\n    def test_sigmoid(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.sigmoid(input)\n\n        self._test_conversion('sigmoid', [input], [output])\n\n    def test_tanh(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.tanh(input)\n\n        self._test_conversion('tanh', [input], [output])\n\n    def test_sin(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.sin(input)\n\n        self._test_conversion('sin', [input], [output])\n\n    def test_cos(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.cos(input)\n\n        self._test_conversion('cos', [input], [output])\n\n    def test_log(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.log(input)\n\n        self._test_conversion('log', [input], [output])\n\n    def test_exp(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.exp(input)\n\n        self._test_conversion('exp', [input], [output])\n\n    def test_neg(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.negative(input)\n\n        self._test_conversion('neg', [input], [output])\n\n    def test_floor(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.floor(input)\n\n        self._test_conversion('floor', [input], [output])\n\n    def test_ceil(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        
output = tf.math.ceil(input)\n\n        self._test_conversion('ceil', [input], [output])\n\n    def test_round(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.round(input)\n\n        self._test_conversion('round', [input], [output])\n\n    def test_sqr(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.square(input)\n\n        self._test_conversion('sqr', [input], [output])\n\n    def test_sqrt(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.sqrt(input)\n\n        self._test_conversion('sqrt', [input], [output])\n\n    def test_rsqrt(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.rsqrt(input)\n\n        self._test_conversion('rsqrt', [input], [output])\n\n    def test_not(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.math.logical_not(input)\n\n        self._test_conversion('not', [input], [output])\n\n    def test_abs(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.abs(input)\n\n        self._test_conversion('abs', [input], [output])\n\n    def test_leaky_relu(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.leaky_relu(input, alpha=0.1)\n\n        self._test_conversion('leaky_relu', [input], [output])\n\n    def test_add(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.add(input1, input2)\n\n        self._test_conversion('add', [input1, input2], [output])\n\n    def test_sub(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.subtract(input1, 
input2)\n\n        self._test_conversion('sub', [input1, input2], [output])\n\n    def test_mul(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.multiply(input1, input2)\n\n        self._test_conversion('mul', [input1, input2], [output])\n\n    def test_div(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.divide(input1, input2)\n\n        self._test_conversion('div', [input1, input2], [output])\n\n    def test_pow(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.pow(input1, input2)\n\n        self._test_conversion('pow', [input1, input2], [output])\n\n    def test_min(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.minimum(input1, input2)\n\n        self._test_conversion('min', [input1, input2], [output])\n\n    def test_max(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.maximum(input1, input2)\n\n        self._test_conversion('max', [input1, input2], [output])\n\n    def test_and(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.logical_and(input1, input2)\n\n        self._test_conversion('and', [input1, input2], [output])\n\n    def test_or(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.logical_or(input1, 
input2)\n\n        self._test_conversion('or', [input1, input2], [output])\n\n    def test_lt(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.less(input1, input2)\n\n        self._test_conversion('lt', [input1, input2], [output])\n\n    def test_le(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.less_equal(input1, input2)\n\n        self._test_conversion('le', [input1, input2], [output])\n\n    def test_gt(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.greater(input1, input2)\n\n        self._test_conversion('gt', [input1, input2], [output])\n\n    def test_ge(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.greater_equal(input1, input2)\n\n        self._test_conversion('ge', [input1, input2], [output])\n\n    def test_eq(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.equal(input1, input2)\n\n        self._test_conversion('eq', [input1, input2], [output])\n\n    def test_ne(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.not_equal(input1, input2)\n\n        self._test_conversion('ne', [input1, input2], [output])\n\n    def test_min_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_min(input, axis=3, keepdims=True)\n\n        self._test_conversion('min_reduce', 
[input], [output])\n\n    def test_max_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_max(input, axis=3, keepdims=True)\n\n        self._test_conversion('max_reduce', [input], [output])\n\n    def test_mean_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_mean(input, axis=3, keepdims=True)\n\n        self._test_conversion('mean_reduce', [input], [output])\n\n    def test_sum_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.reduce_sum(input, axis=3, keepdims=True)\n\n        self._test_conversion('sum_reduce', [input], [output])\n\n    def test_any_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        output = tf.reduce_any(input, axis=3, keepdims=True)\n\n        self._test_conversion('any_reduce', [input], [output])\n\n    def test_argmin_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.argmin(input, axis=-1)\n\n        self._test_conversion('argmin_reduce', [input], [output])\n\n    def test_argmax_reduce(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.argmax(input, axis=-1)\n\n        self._test_conversion('argmax_reduce', [input], [output])\n\n    def test_stack(self):\n        input1 = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(4, 32, 32, 1), dtype=tf.float32)\n        input1 = tf.squeeze(input1, axis=3)\n        input2 = tf.squeeze(input2, axis=3)\n        output = tf.stack([input1, input2], axis=3)\n\n        self._test_conversion('stack', [input1, input2], [output])\n\n    def test_unstack(self):\n        input = tf.placeholder(shape=(4, 32, 32, 2), dtype=tf.float32)\n        [output1, output2] = tf.unstack(input, axis=3)\n        output1 = tf.expand_dims(output1, axis=3)\n  
      output2 = tf.expand_dims(output2, axis=3)\n\n        self._test_conversion('unstack', [input], [output1, output2])\n\n    def test_conv_bias_relu_pool(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        filter = tf.constant(np.random.random(size=(5, 5, 3, 16)), dtype=tf.float32)\n        bias = tf.constant(np.random.random(size=16,), dtype=tf.float32)\n        mean = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        variance = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        scale = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        offset = tf.constant(np.random.random(size=16, ), dtype=tf.float32)\n        filtered = tf.nn.conv2d(input, filter, strides=1, padding='SAME')\n        biased = tf.nn.bias_add(filtered, bias)\n        normed, _mean, _variance = tf.nn.fused_batch_norm(biased, scale, offset, mean, variance, is_training=False)\n        relu = tf.nn.relu(normed)\n        pooled = tf.nn.max_pool2d(relu, ksize=2, strides=2, padding='SAME')\n\n        self._test_conversion('conv_bias_relu_pool', [input], [pooled])\n\n    def test_softmax(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.softmax(input)\n\n        self._test_conversion('softmax', [input], [output])\n\n    def test_matmul(self):\n        input1 = tf.placeholder(shape=(10, 100), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(100, 20), dtype=tf.float32)\n        output = tf.matmul(input1, input2)\n\n        self._test_conversion('matmul', [input1, input2], [output])\n\n    def test_matmul_trans(self):\n        input1 = tf.placeholder(shape=(10, 100), dtype=tf.float32)\n        input2 = tf.placeholder(shape=(20, 100), dtype=tf.float32)\n        output = tf.matmul(input1, input2, transpose_b=True)\n\n        self._test_conversion('matmul-trans', [input1, input2], [output])\n\n    def test_l2_normalize(self):\n        input = 
tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.math.l2_normalize(input, axis=-1)\n\n        self._test_conversion('l2_normalize', [input], [output])\n\n    def test_lrn(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.nn.local_response_normalization(input, depth_radius=3)\n\n        self._test_conversion('lrn', [input], [output])\n\n    def test_upsample_nearest(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_nearest_neighbor(input, size=(64, 64))\n\n        self._test_conversion('upsample-nearest', [input], [output])\n\n    def test_downsample_nearest(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_nearest_neighbor(input, size=(16, 16))\n\n        self._test_conversion('downsample-nearest', [input], [output])\n\n    def test_upsample_linear(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.image.resize_bilinear(input, size=(64, 64))\n\n        self._test_conversion('upsample-linear', [input], [output])\n\n    def test_select(self):\n        cond = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.bool)\n        left = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        right = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.where(cond, left, right)\n\n        self._test_conversion('select', [cond, left, right], [output])\n\n    def test_batch_norm(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        mean = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        variance = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        scale = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        offset = tf.constant(np.random.random((3,)), dtype=tf.float32)\n        output = tf.nn.batch_normalization(input, scale=scale, 
offset=offset, mean=mean, variance=variance,\n                                           variance_epsilon=1e-5)\n\n        self._test_conversion('batch_norm', [input], [output])\n\n    def test_add_n(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.float32)\n        output = tf.add_n([input, input, input])\n\n        self._test_conversion('add_n', [input], [output])\n\n    def test_cast(self):\n        input = tf.placeholder(shape=(4, 32, 32, 3), dtype=tf.int32)\n        output = tf.cast(input, tf.float32)\n\n        self._test_conversion('cast', [input], [output])\n\n\n@unittest.skipIf(TestEnv._network_folder is None or not os.path.isdir(TestEnv._network_folder),\n                 \"no network test folder provided\")\nclass NetworkTestCases(TestEnv):\n\n    def test_inception_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v1.tflite')\n\n    def test_inception_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v2.tflite')\n\n    def test_inception_v3(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v3.tflite')\n\n    def test_inception_v4(self):\n        self._test_conversion_from_file(self._network_folder + 'inception_v4.tflite')\n\n    def test_mobilenet_v1(self):\n        self._test_conversion_from_file(self._network_folder + 'mobilenet_v1.tflite')\n\n    def test_mobilenet_v2(self):\n        self._test_conversion_from_file(self._network_folder + 'mobilenet_v2.tflite')\n"
  }
]