Full Code of AIWintermuteAI/aXeleRate

Repository: AIWintermuteAI/aXeleRate
Branch: master
Commit: 0012d683e1cb
Files: 135
Total size: 572.1 KB

Directory structure:
gitextract_o2hqtp1u/

├── .github/
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   └── feature_request.yml
│   └── workflows/
│       └── python-publish.yml
├── .gitignore
├── LICENSE
├── README.md
├── axelerate/
│   ├── __init__.py
│   ├── evaluate.py
│   ├── infer.py
│   ├── networks/
│   │   ├── __init__.py
│   │   ├── classifier/
│   │   │   ├── __init__.py
│   │   │   ├── batch_gen.py
│   │   │   ├── directory_iterator.py
│   │   │   ├── frontend_classifier.py
│   │   │   ├── iterator.py
│   │   │   └── utils.py
│   │   ├── common_utils/
│   │   │   ├── __init__.py
│   │   │   ├── augment.py
│   │   │   ├── callbacks.py
│   │   │   ├── convert.py
│   │   │   ├── feature.py
│   │   │   ├── fit.py
│   │   │   ├── install_edge_tpu_compiler.sh
│   │   │   ├── install_openvino.sh
│   │   │   └── mobilenet_sipeed/
│   │   │       ├── __init__.py
│   │   │       ├── imagenet_utils.py
│   │   │       └── mobilenet.py
│   │   ├── segnet/
│   │   │   ├── __init__.py
│   │   │   ├── data_utils/
│   │   │   │   ├── __init__.py
│   │   │   │   └── data_loader.py
│   │   │   ├── frontend_segnet.py
│   │   │   ├── metrics.py
│   │   │   ├── models/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── _pspnet_2.py
│   │   │   │   ├── all_models.py
│   │   │   │   ├── basic_models.py
│   │   │   │   ├── config.py
│   │   │   │   ├── fcn.py
│   │   │   │   ├── model.py
│   │   │   │   ├── model_utils.py
│   │   │   │   ├── pspnet.py
│   │   │   │   ├── segnet.py
│   │   │   │   └── unet.py
│   │   │   ├── predict.py
│   │   │   └── train.py
│   │   └── yolo/
│   │       ├── __init__.py
│   │       ├── backend/
│   │       │   ├── __init__.py
│   │       │   ├── batch_gen.py
│   │       │   ├── decoder.py
│   │       │   ├── loss.py
│   │       │   ├── network.py
│   │       │   └── utils/
│   │       │       ├── __init__.py
│   │       │       ├── annotation.py
│   │       │       ├── box.py
│   │       │       ├── custom.py
│   │       │       └── eval/
│   │       │           ├── __init__.py
│   │       │           ├── _box_match.py
│   │       │           └── fscore.py
│   │       └── frontend.py
│   └── train.py
├── configs/
│   ├── classifier.json
│   ├── detector.json
│   ├── dogs_classifier.json
│   ├── face_detector.json
│   ├── kangaroo_detector.json
│   ├── lego_detector.json
│   ├── pascal_20_detector.json
│   ├── pascal_20_detector_2.json
│   ├── pascal_20_segnet.json
│   ├── person_detector.json
│   ├── raccoon_detector.json
│   ├── santa_uno.json
│   └── segmentation.json
├── example_scripts/
│   ├── arm_nn/
│   │   ├── README.md
│   │   ├── box.py
│   │   ├── cv_utils.py
│   │   ├── network_executor.py
│   │   ├── run_video_file.py
│   │   ├── run_video_stream.py
│   │   └── yolov2.py
│   ├── edge_tpu/
│   │   └── detector/
│   │       ├── box.py
│   │       └── detector_video.py
│   ├── k210/
│   │   ├── classifier/
│   │   │   └── santa_uno.py
│   │   ├── detector/
│   │   │   ├── yolov2/
│   │   │   │   ├── person_detector_v4.py
│   │   │   │   ├── raccoon_detector.py
│   │   │   │   └── raccoon_detector_uart.py
│   │   │   └── yolov3/
│   │   │       └── raccoon_detector.py
│   │   └── segnet/
│   │       └── segnet-support-is-WIP-contributions-welcome
│   ├── oak/
│   │   └── yolov2/
│   │       ├── YOLO_best_mAP.json
│   │       ├── box.py
│   │       ├── yolo.py
│   │       └── yolo_alt.py
│   └── tensorflow_lite/
│       ├── classifier/
│       │   ├── base_camera.py
│       │   ├── camera_opencv.py
│       │   ├── camera_pi.py
│       │   ├── classifier_file.py
│       │   ├── classifier_stream.py
│       │   ├── cv_utils.py
│       │   └── templates/
│       │       └── index.html
│       ├── detector/
│       │   ├── base_camera.py
│       │   ├── camera_opencv.py
│       │   ├── camera_pi.py
│       │   ├── cv_utils.py
│       │   ├── detector_file.py
│       │   ├── detector_stream.py
│       │   └── templates/
│       │       └── index.html
│       └── segnet/
│           ├── base_camera.py
│           ├── camera_opencv.py
│           ├── camera_pi.py
│           ├── cv_utils.py
│           ├── segnet_file.py
│           ├── segnet_stream.py
│           └── templates/
│               └── index.html
├── resources/
│   ├── aXeleRate_face_detector.ipynb
│   ├── aXeleRate_human_segmentation.ipynb
│   ├── aXeleRate_mark_detector.ipynb
│   ├── aXeleRate_pascal20_detector.ipynb
│   ├── aXeleRate_person_detector.ipynb
│   └── aXeleRate_standford_dog_classifier.ipynb
├── sample_datasets/
│   └── detector/
│       ├── anns/
│       │   ├── 2007_000032.xml
│       │   └── 2007_000033.xml
│       └── anns_validation/
│           ├── 2007_000243.xml
│           ├── 2007_000250.xml
│           ├── 2007_000645.xml
│           ├── 2007_001595.xml
│           ├── 2007_001834.xml
│           ├── 2007_003131.xml
│           ├── 2007_003201.xml
│           ├── 2007_003593.xml
│           ├── 2007_004627.xml
│           └── 2007_005803.xml
├── setup.py
└── tests_training_and_inference.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/FUNDING.yml
================================================
# These are supported funding model platforms

github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: ['https://www.buymeacoffee.com/hardwareai']


================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.yml
================================================
name: Bug Report
description: File a bug report
title: "[Bug]: "
labels: [bug, triage]
assignees:
  - AIWintermuteAI
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report! Before you do, however, make sure you have done the following.

  - type: checkboxes
    id: googled
    attributes:
      label: Check if applicable
      options:
        - label: I used Google/Bing/other search engines to thoroughly research my question and DID NOT find any suitable answers
          required: true

        - label: Additionally I went through the issues in this repository/MaixPy/Tensorflow repositories and DID NOT find any suitable answers
          required: true

  - type: textarea
    id: what-happened
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is, with screenshots/models/videos if necessary.
      value: |
            **To Reproduce**
            Steps to reproduce the behavior:
            1. Go to '...'
            2. Click on '....'
            3. Scroll down to '....'
            4. See error
    validations:
      required: true

  - type: textarea
    id: what-expected
    attributes:
      label: Expected behavior
      description: A clear and concise description of what you expected to happen.
    validations:
      required: true

  - type: textarea
    id: platform
    attributes:
      label: Platform
      description: What platform are you running the code on?
      value: |
            - Device: [e.g. Raspberry Pi 4 or M5 StickV]
            - OS/firmware: [e.g. Raspbian OS 32bit kernel version ...]
            - Version/commit number of aXeleRate: [e.g. d1816f5]
    validations:
      required: true

  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell



================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
blank_issues_enabled: false
contact_links:
  - name: Google
    url: https://google.com/
    about: Please find answers to general questions, e.g. "what are anchors", "how is mAP calculated", "my cat coughing up fur can you help please" HERE.

================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.yml
================================================
name: Feature request
description: Suggest an idea for this project
title: "[Feature request]: "
labels: [enhancement, help wanted]

body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in improving aXeleRate! It is a personal project of mine, which I continually develop with the help of other volunteers.

  - type: checkboxes
    id: boxes
    attributes:
      label: Choose an option
      options:
        - label: I'd like to contribute to development by making a PR.
        - label: Alternatively I could consider a small beer donation to the developer as a token of my appreciation.

  - type: textarea
    id: feature
    attributes:
      label: Describe the desired feature
      description: A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]. Add screenshots/models/videos if necessary.
    validations:
      required: true

  - type: textarea
    id: what-expected
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of what you want to happen.
    validations:
      required: true

  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell

================================================
FILE: .github/workflows/python-publish.yml
================================================
# This workflow will upload a Python package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

name: Upload Python Package

on:
  release:
    types: [created]

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install setuptools wheel twine
    - name: Build and publish
      env:
        TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
        TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
      run: |
        python setup.py sdist bdist_wheel
        twine upload dist/*


================================================
FILE: .gitignore
================================================
__pycache__/
axelerate/networks/common_utils/ncc
axelerate/networks/common_utils/ncc_linux_x86_64.tar.xz
axelerate.egg-info/
build/
dist/
_configs/
projects/
logs/
*.tflite
*.h5
*.kmodel
*.txt
*.pyc
.vscode/


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2020 Dmitry Maslov

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
<h1 align="center">
  <img src="https://raw.githubusercontent.com/AIWintermuteAI/aXeleRate/master/resources/logo.png" alt="aXeleRate" width="350">
</h1>

<h3 align="center">Keras-based framework for AI on the Edge</h3>

<hr>
<p align="center">
aXeleRate streamlines training computer vision models and converting them to run on various platforms with hardware acceleration. It is optimized both for the workflow on a local machine (Ubuntu 18.04/20.04; other Linux distributions might work but are untested, and macOS/Windows are not supported) and for Google Colab. It currently supports converting trained models to the .kmodel (K210), .tflite (with full integer and dynamic range quantization) and .onnx formats. Experimental support: Google Edge TPU.
</p>

<table>
  <tr>
    <td>Stanford Dog Breed Classification Dataset NASNetMobile backend + Classifier <a href="https://colab.research.google.com/github/AIWintermuteAI/aXeleRate/blob/master/resources/aXeleRate_standford_dog_classifier.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a> </td>
     <td>PASCAL-VOC 2012 Object Detection Dataset MobileNet1_0 backend + YOLOv3 <a href="https://colab.research.google.com/github/AIWintermuteAI/aXeleRate/blob/master/resources/aXeleRate_pascal20_detector.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a> </td>
     <td>Human parsing Semantic Segmentation MobileNet5_0 backend + Segnet-Basic <a href="https://colab.research.google.com/github/AIWintermuteAI/aXeleRate/blob/master/resources/aXeleRate_human_segmentation.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a> </td>
  </tr>
  <tr>
    <td><img src="https://raw.githubusercontent.com/AIWintermuteAI/aXeleRate/master/resources/n02106550_7003.jpg" width=300 height=300></td>
    <td><img src="https://raw.githubusercontent.com/AIWintermuteAI/aXeleRate/master/resources/2009_001349.jpg" width=300 height=300></td>
    <td><img src="https://raw.githubusercontent.com/AIWintermuteAI/aXeleRate/master/resources/66.jpg" width=250 height=350></td>
  </tr>
 </table>

### aXeleRate

TL;DR

aXeleRate is meant for people who need to run computer vision applications (image classification, object detection, semantic segmentation) on edge devices with hardware acceleration. It offers an easy configuration process through a config file or a config dictionary (for Google Colab) and automatically converts the best model of a training session into the required file format. You put properly formatted data in, start the training script, and (hopefully) come back to find a converted model that is ready for deployment on your device!

### :wrench: Key Features
  - Supports multiple computer vision models: object detection (YOLOv3), image classification, semantic segmentation (SegNet-basic)
  - Different feature extractors to be used with the above network types: Full Yolo, Tiny Yolo, MobileNet, SqueezeNet, NASNetMobile, ResNet50, and DenseNet121.
  - Automatic conversion of the best model from the training session; aXeleRate downloads the suitable converter automatically.
  - Currently supports trained model conversion to: .kmodel (K210), .tflite (with full integer and dynamic range quantization), .tflite (Edge TPU) and .onnx (for later on-device optimization with TensorRT).
  - Model version control made easier. Keras model files and converted models are saved in the project folder, grouped by training date. Training history is saved as a .png graph in the model folder.
  - Two modes of operation: local, with the train.py script and a .json config file, and remote, tailored for Google Colab, with module import and a dictionary config (see the sketch below).
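
A minimal sketch of the second mode (the `setup_inference`/`setup_evaluation` signatures below come straight from `axelerate/infer.py` and `axelerate/evaluate.py`; the config and weight paths are placeholders, and `setup_training` takes an analogous config):

```python
import json
from axelerate import setup_inference, setup_evaluation

# Any .json file from configs/ can serve as the config dictionary:
with open('configs/classifier.json') as f:
    config = json.load(f)

weights = 'projects/classifier/best_weights.h5'  # a trained .h5 from an earlier run

# Annotate a folder of images with predictions (threshold only matters for detectors):
setup_inference(config, weights, threshold=0.5, folder='data/test_images')

# Print and save an evaluation report on the validation data listed in the config:
setup_evaluation(config, weights)
```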

### 💾 Install

Stable version:

```
pip install axelerate
```

Daily development version:

```
pip install git+https://github.com/AIWintermuteAI/aXeleRate
```

If installing in an Anaconda environment, make sure you have the necessary CUDA/cuDNN versions installed in that environment to use the GPU for training.
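
To check that TensorFlow actually sees a GPU in the active environment, a quick sanity check (plain TensorFlow 2.x API, nothing aXeleRate-specific):

```python
import tensorflow as tf

# An empty list means CUDA/cuDNN are not visible to TensorFlow
# and training will silently fall back to the CPU.
print(tf.config.list_physical_devices('GPU'))
```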

###  :question: F.A.Q.

Q: I trained a YOLO model, but it doesn't run on K210 with MaixPy firmware.

A: While there can be many reasons for that (memory constraints being one of them), the master branch of aXeleRate trains a YOLOv3 model, which shows better convergence, especially for datasets with smaller objects and non-square image sizes. There is a [PR for adding YOLOv3 support](https://github.com/sipeed/MaixPy/pull/451) to MaixPy (where you can also see my comparison of the two), but it is not merged at the moment. There are two options for training a model that can run on K210 with MaixPy:
- switch to the legacy branch of aXeleRate with ```git switch legacy-yolov2``` (if you are running the training locally, you will also need to re-install aXeleRate after that with ```pip install -e .```). The trained model should be compatible with current MaixPy.
- use [this pre-compiled firmware](https://drive.google.com/file/d/1q1BcWA8GiTQ_3Q9vYkSysRvGD62K2zh4/view?usp=sharing) with experimental support for YOLOv3 (examples included), or compile your own from [this PR's branch](https://github.com/sipeed/MaixPy/pull/451).

###  :computer: Project Story

aXeleRate started as a personal project of mine for training YOLOv2-based object detection networks and exporting them to .kmodel format to be run on the K210 chip. I also needed to train image classification networks, and sometimes I needed to run inference with Tensorflow Lite on a Raspberry Pi. As a result I had a whole bunch of disconnected scripts, each with somewhat overlapping functionality. So I decided to fix that and share the results with other people who might have similar workflows.

aXeleRate is still a work-in-progress project. I will be making changes from time to time, and if you find it useful and can contribute, PRs are very much welcome!

:ballot_box_with_check: TODO list:

The TODO list is moving to GitHub Projects!

### Acknowledgements

  - YOLOv2 Keras code by jeongjoonsup and Ngoc Anh Huynh: https://github.com/experiencor/keras-yolo2, https://github.com/penny4860/Yolo-digit-detector
  - SegNet Keras code by Divam Gupta: https://github.com/divamgupta/image-segmentation-keras
  - Big thank you to the creators/maintainers of Keras/TensorFlow

### Donation
Recently a few people wanted to make a small donation to aXeleRate because it helped them with their work. I was caught off guard by the question about donations :) I didn't have anything set up, so I quickly created a page where they could send money. If aXeleRate was useful in your work, you can donate a pizza or a beer to the project here: https://www.buymeacoffee.com/hardwareai . But times are tough now (and always), so if you don't have much to spare, don't feel guilty! aXeleRate is totally open source and free to use.


================================================
FILE: axelerate/__init__.py
================================================
from .train import setup_training
from .infer import setup_inference
from .evaluate import setup_evaluation


================================================
FILE: axelerate/evaluate.py
================================================
import os
import argparse
import json
import cv2
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

from tensorflow.keras import backend as K 

from axelerate.networks.yolo.frontend import create_yolo
from axelerate.networks.yolo.backend.utils.box import draw_boxes
from axelerate.networks.yolo.backend.utils.annotation import parse_annotation
from axelerate.networks.yolo.backend.utils.eval.fscore import count_true_positives, calc_score
from axelerate.networks.segnet.frontend_segnet import create_segnet
from axelerate.networks.classifier.frontend_classifier import get_labels, create_classifier

K.clear_session()

DEFAULT_THRESHOLD = 0.3

def save_report(config, report, report_file):
    with open(report_file, 'w') as outfile:
        outfile.write("REPORT\n")
        outfile.write(str(report))
        outfile.write("\nCONFIG\n")
        outfile.write(json.dumps(config, indent=4, sort_keys=False))

def show_image(filename):
    image = mpimg.imread(filename)
    plt.figure()
    plt.imshow(image)
    plt.show(block=False)
    plt.pause(1)
    plt.close()
    print(filename)

def prepare_image(img_path, network):
    orig_image = cv2.imread(img_path)
    input_image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB) 
    input_image = cv2.resize(input_image, (network.input_size[1], network.input_size[0]))
    input_image = network.norm(input_image)
    input_image = np.expand_dims(input_image, 0)
    return orig_image, input_image

def setup_evaluation(config, weights, threshold = None):
    try:
        matplotlib.use('TkAgg')
    except:
        pass
    #added for compatibility with < 0.5.7 versions
    try:
        input_size = config['model']['input_size'][:]
    except:
        input_size = [config['model']['input_size'],config['model']['input_size']]

    """make directory to save inference results """
    dirname = os.path.dirname(weights)

    if config['model']['type']=='Classifier':
        print('Classifier')  

        if config['model']['labels']:
            labels = config['model']['labels']
        else:
            labels = get_labels(config['train']['train_image_folder'])

        # 1.Construct the model 
        classifier = create_classifier(config['model']['architecture'],
                                       labels,
                                       input_size,
                                       config['model']['fully-connected'],
                                       config['model']['dropout'])

        # 2. Load the pretrained weights
        classifier.load_weights(weights)

        report, cm = classifier.evaluate(config['train']['valid_image_folder'], 16)
        save_report(config, report, os.path.join(dirname, 'report.txt'))

    if config['model']['type']=='SegNet':
        print('Segmentation')           
        # 1. Construct the model 
        segnet = create_segnet(config['model']['architecture'],
                                   input_size,
                                   config['model']['n_classes'])   
        # 2. Load the pretrained weights (if any) 
        segnet.load_weights(weights)
        report = segnet.evaluate(config['train']['valid_image_folder'], config['train']['valid_annot_folder'], 2)
        save_report(config, report, os.path.join(dirname, 'report.txt'))
        print(report)

    if config['model']['type']=='Detector':
        # 2. create yolo instance & predict
        yolo = create_yolo(config['model']['architecture'],
                           config['model']['labels'],
                           input_size,
                           config['model']['anchors'],
                           config['model']['obj_thresh'],
                           config['model']['iou_thresh'],
                           config['model']['coord_scale'],
                           config['model']['object_scale'],
                           config['model']['no_object_scale'],                           
                           config['weights']['backend'])    
        yolo.load_weights(weights)

        # 3. read image
        annotations = parse_annotation(config['train']['valid_annot_folder'],
                                       config['train']['valid_image_folder'],
                                       config['model']['labels'],
                                       is_only_detect=config['train']['is_only_detect'])

        threshold = threshold if threshold else config['model']['obj_thresh']

        dirname = os.path.join(os.path.dirname(weights), 'Inference_results') #temporary

        if os.path.isdir(dirname):
            print("Folder {} is already exists. Image files in directory might be overwritten".format(dirname))
        else:
            print("Folder {} is created.".format(dirname))
            os.makedirs(dirname)

        n_true_positives = 0
        n_truth = 0
        n_pred = 0
        inference_time = []

        for i in range(len(annotations)):
            img_path = annotations.fname(i)
            img_fname = os.path.basename(img_path)
            true_boxes = annotations.boxes(i)
            true_labels = annotations.code_labels(i)

            orig_image, input_image = prepare_image(img_path, yolo)
            height, width = orig_image.shape[:2]
            prediction_time, boxes, scores = yolo.predict(input_image, height, width, float(threshold))
            classes = np.argmax(scores, axis=1) if len(scores) > 0 else []
            inference_time.append(prediction_time)

            # 4. save detection result
            orig_image = draw_boxes(orig_image, boxes, scores, classes, config['model']['labels'])
            output_path = os.path.join(dirname, os.path.split(img_fname)[-1])
            cv2.imwrite(output_path, orig_image)
            print("{}-boxes are detected. {} saved.".format(len(boxes), output_path))
            n_true_positives += count_true_positives(boxes, true_boxes, classes, true_labels)
            n_truth += len(true_boxes)
            n_pred += len(boxes)

        report = calc_score(n_true_positives, n_truth, n_pred)
        save_report(config, report, os.path.join(dirname, 'report.txt'))
        print(report)

        if len(inference_time)>1:
            print("Average prediction time:{} ms".format(sum(inference_time[1:])/len(inference_time[1:])))

if __name__ == '__main__':
    # 1. extract arguments

    argparser = argparse.ArgumentParser(
        description='Run evaluation script')

    argparser.add_argument(
        '-c',
        '--config',
        help='path to configuration file')

    argparser.add_argument(
        '-t',
        '--threshold',
        help='detection threshold')

    argparser.add_argument(
        '-w',
        '--weights',
        help='trained weight files')

    args = argparser.parse_args()
    with open(args.config) as config_buffer:
        config = json.loads(config_buffer.read())
    setup_evaluation(config, args.weights, args.threshold)


================================================
FILE: axelerate/infer.py
================================================
import glob
import os
import argparse
import json
import cv2
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

from tensorflow.keras import backend as K

from axelerate.networks.yolo.frontend import create_yolo
from axelerate.networks.yolo.backend.utils.box import draw_boxes
from axelerate.networks.segnet.frontend_segnet import create_segnet
from axelerate.networks.segnet.predict import visualize_segmentation
from axelerate.networks.classifier.frontend_classifier import get_labels, create_classifier

K.clear_session()
    
def show_image(filename):
    image = mpimg.imread(filename)
    plt.figure()
    plt.imshow(image)
    plt.show(block=False)
    plt.pause(1)
    plt.close()
    print(filename)

def prepare_image(img_path, network, input_size):
    orig_image = cv2.imread(img_path)
    input_image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB) 
    input_image = cv2.resize(input_image, (input_size[1], input_size[0]))
    input_image = network.norm(input_image)
    input_image = np.expand_dims(input_image, 0)
    return orig_image, input_image

def find_imgs(folder):
    ext_list = ['/**/*.jpg', '/**/*.jpeg', '/**/*.png', '/**/*.JPG', '/**/*.JPEG']
    image_files_list = []
    image_search = lambda ext : glob.glob(folder + ext, recursive=True)
    for ext in ext_list: image_files_list.extend(image_search(ext))
    return image_files_list

def setup_inference(config, weights, threshold = None, folder = None):
    try:
        matplotlib.use('TkAgg')
    except:
        pass

    #added for compatibility with < 0.5.7 versions
    try:
        input_size = config['model']['input_size'][:]
    except:
        input_size = [config['model']['input_size'], config['model']['input_size']]

    """make directory to save inference results """
    dirname = os.path.join(os.path.dirname(weights), 'Inference_results')
    if os.path.isdir(dirname):
        print("Folder {} is already exists. Image files in directory might be overwritten".format(dirname))
    else:
        print("Folder {} is created.".format(dirname))
        os.makedirs(dirname)

    if config['model']['type']=='Classifier':
        print('Classifier')    
        if config['model']['labels']:
            labels = config['model']['labels']
        else:
            labels = get_labels(config['train']['train_image_folder'])
            
        # 1.Construct the model 
        classifier = create_classifier(config['model']['architecture'],
                                       labels,
                                       input_size,
                                       config['model']['fully-connected'],
                                       config['model']['dropout'])  
                                        
        # 2. Load the trained weights
        classifier.load_weights(weights)
        
        font = cv2.FONT_HERSHEY_SIMPLEX
        background_color = (70, 120, 70) # grayish green background for text
        text_color = (255, 255, 255)   # white text

        file_folder = folder if folder else config['train']['valid_image_folder']

        image_files_list = find_imgs(file_folder)
        
        inference_time = []
        for filepath in image_files_list:
            output_path = os.path.join(dirname, os.path.basename(filepath))
            orig_image, input_image = prepare_image(filepath, classifier, input_size)
            prediction_time, prob, img_class = classifier.predict(input_image)
            inference_time.append(prediction_time)
            
            text = "{}:{:.2f}".format(img_class, prob)

            # label shape and colorization
            size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)[0]
            left = 10
            top = 35 - size[1]
            right = left + size[0]
            bottom = top + size[1]

            # set up the colored rectangle background for text
            cv2.rectangle(orig_image, (left - 1, top - 5),(right + 1, bottom + 1), background_color, -1)
            # set up text
            cv2.putText(orig_image, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, text_color, 1)
            cv2.imwrite(output_path, orig_image)
            show_image(output_path)
            print("{}:{}".format(img_class, prob))

        if len(inference_time)>1:
            print("Average prediction time:{} ms".format(sum(inference_time[1:])/len(inference_time[1:])))

    if config['model']['type']=='SegNet':
        print('Segmentation')           
        # 1. Construct the model 
        segnet = create_segnet(config['model']['architecture'],
                                   input_size,
                                   config['model']['n_classes'])   
        # 2. Load the trained weights
        segnet.load_weights(weights)

        file_folder = folder if folder else config['train']['valid_image_folder']
        image_files_list = find_imgs(file_folder)

        inference_time = []
        for filepath in image_files_list:

            orig_image, input_image = prepare_image(filepath, segnet, input_size)
            out_fname = os.path.join(dirname, os.path.basename(filepath))
            prediction_time, output_array = segnet.predict(input_image)
            seg_img = visualize_segmentation(output_array, orig_image, segnet.n_classes, overlay_img = True)
            cv2.imwrite(out_fname, seg_img)
            show_image(out_fname)

    if config['model']['type']=='Detector':
        # 2. create yolo instance & predict
        yolo = create_yolo(config['model']['architecture'],
                           config['model']['labels'],
                           input_size,
                           config['model']['anchors'],
                           config['model']['obj_thresh'],
                           config['model']['iou_thresh'],
                           config['model']['coord_scale'],
                           config['model']['object_scale'],
                           config['model']['no_object_scale'],                           
                           config['weights']['backend'])                           
        yolo.load_weights(weights)
        
        file_folder = folder if folder else config['train']['valid_image_folder']
        threshold = threshold if threshold else config['model']['obj_thresh']
        image_files_list = find_imgs(file_folder)

        inference_time = []
        for filepath in image_files_list:

            img_fname = os.path.basename(filepath)
            orig_image, input_image = prepare_image(filepath, yolo, input_size)
            height, width = orig_image.shape[:2]

            prediction_time, boxes, scores = yolo.predict(input_image, height, width, float(threshold))
            classes = np.argmax(scores, axis=1) if len(scores) > 0 else []
            print(classes)
            inference_time.append(prediction_time)

            # 4. save detection result
            orig_image = draw_boxes(orig_image, boxes, scores, classes, config['model']['labels'])
            output_path = os.path.join(dirname, os.path.basename(filepath))
            cv2.imwrite(output_path, orig_image)
            print("{}-boxes are detected. {} saved.".format(len(boxes), output_path))
            show_image(output_path)

        if len(inference_time)>1:
            print("Average prediction time:{} ms".format(sum(inference_time[1:])/len(inference_time[1:])))

if __name__ == '__main__':
    # 1. extract arguments

    argparser = argparse.ArgumentParser(
        description='Run inference script')

    argparser.add_argument(
        '-c',
        '--config',
        help='path to configuration file')

    argparser.add_argument(
        '-t',
        '--threshold',
        help='detection threshold')

    argparser.add_argument(
        '-w',
        '--weights',
        help='trained weight files')

    argparser.add_argument(
        '-f',
        '--folder',
        help='folder with image files to run inference on')   

    args = argparser.parse_args()
    
    with open(args.config) as config_buffer:
        config = json.loads(config_buffer.read())
    setup_inference(config, args.weights, args.threshold, args.folder)


================================================
FILE: axelerate/networks/__init__.py
================================================


================================================
FILE: axelerate/networks/classifier/__init__.py
================================================


================================================
FILE: axelerate/networks/classifier/batch_gen.py
================================================
## Code heavily adapted from:
## *https://github.com/keras-team/keras-preprocessing/blob/master/keras_preprocessing/

"""Utilities for real-time data augmentation on image data. """

from .directory_iterator import DirectoryIterator
from axelerate.networks.common_utils.augment import process_image_classification
from tensorflow.keras.utils import Sequence
import cv2
import os

def create_datagen(img_folder, batch_size, input_size, project_folder, augment, norm):

    datagen = ImageDataAugmentor(preprocess_input = norm,
                                 process_image = process_image_classification,
                                 augment = augment)
    
    generator = datagen.flow_from_directory(img_folder,
                                        target_size = input_size,
                                        color_mode = 'rgb',
                                        batch_size = batch_size,
                                        class_mode = 'categorical', 
                                        shuffle = augment)
    if project_folder:             
        labels = (generator.class_indices)
        labels = dict((v,k) for k,v in labels.items())
        fo = open(os.path.join(project_folder,"labels.txt"), "w")
        for k,v in labels.items():
            print(v)
            fo.write(v+"\n")
        fo.close()
    return generator
    
    
class ImageDataAugmentor(Sequence):
    """Generate batches of tensor image data with real-time data augmentation.
    The data will be looped over (in batches).
    # Arguments
        preprocess_input: function that will be applied to each input.
            The function will run after the image is resized and augmented.
            The function should take one argument:
            one image, and should output a Numpy tensor with the same shape.
        augment: augmentations passed as albumentations or imgaug transformation 
            or sequence of transformations.     
        data_format: Image data format,
            either "channels_first" or "channels_last".
            "channels_last" mode means that the images should have shape
            `(samples, height, width, channels)`,
            "channels_first" mode means that the images should have shape
            `(samples, channels, height, width)`.
            It defaults to the `image_data_format` value found in your
            Keras config file at `~/.keras/keras.json`.
            If you never set it, then it will be "channels_last".
    """

    def __init__(self,
                 augment = False,
                 process_image=None,
                 preprocess_input=None,
                 data_format='channels_last'):
               
        self.augment = augment
        self.process_image = process_image
        self.preprocess_input = preprocess_input

        if data_format not in {'channels_last', 'channels_first'}:
            raise ValueError(
                '`data_format` should be `"channels_last"` '
                '(channel after row and column) or '
                '`"channels_first"` (channel before row and column). '
                'Received: %s' % data_format)
        self.data_format = data_format
        if data_format == 'channels_first':
            self.channel_axis = 1
            self.row_axis = 2
            self.col_axis = 3
        if data_format == 'channels_last':
            self.channel_axis = 3
            self.row_axis = 1
            self.col_axis = 2

    def flow_from_directory(self,
                            directory,
                            target_size=(256, 256),
                            color_mode='rgb',
                            classes=None,
                            class_mode='categorical',
                            batch_size=32,
                            shuffle=True,
                            seed=None,
                            save_to_dir=None,
                            save_prefix='',
                            save_format='png',
                            follow_links=False,
                            subset=None,
                            interpolation=cv2.INTER_NEAREST):
        """Takes the path to a directory & generates batches of augmented data.
        # Arguments
            directory: string, path to the target directory.
                It should contain one subdirectory per class.
                Any PNG, JPG, BMP, PPM or TIF images
                inside each of the subdirectories directory tree
                will be included in the generator.
                See [this script](
                https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d)
                for more details.
            target_size: Tuple of integers `(height, width)`,
                default: `(256, 256)`.
                The dimensions to which all images found will be resized.
            color_mode: One of "gray", "rgb", "rgba". Default: "rgb".
                Whether the images will be converted to
                have 1, 3, or 4 channels.
            classes: Optional list of class subdirectories
                (e.g. `['dogs', 'cats']`). Default: None.
                If not provided, the list of classes will be automatically
                inferred from the subdirectory names/structure
                under `directory`, where each subdirectory will
                be treated as a different class
                (and the order of the classes, which will map to the label
                indices, will be alphanumeric).
                The dictionary containing the mapping from class names to class
                indices can be obtained via the attribute `class_indices`.
            class_mode: One of "categorical", "binary", "sparse",
                "input", or None. Default: "categorical".
                Determines the type of label arrays that are returned:
                - "categorical" will be 2D one-hot encoded labels,
                - "binary" will be 1D binary labels,
                    "sparse" will be 1D integer labels,
                - "input" will be images identical
                    to input images (mainly used to work with autoencoders).
                - If None, no labels are returned
                  (the generator will only yield batches of image data,
                  which is useful to use with `model.predict_generator()`).
                  Please note that in case of class_mode None,
                  the data still needs to reside in a subdirectory
                  of `directory` for it to work correctly.
            batch_size: Size of the batches of data (default: 32).
            shuffle: Whether to shuffle the data (default: True)
                If set to False, sorts the data in alphanumeric order.
            seed: Optional random seed for shuffling and transformations.
            save_to_dir: None or str (default: None).
                This allows you to optionally specify
                a directory to which to save
                the augmented pictures being generated
                (useful for visualizing what you are doing).
            save_prefix: Str. Prefix to use for filenames of saved pictures
                (only relevant if `save_to_dir` is set).
            save_format: One of "png", "jpeg"
                (only relevant if `save_to_dir` is set). Default: "png".
            follow_links: Whether to follow symlinks inside
                class subdirectories (default: False).
            subset: Subset of data (`"training"` or `"validation"`) if
                `validation_split` is set in `ImageDataAugmentor`.
            interpolation: Interpolation method used to
                resample the image if the
                target size is different from that of the loaded image.
                Supported methods are `"nearest"`, `"bilinear"`,
                and `"bicubic"`.
                If PIL version 1.1.3 or newer is installed, `"lanczos"` is also
                supported. If PIL version 3.4.0 or newer is installed,
                `"box"` and `"hamming"` are also supported.
                By default, `"nearest"` is used.
        # Returns
            A `DirectoryIterator` yielding tuples of `(x, y)`
                where `x` is a numpy array containing a batch
                of images with shape `(batch_size, *target_size, channels)`
                and `y` is a numpy array of corresponding labels.
        """
        return DirectoryIterator(
            directory,
            self,
            target_size=target_size,
            color_mode=color_mode,
            classes=classes,
            class_mode=class_mode,
            data_format=self.data_format,
            batch_size=batch_size,
            shuffle=shuffle,
            seed=seed,
            save_to_dir=save_to_dir,
            save_prefix=save_prefix,
            save_format=save_format,
            follow_links=follow_links,
            subset=subset,
            interpolation=interpolation
        )
    

    def transform_image(self, image, desired_w, desired_h):
        """
        Transforms an image by first augmenting and then standardizing
        """
        image = self.process_image(image, desired_w, desired_h, self.augment)
        image = self.preprocess_input(image)
        
        return image
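

# Usage sketch (editor's addition, not part of the original file): build a
# training generator over a folder that contains one subdirectory per class.
# `preprocess_input` matches the MobileNet normalization used elsewhere in the
# classifier code; the paths below are placeholders.
if __name__ == '__main__':
    from tensorflow.keras.applications.mobilenet import preprocess_input

    train_gen = create_datagen(img_folder='data/train',          # one subfolder per class
                               batch_size=32,
                               input_size=(224, 224),            # (height, width)
                               project_folder='projects/demo',   # labels.txt is written here (folder must exist)
                               augment=True,                     # augmenting also enables shuffling
                               norm=preprocess_input)
    images, labels = train_gen[0]  # first batch: images and one-hot labels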


================================================
FILE: axelerate/networks/classifier/directory_iterator.py
================================================
"""Utilities for real-time data augmentation on image data.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import multiprocessing.pool
from six.moves import range

import numpy as np
import cv2

from .iterator import BatchFromFilesMixin, Iterator
from .utils import _list_valid_filenames_in_directory


class DirectoryIterator(BatchFromFilesMixin, Iterator):
    """Iterator capable of reading images from a directory on disk.

    # Arguments
        directory: string, path to the directory to read images from.
            Each subdirectory in this directory will be
            considered to contain images from one class,
            or alternatively you could specify class subdirectories
            via the `classes` argument.
        image_data_generator: Instance of `ImageDataAugmentor`
            to use for random transformations and normalization.
        target_size: tuple of integers, dimensions to resize input images to.
        color_mode: One of `"rgb"`, `"rgba"`, `"gray"`.
            Color mode to read images.
        classes: Optional list of strings, names of subdirectories
            containing images from each class (e.g. `["dogs", "cats"]`).
            It will be computed automatically if not set.
        class_mode: Mode for yielding the targets:
            `"binary"`: binary targets (if there are only two classes),
            `"categorical"`: categorical targets,
            `"sparse"`: integer targets,
            `"input"`: targets are images identical to input images (mainly
                used to work with autoencoders),
            `None`: no targets get yielded (only input images are yielded).
        batch_size: Integer, size of a batch.
        shuffle: Boolean, whether to shuffle the data between epochs.
            If set to False, sorts the data in alphanumeric order.
        seed: Random seed for data shuffling.
        data_format: String, one of `channels_first`, `channels_last`.
        save_to_dir: Optional directory where to save the pictures
            being yielded, in a viewable format. This is useful
            for visualizing the random transformations being
            applied, for debugging purposes.
        save_prefix: String prefix to use for saving sample
            images (if `save_to_dir` is set).
        save_format: Format to use for saving sample images
            (if `save_to_dir` is set).
        follow_links: boolean,follow symbolic links to subdirectories
        subset: Subset of data (`"training"` or `"validation"`) if
            validation_split is set in ImageDataAugmentor.
        interpolation: Interpolation method used to
            resample the image if the
            target size is different from that of the loaded image.
            Supported methods are `"cv2.INTER_NEAREST"`, `"cv2.INTER_LINEAR"`, `"cv2.INTER_AREA"`, `"cv2.INTER_CUBIC"`
            and `"cv2.INTER_LANCZOS4"`
            By default, `"cv2.INTER_NEAREST"` is used.
        dtype: Dtype to use for generated arrays.
    """
    allowed_class_modes = {'categorical', 'binary', 'sparse', 'input', None}

    def __init__(self,
                 directory,
                 image_data_generator,
                 target_size=(256, 256),
                 color_mode='rgb',
                 classes=None,
                 class_mode='categorical',
                 batch_size=32,
                 shuffle=True,
                 seed=None,
                 data_format='channels_last',
                 save_to_dir=None,
                 save_prefix='',
                 save_format='png',
                 follow_links=False,
                 subset=None,
                 interpolation=cv2.INTER_NEAREST,
                 dtype='float32'):
        super(DirectoryIterator, self).set_processing_attrs(image_data_generator,
                                                            target_size,
                                                            color_mode,
                                                            data_format,
                                                            save_to_dir,
                                                            save_prefix,
                                                            save_format,
                                                            subset,
                                                            interpolation)
        self.directory = directory
        self.classes = classes
        if class_mode not in self.allowed_class_modes:
            raise ValueError('Invalid class_mode: {}; expected one of: {}'
                             .format(class_mode, self.allowed_class_modes))
        self.class_mode = class_mode
        self.dtype = dtype
        # First, count the number of samples and classes.
        self.samples = 0

        if not classes:
            classes = []
            for subdir in sorted(os.listdir(directory)):
                if os.path.isdir(os.path.join(directory, subdir)):
                    classes.append(subdir)
        self.num_classes = len(classes)
        self.class_indices = dict(zip(classes, range(len(classes))))

        pool = multiprocessing.pool.ThreadPool()

        # Second, build an index of the images
        # in the different class subfolders.
        results = []
        self.filenames = []
        i = 0
        for dirpath in (os.path.join(directory, subdir) for subdir in classes):
            results.append(
                pool.apply_async(_list_valid_filenames_in_directory,
                                 (dirpath, self.white_list_formats, self.split,
                                  self.class_indices, follow_links)))
        classes_list = []
        for res in results:
            classes, filenames = res.get()
            classes_list.append(classes)
            self.filenames += filenames
        self.samples = len(self.filenames)
        self.classes = np.zeros((self.samples,), dtype='int32')
        for classes in classes_list:
            self.classes[i:i + len(classes)] = classes
            i += len(classes)

        print('Found %d images belonging to %d classes.' %
              (self.samples, self.num_classes))
        pool.close()
        pool.join()
        self._filepaths = [
            os.path.join(self.directory, fname) for fname in self.filenames
        ]
        super(DirectoryIterator, self).__init__(self.samples,
                                                batch_size,
                                                shuffle,
                                                seed)

    @property
    def filepaths(self):
        return self._filepaths

    @property
    def labels(self):
        return self.classes

    @property  # mixin needs this property to work
    def sample_weight(self):
        # no sample weights will be returned
        return None


================================================
FILE: axelerate/networks/classifier/frontend_classifier.py
================================================
import time
import os
import numpy as np
import matplotlib.pyplot as plt

from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay

from axelerate.networks.common_utils.feature import create_feature_extractor
from axelerate.networks.classifier.batch_gen import create_datagen
from axelerate.networks.common_utils.fit import train
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Dropout
from tensorflow.keras.applications.mobilenet import preprocess_input

def get_labels(directory):
    labels = sorted(os.listdir(directory))
    return labels

def create_classifier(architecture, labels, input_size, layers, dropout, weights = None, save_bottleneck = False):
    base_model = create_feature_extractor(architecture, input_size, weights)
    x = base_model.feature_extractor.outputs[0]
    x = GlobalAveragePooling2D()(x)
    if len(layers) != 0:
        for layer in layers[0:-1]:
            x = Dense(layer, activation = 'relu')(x) 
            x = Dropout(dropout)(x)
        x = Dense(layers[-1], activation = 'relu')(x)
    preds = Dense(len(labels), activation = 'softmax')(x)
    model = Model(inputs = base_model.feature_extractor.inputs[0],outputs = preds, name = 'classifier')

    bottleneck_layer = None
    if save_bottleneck:
        bottleneck_layer = base_model.feature_extractor.layers[-1].name
    network = Classifier(model, input_size, labels, base_model.normalize, bottleneck_layer)

    return network

class Classifier(object):
    def __init__(self,
                 network,
                 input_size,
                 labels,
                 norm,
                 bottleneck_layer):
        self.network = network       
        self.labels = labels
        self.input_size = input_size
        self.bottleneck_layer = bottleneck_layer
        self.norm = norm

    def load_weights(self, weight_path, by_name=False):
        if os.path.exists(weight_path):
            print("Loading pre-trained weights for the whole model: ", weight_path)
            self.network.load_weights(weight_path)
        else:
            print("Failed to load pre-trained weights for the whole model. It might be because you didn't specify any or the weight file cannot be found")

    def save_bottleneck(self, model_path, bottleneck_layer):
        bottleneck_weights_path = os.path.join(os.path.dirname(model_path),'bottleneck_weigths.h5')
        model = load_model(model_path)
        for layer in model.layers:
            if layer.name == bottleneck_layer:
                output = layer.output
        bottleneck_model = Model(model.input, output)
        bottleneck_model.save_weights(bottleneck_weights_path)

    def predict(self, img):

        start_time = time.time()
        Y_pred = np.squeeze(self.network(img, training = False))
        elapsed_ms = (time.time() - start_time)  * 1000

        y_pred = np.argmax(Y_pred)
        prob = Y_pred[y_pred]

        prediction = self.labels[y_pred]

        return elapsed_ms, prob, prediction

    def evaluate(self, img_folder, batch_size):

        self.generator = create_datagen(img_folder, batch_size, self.input_size, None, False, self.norm)

        # self.generator is a Keras Sequence, so it supplies batches itself;
        # the stray second positional argument would have been passed as batch_size
        Y_pred = self.network.predict(self.generator)

        y_pred = np.argmax(Y_pred, axis=1)

        print('Classification Report')
        report = classification_report(self.generator.classes, y_pred, target_names = self.labels)
        print(report)

        print('Confusion Matrix')
        cm = confusion_matrix(self.generator.classes, y_pred)
        disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels = self.labels)
        disp.plot(include_values=True, cmap='Blues', ax=None)
        plt.show()

        return report, cm

    def train(self,
              img_folder,
              nb_epoch,
              project_folder,
              batch_size = 8,
              augumentation = False,
              learning_rate = 1e-4, 
              train_times = 1,
              valid_times = 1,
              valid_img_folder = "",
              first_trainable_layer = None,
              metrics = "val_loss"):

        if metrics != "accuracy" and metrics != "loss":
            print("Unknown metric for Classifier, valid options are: accuracy or loss. Defaulting to loss")
            metrics = "loss"

        train_generator = create_datagen(img_folder, batch_size, self.input_size, project_folder, augumentation, self.norm)
        validation_generator = create_datagen(valid_img_folder, batch_size, self.input_size, project_folder, False, self.norm)

        model_layers, model_path = train(self.network,
                                        'categorical_crossentropy',
                                        train_generator,
                                        validation_generator,
                                        learning_rate, 
                                        nb_epoch, 
                                        project_folder,
                                        first_trainable_layer, 
                                        metric_name = metrics)

        if self.bottleneck_layer:
            self.save_bottleneck(model_path, self.bottleneck_layer)
        return model_layers, model_path
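
# --- Usage sketch (added for illustration; not part of the original file) ---
# A minimal, hedged example of building a classifier and classifying a single
# image. The weights/image paths and label list below are hypothetical;
# 'MobileNet7_5' and the [224, 224] input size are options accepted by
# create_feature_extractor().
if __name__ == '__main__':
    import cv2
    import numpy as np

    classifier = create_classifier('MobileNet7_5', ['cat', 'dog'], [224, 224],
                                   layers=[64], dropout=0.2)
    classifier.load_weights('project/classifier_best.h5')  # hypothetical path

    image = cv2.cvtColor(cv2.imread('sample.jpg'), cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    batch = np.expand_dims(classifier.norm(image), axis=0)

    elapsed_ms, prob, prediction = classifier.predict(batch)
    print('{} ({:.2f}) in {:.1f} ms'.format(prediction, prob, elapsed_ms))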

    


================================================
FILE: axelerate/networks/classifier/iterator.py
================================================
"""Utilities for real-time data augmentation on image data.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import threading
import numpy as np
from keras_preprocessing import get_keras_submodule
import matplotlib.pyplot as plt

try:
    IteratorType = get_keras_submodule('utils').Sequence
except ImportError:
    IteratorType = object

from .utils import (array_to_img,
                    img_to_array,
                    load_img)


class Iterator(IteratorType):
    """Base class for image data iterators.

    Every `Iterator` must implement the `_get_batches_of_transformed_samples`
    method.

    # Arguments
        n: Integer, total number of samples in the dataset to loop over.
        batch_size: Integer, size of a batch.
        shuffle: Boolean, whether to shuffle the data between epochs.
        seed: Random seeding for data shuffling.
    """
    white_list_formats = ('png', 'jpg', 'jpeg', 'bmp', 'ppm', 'tif', 'tiff')

    def __init__(self, n, batch_size, shuffle, seed):
        self.n = n
        self.batch_size = batch_size
        self.seed = seed
        self.shuffle = shuffle
        self.batch_index = 0
        self.total_batches_seen = 0
        self.lock = threading.Lock()
        self.index_array = None
        self.index_generator = self._flow_index()

    def _set_index_array(self):
        self.index_array = np.arange(self.n)
        if self.shuffle:
            self.index_array = np.random.permutation(self.n)

    def __getitem__(self, idx):
        if idx >= len(self):
            raise ValueError('Asked to retrieve element {idx}, '
                             'but the Sequence '
                             'has length {length}'.format(idx=idx,
                                                          length=len(self)))
        if self.seed is not None:
            np.random.seed(self.seed + self.total_batches_seen)
        self.total_batches_seen += 1
        if self.index_array is None:
            self._set_index_array()
        index_array = self.index_array[self.batch_size * idx:
                                       self.batch_size * (idx + 1)]
        return self._get_batches_of_transformed_samples(index_array)

    def __len__(self):
        return (self.n + self.batch_size - 1) // self.batch_size  # round up

    def on_epoch_end(self):
        self._set_index_array()

    def reset(self):
        self.batch_index = 0

    def _flow_index(self):
        # Ensure self.batch_index is 0.
        self.reset()
        while 1:
            if self.seed is not None:
                np.random.seed(self.seed + self.total_batches_seen)
            if self.batch_index == 0:
                self._set_index_array()

            if self.n == 0:
                # Avoiding modulo by zero error
                current_index = 0
            else:
                current_index = (self.batch_index * self.batch_size) % self.n
            if self.n > current_index + self.batch_size:
                self.batch_index += 1
            else:
                self.batch_index = 0
            self.total_batches_seen += 1
            yield self.index_array[current_index:
                                   current_index + self.batch_size]

    def __iter__(self):
        # Needed if we want to do something like:
        # for x, y in data_gen.flow(...):
        return self

    def __next__(self, *args, **kwargs):
        return self.next(*args, **kwargs)

    def next(self):
        """For python 2.x.

        # Returns
            The next batch.
        """
        with self.lock:
            index_array = next(self.index_generator)
        # The transformation of images is not under thread lock
        # so it can be done in parallel
        return self._get_batches_of_transformed_samples(index_array)

    def _get_batches_of_transformed_samples(self, index_array):
        """Gets a batch of transformed samples.

        # Arguments
            index_array: Array of sample indices to include in batch.

        # Returns
            A batch of transformed samples.
        """
        raise NotImplementedError


class BatchFromFilesMixin():
    """Adds methods related to getting batches from filenames

    It includes the logic to transform image files to batches.
    """

    def set_processing_attrs(self,
                             image_data_generator,
                             target_size,
                             color_mode,
                             data_format,
                             save_to_dir,
                             save_prefix,
                             save_format,
                             subset,
                             interpolation):
        """Sets attributes to use later for processing files into a batch.

        # Arguments
            image_data_generator: Instance of `ImageDataAugmentor`
                to use for random transformations and normalization.
            target_size: tuple of integers, dimensions to resize input images to.
            color_mode: One of `"rgb"`, `"rgba"`, `"gray"`.
                Color mode to read images.
            data_format: String, one of `channels_first`, `channels_last`.
            save_to_dir: Optional directory where to save the pictures
                being yielded, in a viewable format. This is useful
                for visualizing the random transformations being
                applied, for debugging purposes.
            save_prefix: String prefix to use for saving sample
                images (if `save_to_dir` is set).
            save_format: Format to use for saving sample images
                (if `save_to_dir` is set).
            subset: Subset of data (`"training"` or `"validation"`) if
                validation_split is set in ImageDataAugmentor.
            interpolation: Interpolation method used to
                resample the image if the
                target size is different from that of the loaded image.
                Supported methods are `"cv2.INTER_NEAREST"`, `"cv2.INTER_LINEAR"`, `"cv2.INTER_AREA"`, `"cv2.INTER_CUBIC"`
                and `"cv2.INTER_LANCZOS4"`
                By default, `"cv2.INTER_NEAREST"` is used.
        """
        self.image_data_generator = image_data_generator
        self.target_size = tuple(target_size)
        if color_mode not in {'rgb', 'rgba', 'gray'}:
            raise ValueError('Invalid color mode:', color_mode,
                             '; expected "rgb", "rgba", or "gray".')
        self.color_mode = color_mode
        self.data_format = data_format
        if self.color_mode == 'rgba':
            if self.data_format == 'channels_last':
                self.image_shape = self.target_size + (4,)
            else:
                self.image_shape = (4,) + self.target_size
        elif self.color_mode == 'rgb':
            if self.data_format == 'channels_last':
                self.image_shape = self.target_size + (3,)
            else:
                self.image_shape = (3,) + self.target_size
        else:
            if self.data_format == 'channels_last':
                self.image_shape = self.target_size + (1,)
            else:
                self.image_shape = (1,) + self.target_size
        self.save_to_dir = save_to_dir
        self.save_prefix = save_prefix
        self.save_format = save_format
        self.interpolation = interpolation
        if subset is not None:
            validation_split = self.image_data_generator._validation_split
            if subset == 'validation':
                split = (0, validation_split)
            elif subset == 'training':
                split = (validation_split, 1)
            else:
                raise ValueError(
                    'Invalid subset name: %s;'
                    'expected "training" or "validation"' % (subset,))
        else:
            split = None
        self.split = split
        self.subset = subset

    def _get_batch_of_samples(self, index_array, apply_standardization=True):
        """Gets a batch of transformed samples.

        # Arguments
            index_array: Array of sample indices to include in batch.

        # Returns
            A batch of transformed samples.
        """
        # self.filepaths is dynamic, so it is better to call it once outside the loop
        filepaths = self.filepaths

        # build batch of image data
        batch_x = np.array([load_img(filepaths[x], 
                                     color_mode=self.color_mode,
                                     target_size=self.target_size, 
                                     interpolation=self.interpolation) for x in index_array])    

        # apply the augmentations and custom transformations to the image data
        batch_x = np.array([self.image_data_generator.transform_image(x, self.target_size[0], self.target_size[1]) for x in batch_x])

        # transform to `channels_first` format if needed
        if self.data_format == "channels_first":
            batch_x = np.array([np.swapaxes(x,0,2) for x in batch_x])

        # optionally save augmented images to disk for debugging purposes
        if self.save_to_dir:
            for i, j in enumerate(index_array):
                img = array_to_img(batch_x[i], self.data_format, scale=True)
                fname = '{prefix}_{index}_{hash}.{format}'.format(
                    prefix=self.save_prefix,
                    index=j,
                    hash=np.random.randint(1e7),
                    format=self.save_format)
                img.save(os.path.join(self.save_to_dir, fname))

        # build batch of labels
        if self.class_mode == 'input':
            batch_y = batch_x.copy()
        elif self.class_mode in {'binary', 'sparse'}:
            batch_y = np.empty(len(batch_x), dtype=self.dtype)
            for i, n_observation in enumerate(index_array):
                batch_y[i] = self.classes[n_observation]
        elif self.class_mode == 'categorical':
            batch_y = np.zeros((len(batch_x), len(self.class_indices)),
                               dtype=self.dtype)
            for i, n_observation in enumerate(index_array):
                batch_y[i, self.classes[n_observation]] = 1.
        elif self.class_mode == 'multi_output':
            batch_y = [output[index_array] for output in self.labels]
        elif self.class_mode == 'raw':
            batch_y = self.labels[index_array]
        else:
            return batch_x
        if self.sample_weight is None:
            return batch_x, batch_y
        else:
            return batch_x, batch_y, self.sample_weight[index_array]

    def _get_batches_of_transformed_samples(self, index_array):
        return self._get_batch_of_samples(index_array)


    def show_batch(self, rows:int=5, apply_standardization:bool=False, **plt_kwargs):
        img_arr = np.random.choice(range(len(self.classes)), rows**2)
        lbls = None
        if self.class_mode is None:
            imgs = self._get_batch_of_samples(img_arr, apply_standardization=apply_standardization)
        else:
            imgs, _ = self._get_batch_of_samples(img_arr, apply_standardization=apply_standardization)
            lbls = np.array(self.labels)[img_arr]
        
            try:
                inv_class_indices = {v: k for k, v in self.class_indices.items()}
                lbls = [inv_class_indices.get(k) for k in lbls]
            except Exception:
                pass

        if self.data_format == "channels_first":
            imgs = np.array([np.swapaxes(img,0,2) for img in imgs])

        if 'figsize' not in plt_kwargs:
            plt_kwargs['figsize'] = (12,12)

        plt.close('all')
        plt.figure(**plt_kwargs)

        for idx, img in enumerate(imgs):
            plt.subplot(rows, rows, idx+1)
            plt.imshow(img.squeeze())
            if lbls is not None:
                plt.title(lbls[idx])
            plt.axis('off')
        
        plt.subplots_adjust(hspace=0.5, wspace=0.5)
        plt.show()
        
    @property
    def filepaths(self):
        """List of absolute paths to image files"""
        raise NotImplementedError(
            '`filepaths` property method has not been implemented in {}.'
            .format(type(self).__name__)
        )

    @property
    def labels(self):
        """Class labels of every observation"""
        raise NotImplementedError(
            '`labels` property method has not been implemented in {}.'
            .format(type(self).__name__)
        )

    @property
    def sample_weight(self):
        raise NotImplementedError(
            '`sample_weight` property method has not been implemented in {}.'
            .format(type(self).__name__)
        )
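
# --- Usage sketch (added for illustration; not part of the original file) ---
# Minimal concrete Iterator: the one method a subclass must provide is
# _get_batches_of_transformed_samples(). Random data is used purely to show
# the Sequence protocol (len() and indexing).
class _RandomIterator(Iterator):
    def _get_batches_of_transformed_samples(self, index_array):
        x = np.random.rand(len(index_array), 8, 8, 3)  # dummy images
        y = np.zeros(len(index_array))                 # dummy labels
        return x, y

if __name__ == '__main__':
    it = _RandomIterator(n=10, batch_size=4, shuffle=True, seed=0)
    print(len(it))           # 3 == ceil(10 / 4)
    x, y = it[0]             # Sequence-style indexing
    print(x.shape, y.shape)  # (4, 8, 8, 3) (4,)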


================================================
FILE: axelerate/networks/classifier/utils.py
================================================
"""Utilities for real-time data augmentation on image data.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import warnings

import numpy as np
import cv2
try:
    from PIL import ImageEnhance
    from PIL import Image as pil_image
except ImportError:
    pil_image = None
    ImageEnhance = None


if pil_image is not None:
    _PIL_INTERPOLATION_METHODS = {
        'nearest': pil_image.NEAREST,
        'bilinear': pil_image.BILINEAR,
        'bicubic': pil_image.BICUBIC,
    }
    # These methods were only introduced in version 3.4.0 (2016).
    if hasattr(pil_image, 'HAMMING'):
        _PIL_INTERPOLATION_METHODS['hamming'] = pil_image.HAMMING
    if hasattr(pil_image, 'BOX'):
        _PIL_INTERPOLATION_METHODS['box'] = pil_image.BOX
    # This method is new in version 1.1.3 (2013).
    if hasattr(pil_image, 'LANCZOS'):
        _PIL_INTERPOLATION_METHODS['lanczos'] = pil_image.LANCZOS


def validate_filename(filename, white_list_formats):
    """Check if a filename refers to a valid file.

    # Arguments
        filename: String, absolute path to a file
        white_list_formats: Set, allowed file extensions

    # Returns
        A boolean value indicating if the filename is valid or not
    """
    return (filename.lower().endswith(white_list_formats) and
            os.path.isfile(filename))


def save_img(path,
             x,
             data_format='channels_last',
             file_format=None,
             scale=True,
             **kwargs):
    """Saves an image stored as a Numpy array to a path or file object.

    # Arguments
        path: Path or file object.
        x: Numpy array.
        data_format: Image data format,
            either "channels_first" or "channels_last".
        file_format: Optional file format override. If omitted, the
            format to use is determined from the filename extension.
            If a file object was used instead of a filename, this
            parameter should always be used.
        scale: Whether to rescale image values to be within `[0, 255]`.
        **kwargs: Additional keyword arguments passed to `PIL.Image.save()`.
    """
    img = array_to_img(x, data_format=data_format, scale=scale)
    if img.mode == 'RGBA' and (file_format == 'jpg' or file_format == 'jpeg'):
        warnings.warn('The JPG format does not support '
                      'RGBA images, converting to RGB.')
        img = img.convert('RGB')
    img.save(path, format=file_format, **kwargs)


def load_img(fname, color_mode='rgb', target_size=None, interpolation=cv2.INTER_NEAREST):
    if color_mode == "rgb":
        img = cv2.imread(fname)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        
    elif color_mode == "rgba":
        img = cv2.imread(fname,-1) 
        if img.shape[-1]!=4: #Add alpha-channel if not RGBA
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGBA)
            
    elif color_mode == "gray":
        img = cv2.imread(fname, 0)

    else:
        img = cv2.imread(fname)
        
    if target_size is not None:
        # img.shape is (height, width, ...), while cv2.resize takes dsize as (width, height)
        width_height_tuple = (target_size[1], target_size[0])
        if img.shape[0:2] != tuple(target_size):
            img = cv2.resize(img, dsize=width_height_tuple, interpolation=interpolation)

    if color_mode == "gray":
        return img[..., np.newaxis] # Re-add the channel axis here, because `cv2.resize` drops it for grayscale images

    else:
        return img
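
# --- Usage sketch (added for illustration; not part of the original file) ---
# load_img returns a NumPy array rather than a PIL image. target_size is
# (height, width); the swap to cv2.resize's (width, height) dsize happens
# inside the function. 'photo.jpg' is a hypothetical file:
#
#   rgb = load_img('photo.jpg', color_mode='rgb', target_size=(224, 224))
#   # rgb.shape == (224, 224, 3)
#   gray = load_img('photo.jpg', color_mode='gray', target_size=(96, 128),
#                   interpolation=cv2.INTER_LINEAR)
#   # gray.shape == (96, 128, 1) -- channel axis re-added after resizing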


def list_pictures(directory, ext=('jpg', 'jpeg', 'bmp', 'png', 'ppm', 'tif',
                                  'tiff')):
    """Lists all pictures in a directory, including all subdirectories.

    # Arguments
        directory: string, absolute path to the directory
        ext: tuple of strings or single string, extensions of the pictures

    # Returns
        a list of paths
    """
    ext = tuple('.%s' % e for e in ((ext,) if isinstance(ext, str) else ext))
    return [os.path.join(root, f)
            for root, _, files in os.walk(directory) for f in files
            if f.lower().endswith(ext)]


def _iter_valid_files(directory, white_list_formats, follow_links):
    """Iterates on files with extension in `white_list_formats` contained in `directory`.

    # Arguments
        directory: Absolute path to the directory
            containing files to be counted
        white_list_formats: Set of strings containing allowed extensions for
            the files to be counted.
        follow_links: Boolean, follow symbolic links to subdirectories.

    # Yields
        Tuple of (root, filename) with extension in `white_list_formats`.
    """
    def _recursive_list(subpath):
        return sorted(os.walk(subpath, followlinks=follow_links),
                      key=lambda x: x[0])

    for root, _, files in _recursive_list(directory):
        for fname in sorted(files):
            if fname.lower().endswith('.tiff'):
                warnings.warn('Using ".tiff" files with multiple bands '
                              'will cause distortion. Please verify your output.')
            if fname.lower().endswith(white_list_formats):
                yield root, fname


def _list_valid_filenames_in_directory(directory, white_list_formats, split,
                                       class_indices, follow_links):
    """Lists paths of files in `subdir` with extensions in `white_list_formats`.

    # Arguments
        directory: absolute path to a directory containing the files to list.
            The directory name is used as class label
            and must be a key of `class_indices`.
        white_list_formats: set of strings containing allowed extensions for
            the files to be counted.
        split: tuple of floats (e.g. `(0.2, 0.6)`) to only take into
            account a certain fraction of files in each directory.
            E.g. `split=(0.6, 1.0)` would only account for the last 40 percent
            of images in each directory.
        class_indices: dictionary mapping a class name to its index.
        follow_links: boolean, follow symbolic links to subdirectories.

    # Returns
         classes: a list of class indices
         filenames: the path of valid files in `directory`, relative from
             `directory`'s parent (e.g., if `directory` is "dataset/class1",
            the filenames will be
            `["class1/file1.jpg", "class1/file2.jpg", ...]`).
    """
    dirname = os.path.basename(directory)
    if split:
        num_files = len(list(
            _iter_valid_files(directory, white_list_formats, follow_links)))
        start, stop = int(split[0] * num_files), int(split[1] * num_files)
        valid_files = list(
            _iter_valid_files(
                directory, white_list_formats, follow_links))[start: stop]
    else:
        valid_files = _iter_valid_files(
            directory, white_list_formats, follow_links)
    classes = []
    filenames = []
    for root, fname in valid_files:
        classes.append(class_indices[dirname])
        absolute_path = os.path.join(root, fname)
        relative_path = os.path.join(
            dirname, os.path.relpath(absolute_path, directory))
        filenames.append(relative_path)

    return classes, filenames
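
# --- Usage sketch (added for illustration; not part of the original file) ---
# For a hypothetical layout dataset/cats/{a.jpg, b.jpg}:
#
#   _list_valid_filenames_in_directory('dataset/cats', ('jpg',), None,
#                                      {'cats': 0}, follow_links=False)
#
# returns ([0, 0], ['cats/a.jpg', 'cats/b.jpg']): one class index per file,
# with paths relative to the parent of `directory`.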


def array_to_img(x, data_format='channels_last', scale=True, dtype='float32'):
    """Converts a 3D Numpy array to a PIL Image instance.

    # Arguments
        x: Input Numpy array.
        data_format: Image data format.
            either "channels_first" or "channels_last".
        scale: Whether to rescale image values
            to be within `[0, 255]`.
        dtype: Dtype to use.

    # Returns
        A PIL Image instance.

    # Raises
        ImportError: if PIL is not available.
        ValueError: if invalid `x` or `data_format` is passed.
    """
    if pil_image is None:
        raise ImportError('Could not import PIL.Image. '
                          'The use of `array_to_img` requires PIL.')
    x = np.asarray(x, dtype=dtype)
    if x.ndim != 3:
        raise ValueError('Expected image array to have rank 3 (single image). '
                         'Got array with shape: %s' % (x.shape,))

    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('Invalid data_format: %s' % data_format)

    # Original Numpy array x has format (height, width, channel)
    # or (channel, height, width)
    # but target PIL image has format (width, height, channel)
    if data_format == 'channels_first':
        x = x.transpose(1, 2, 0)
    if scale:
        x = x + max(-np.min(x), 0)
        x_max = np.max(x)
        if x_max != 0:
            x /= x_max
        x *= 255
    if x.shape[2] == 4:
        # RGBA
        return pil_image.fromarray(x.astype('uint8'), 'RGBA')
    elif x.shape[2] == 3:
        # RGB
        return pil_image.fromarray(x.astype('uint8'), 'RGB')
    elif x.shape[2] == 1:
        # grayscale
        return pil_image.fromarray(x[:, :, 0].astype('uint8'), 'L')
    else:
        raise ValueError('Unsupported channel number: %s' % (x.shape[2],))


def img_to_array(img, data_format='channels_last', dtype='float32'):
    """Converts a PIL Image instance to a Numpy array.

    # Arguments
        img: PIL Image instance.
        data_format: Image data format,
            either "channels_first" or "channels_last".
        dtype: Dtype to use for the returned array.

    # Returns
        A 3D Numpy array.

    # Raises
        ValueError: if invalid `img` or `data_format` is passed.
    """
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('Unknown data_format: %s' % data_format)
    # Numpy array x has format (height, width, channel)
    # or (channel, height, width)
    # but original PIL image has format (width, height, channel)
    x = np.asarray(img, dtype=dtype)
    if len(x.shape) == 3:
        if data_format == 'channels_first':
            x = x.transpose(2, 0, 1)
    elif len(x.shape) == 2:
        if data_format == 'channels_first':
            x = x.reshape((1, x.shape[0], x.shape[1]))
        else:
            x = x.reshape((x.shape[0], x.shape[1], 1))
    else:
        raise ValueError('Unsupported image shape: %s' % (x.shape,))
    return x
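
# --- Usage sketch (added for illustration; not part of the original file) ---
# Round trip between the two converters: with scale=True, array_to_img first
# rescales values into [0, 255] before building the uint8 PIL image.
if __name__ == '__main__':
    x = np.random.rand(32, 32, 3)        # float image in [0, 1]
    img = array_to_img(x, scale=True)    # PIL 'RGB' image
    x2 = img_to_array(img)               # back to a (32, 32, 3) float32 array
    print(img.size, x2.shape, x2.dtype)  # (32, 32) (32, 32, 3) float32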


================================================
FILE: axelerate/networks/common_utils/__init__.py
================================================


================================================
FILE: axelerate/networks/common_utils/augment.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
np.random.seed(1337)
import imgaug as ia
from imgaug import augmenters as iaa
from imgaug.augmentables.segmaps import SegmentationMapsOnImage
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage
import cv2
import os
import glob
import random

class ImgAugment(object):
    def __init__(self, w, h, jitter):
        """
        # Args
            desired_w : int
            desired_h : int
            jitter : bool
        """
        self._jitter = jitter
        self._w = w
        self._h = h

    def imread(self, img_file, boxes, labels):
        """
        # Args
            img_file : str
            boxes : array, shape of (N, 4)
        
        # Returns
            image : 3d-array, shape of (h, w, 3)
            boxes_ : array, same shape as boxes
                jittered & resized bounding boxes
        """
        # 1. read image file
        try:
            image = cv2.imread(img_file)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        except:
            print("This image has an annotation file, but cannot be open. Check the integrity of your dataset.", img_file)
            raise
        
        boxes_ = np.copy(boxes)
        labels_ = np.copy(labels)
  
        # 2. resize and augment image     
        image, boxes_, labels_ = process_image_detection(image, boxes_, labels_, self._w, self._h, self._jitter) 

        return image, boxes_, labels_


def _to_bbs(boxes, labels, shape):
    new_boxes = []
    for i in range(len(boxes)):
        x1,y1,x2,y2 = boxes[i]
        new_box = BoundingBox(x1,y1,x2,y2, labels[i])
        new_boxes.append(new_box)
    bbs = BoundingBoxesOnImage(new_boxes, shape)
    return bbs

def _to_array(bbs):
    new_boxes = []
    new_labels = []
    for bb in bbs.bounding_boxes:
        x1 = int(bb.x1)
        x2 = int(bb.x2)
        y1 = int(bb.y1)
        y2 = int(bb.y2)
        label = bb.label
        new_boxes.append([x1,y1,x2,y2])
        new_labels.append(label)
    return new_boxes, new_labels


def process_image_detection(image, boxes, labels, desired_w, desired_h, augment):
    
    # resize the image to standard size
    if (desired_w and desired_h) or augment:
        bbs = _to_bbs(boxes, labels, image.shape)

        if (desired_w and desired_h):
            # Rescale image and bounding boxes
            image = ia.imresize_single_image(image, (desired_w, desired_h))
            bbs = bbs.on(image)

        if augment:
            aug_pipe = _create_augment_pipeline()
            image, bbs = aug_pipe(image=image, bounding_boxes=bbs)
            bbs = bbs.remove_out_of_image().clip_out_of_image()

        new_boxes, new_labels = _to_array(bbs)
        #if len(new_boxes) != len(boxes):
        #    print(new_boxes)
        #    print(boxes)
        #    print("_________________")

        return image, np.array(new_boxes), new_labels
    else:
        return image, np.array(boxes), labels

def process_image_classification(image, desired_w, desired_h, augment):
    
    # resize the image to standard size
    if (desired_w and desired_h) or augment:

        if (desired_w and desired_h):
            # Rescale image
            image = ia.imresize_single_image(image, (desired_w, desired_h))

        if augment:
            aug_pipe = _create_augment_pipeline()
            image = aug_pipe(image=image)
        
    return image

def process_image_segmentation(image, segmap, input_w, input_h, output_w, output_h, augment):
    # resize the image to standard size
    if (input_w and input_h) or augment:
        segmap = SegmentationMapsOnImage(segmap, shape=image.shape)

        if (input_w and input_h):
            # Rescale image and segmaps
            image = ia.imresize_single_image(image, (input_w, input_h))
            segmap = segmap.resize((output_w, output_h), interpolation="nearest")

        if augment:
            aug_pipe = _create_augment_pipeline()
            image, segmap = aug_pipe(image=image, segmentation_maps=segmap)

        # convert back to a plain array only if it was wrapped above
        segmap = segmap.get_arr()

    return image, segmap


def _create_augment_pipeline():

    sometimes = lambda aug: iaa.Sometimes(0.1, aug)

    aug_pipe = iaa.Sequential(
        [
            iaa.Fliplr(0.5),
            iaa.Flipud(0.2),
            iaa.Affine(translate_percent={"x": (-0.1, 0.1), "y": (-0.1, 0.1)}),
            iaa.OneOf([iaa.Affine(scale=(0.8, 1.2)),
                       iaa.Affine(rotate=(-10, 10)),
                       iaa.Affine(shear=(-10, 10))]),
            sometimes(iaa.OneOf([
                iaa.GaussianBlur((0, 3.0)),
                iaa.AverageBlur(k=(2, 7)),
                iaa.MedianBlur(k=(3, 11)),
            ])),
            sometimes(iaa.Sharpen(alpha=(0, 1.0), lightness=(0.75, 1.5))),
            sometimes(iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05 * 255), per_channel=0.5)),
            sometimes(iaa.OneOf([
                iaa.Dropout((0.01, 0.1), per_channel=0.5),
                iaa.CoarseDropout((0.03, 0.15), size_percent=(0.02, 0.05), per_channel=0.2),
            ])),
            sometimes(iaa.Add((-10, 10), per_channel=0.5)),
            sometimes(iaa.Multiply((0.5, 1.5), per_channel=0.5)),
            sometimes(iaa.LinearContrast((0.5, 2.0), per_channel=0.5))
        ],
        random_order=True
    )

    return aug_pipe
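
# --- Usage sketch (added for illustration; not part of the original file) ---
# The pipeline is stochastic: the flip/affine augmenters apply with their
# stated probabilities, while each `sometimes(...)` augmenter fires ~10% of
# the time. A quick way to exercise it on a dummy image:
#
#   aug_pipe = _create_augment_pipeline()
#   image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
#   augmented = aug_pipe(image=image)  # same shape, randomly transformed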


def visualize_detection_dataset(img_folder, ann_folder, num_imgs = None, img_size=None, augment=None):
    import matplotlib.pyplot as plt
    import matplotlib
    from axelerate.networks.yolo.backend.utils.annotation import PascalVocXmlParser
    try:
        matplotlib.use('TkAgg')
    except:
        pass

    parser = PascalVocXmlParser()
    aug = ImgAugment(img_size, img_size, jitter=augment)
    for ann in os.listdir(ann_folder)[:num_imgs]:
        annotation_file = os.path.join(ann_folder, ann)
        fname = parser.get_fname(annotation_file)
        labels = parser.get_labels(annotation_file)
        boxes = parser.get_boxes(annotation_file)
        img_file =  os.path.join(img_folder, fname)
        img, boxes_, labels_ = aug.imread(img_file, boxes, labels)
        
        for i in range(len(boxes_)):
            x1, y1, x2, y2 = boxes_[i]
            cv2.rectangle(img, (x1,y1), (x2,y2), (0,255,0), 3)
            cv2.putText(img, 
                        '{}'.format(labels_[i]), 
                        (x1, y1 - 13), 
                        cv2.FONT_HERSHEY_SIMPLEX, 
                        1e-3 * img.shape[0], 
                        (255,0,0), 1)

        plt.imshow(img)
        plt.show(block=False)
        plt.pause(1)
        plt.close()

def visualize_segmentation_dataset(images_path, segs_path, num_imgs = None, img_size=None, augment=False, n_classes=255):
    import matplotlib.pyplot as plt
    import matplotlib
    from axelerate.networks.segnet.data_utils.data_loader import get_pairs_from_paths, DATA_LOADER_SEED, class_colors, DataLoaderError

    try:
        matplotlib.use('TkAgg')
    except:
        pass

    def _get_colored_segmentation_image(img, seg, colors, n_classes, img_size, do_augment=False):
        """ Return a colored segmented image """

        img, seg = process_image_segmentation(img, seg, img_size, img_size, img_size, img_size, do_augment)
        seg_img = np.zeros_like(seg)

        for c in range(n_classes):
            seg_img[:, :, 0] += ((seg[:, :, 0] == c) *
                                (colors[c][0])).astype('uint8')
            seg_img[:, :, 1] += ((seg[:, :, 0] == c) *
                                (colors[c][1])).astype('uint8')
            seg_img[:, :, 2] += ((seg[:, :, 0] == c) *
                                (colors[c][2])).astype('uint8')

        return img, seg_img

    try:
        # Get image-segmentation pairs
        img_seg_pairs = get_pairs_from_paths(images_path, segs_path, ignore_non_matching=True)
        # Get the colors for the classes
        colors = class_colors

        print("Please press any key to display the next image")
        for im_fn, seg_fn in img_seg_pairs[:num_imgs]:
            img = cv2.imread(im_fn)[...,::-1]
            seg = cv2.imread(seg_fn)
            print("Found the following classes in the segmentation image:", np.unique(seg))
            img, seg_img = _get_colored_segmentation_image(img, seg, colors, n_classes, img_size, do_augment=augment)
            fig = plt.figure(figsize=(14,7))
            ax1 = fig.add_subplot(1,2,1)
            ax1.imshow(img)
            ax3 = fig.add_subplot(1,2,2)
            ax3.imshow(seg_img)
            plt.show(block=False)
            plt.pause(1)
            plt.close()
    except DataLoaderError as e:
        print("Found error during data loading\n{0}".format(str(e)))
        return False

def visualize_classification_dataset(img_folder, num_imgs = None, img_size=None, augment=None):
    import matplotlib.pyplot as plt
    import matplotlib
    try:
        matplotlib.use('TkAgg')
    except:
        pass
    font = cv2.FONT_HERSHEY_SIMPLEX
    image_files_list = []
    image_search = lambda ext: glob.glob(img_folder + ext, recursive=True)
    for ext in ['/**/*.jpg', '/**/*.jpeg', '/**/*.png']:
        image_files_list.extend(image_search(ext))
    random.shuffle(image_files_list)
    for filename in image_files_list[0:num_imgs]:
        image = cv2.imread(filename)[...,::-1]
        image = process_image_classification(image, img_size, img_size, augment)
        cv2.putText(image, os.path.dirname(filename).split('/')[-1], (10,30), font, image.shape[1]/700 , (255, 0, 0), 2, True)
        plt.figure()
        plt.imshow(image)
        plt.show(block=False)
        plt.pause(1)
        plt.close()
        print(filename)


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--type", type=str)
    parser.add_argument("--images", type=str)
    parser.add_argument("--annotations", type=str)
    parser.add_argument("--num_imgs", type=int)
    parser.add_argument("--img_size", type=int)
    parser.add_argument("--aug", type=bool)
    args = parser.parse_args()
    if args.type == 'detection':
        visualize_detection_dataset(args.images, args.annotations, args.num_imgs, args.img_size, args.aug)
    if args.type == 'segmentation':
        visualize_segmentation_dataset(args.images, args.annotations, args.num_imgs, args.img_size, args.aug)
    if args.type == 'classification':
        visualize_classification_dataset(args.images, args.num_imgs, args.img_size, args.aug)
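
# --- CLI sketch (added for illustration; not part of the original file) ---
# With the package importable, the visualizers above can be run as a module;
# the dataset paths below are hypothetical:
#
#   python -m axelerate.networks.common_utils.augment --type detection \
#       --images dataset/imgs --annotations dataset/anns \
#       --num_imgs 10 --img_size 224 --aug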


================================================
FILE: axelerate/networks/common_utils/callbacks.py
================================================
import numpy as np
from tensorflow import keras
from tensorflow.keras import backend as K

def cosine_decay_with_warmup(global_step,
                             learning_rate_base,
                             total_steps,
                             warmup_learning_rate=0.0,
                             warmup_steps=0,
                             hold_base_rate_steps=0):
    """Cosine decay schedule with warm up period.
    Cosine annealing learning rate as described in:
      Loshchilov and Hutter, SGDR: Stochastic Gradient Descent with Warm Restarts.
      ICLR 2017. https://arxiv.org/abs/1608.03983
    In this schedule, the learning rate grows linearly from warmup_learning_rate
    to learning_rate_base for warmup_steps, then transitions to a cosine decay
    schedule.
    Arguments:
        global_step {int} -- global step.
        learning_rate_base {float} -- base learning rate.
        total_steps {int} -- total number of training steps.
    Keyword Arguments:
        warmup_learning_rate {float} -- initial learning rate for warm up. (default: {0.0})
        warmup_steps {int} -- number of warmup steps. (default: {0})
        hold_base_rate_steps {int} -- Optional number of steps to hold base learning rate
                                    before decaying. (default: {0})
    Returns:
      a float representing learning rate.
    Raises:
      ValueError: if warmup_learning_rate is larger than learning_rate_base,
        or if warmup_steps is larger than total_steps.
    """

    if total_steps < warmup_steps:
        raise ValueError('total_steps must be larger or equal to '
                         'warmup_steps.')
    learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
        np.pi *
        (global_step - warmup_steps - hold_base_rate_steps
         ) / float(total_steps - warmup_steps - hold_base_rate_steps)))
    if hold_base_rate_steps > 0:
        learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
                                 learning_rate, learning_rate_base)
    if warmup_steps > 0:
        if learning_rate_base < warmup_learning_rate:
            raise ValueError('learning_rate_base must be larger or equal to '
                             'warmup_learning_rate.')
        slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
        warmup_rate = slope * global_step + warmup_learning_rate
        learning_rate = np.where(global_step < warmup_steps, warmup_rate,
                                 learning_rate)
    return np.where(global_step > total_steps, 0.0, learning_rate)
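
# --- Worked example (added for illustration; not part of the original file) ---
# With learning_rate_base=1e-3, total_steps=1000, warmup_steps=100:
#   step 0    -> 0.0     (warmup starts at warmup_learning_rate)
#   step 50   -> 5e-4    (linear ramp: halfway through warmup)
#   step 100  -> 1e-3    (warmup complete, cosine decay begins)
#   step 1000 -> 0.0     (the cos(pi) term drives the rate to zero)
#
#   lr = cosine_decay_with_warmup(global_step=50, learning_rate_base=1e-3,
#                                 total_steps=1000, warmup_steps=100)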


class WarmUpCosineDecayScheduler(keras.callbacks.Callback):
    """Cosine decay with warmup learning rate scheduler
    """

    def __init__(self,
                 learning_rate_base,
                 total_steps,
                 global_step_init=0,
                 warmup_learning_rate=0.0,
                 warmup_steps=0,
                 hold_base_rate_steps=0,
                 verbose=0):
        """Constructor for cosine decay with warmup learning rate scheduler.
    Arguments:
        learning_rate_base {float} -- base learning rate.
        total_steps {int} -- total number of training steps.
    Keyword Arguments:
        global_step_init {int} -- initial global step, e.g. from previous checkpoint.
        warmup_learning_rate {float} -- initial learning rate for warm up. (default: {0.0})
        warmup_steps {int} -- number of warmup steps. (default: {0})
        hold_base_rate_steps {int} -- Optional number of steps to hold base learning rate
                                    before decaying. (default: {0})
        verbose {int} -- 0: quiet, 1: update messages. (default: {0})
        """

        super(WarmUpCosineDecayScheduler, self).__init__()
        self.learning_rate_base = learning_rate_base
        self.total_steps = total_steps
        self.global_step = global_step_init
        self.warmup_learning_rate = warmup_learning_rate
        self.warmup_steps = warmup_steps
        self.hold_base_rate_steps = hold_base_rate_steps
        self.verbose = verbose
        self.learning_rates = []
        self.current_lr = 0.0
        
    def on_epoch_end(self, epoch, logs={}):
        if self.verbose == 1:
            print('Epoch %05d: Learning rate is %s.\n' % (epoch, self.current_lr))        

    def on_batch_end(self, batch, logs=None):
        self.global_step = self.global_step + 1
        lr = K.get_value(self.model.optimizer.lr)
        self.learning_rates.append(lr)

    def on_batch_begin(self, batch, logs=None):
        self.current_lr = cosine_decay_with_warmup(global_step=self.global_step,
                                      learning_rate_base=self.learning_rate_base,
                                      total_steps=self.total_steps,
                                      warmup_learning_rate=self.warmup_learning_rate,
                                      warmup_steps=self.warmup_steps,
                                      hold_base_rate_steps=self.hold_base_rate_steps)
        K.set_value(self.model.optimizer.lr, self.current_lr)
        if self.verbose == 2:
            print('\nBatch %05d: setting learning rate to %s.' % (self.global_step + 1, self.current_lr))
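

# --- Usage sketch (added for illustration; not part of the original file) ---
# Wiring the scheduler into model.fit() on a throwaway model. total_steps
# should equal epochs * steps_per_epoch so the schedule spans all of training.
if __name__ == '__main__':
    model = keras.Sequential([keras.layers.Dense(2, input_shape=(4,))])
    model.compile(optimizer=keras.optimizers.Adam(), loss='mse')
    scheduler = WarmUpCosineDecayScheduler(learning_rate_base=1e-3,
                                           total_steps=100,  # 10 epochs * 10 steps
                                           warmup_steps=10,
                                           verbose=1)
    x = np.random.rand(80, 4)
    y = np.random.rand(80, 2)
    model.fit(x, y, batch_size=8, epochs=10, callbacks=[scheduler])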



================================================
FILE: axelerate/networks/common_utils/convert.py
================================================
import tensorflow as tf
import tensorflow.keras.backend as k
import subprocess
import os
import cv2
import argparse
import tarfile
import glob
import shutil
import numpy as np

k210_converter_path=os.path.join(os.path.dirname(__file__),"ncc","ncc")
k210_converter_download_path=os.path.join(os.path.dirname(os.path.abspath(__file__)),'ncc_linux_x86_64.tar.xz')
nncase_download_url="https://github.com/kendryte/nncase/releases/download/v0.2.0-beta4/ncc_linux_x86_64.tar.xz"
cwd = os.path.dirname(os.path.realpath(__file__))

def run_command(cmd, cwd=None):
    with subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash', universal_newlines=True, cwd=cwd) as p:
        while True:
            line = p.stdout.readline()
            if not line:
                break
            print(line)    
        exit_code = p.wait()  # wait() rather than poll(): the process may outlive its stdout
    return exit_code

class Converter(object):
    def __init__(self, converter_type, backend=None, dataset_path=None):
        if 'tflite' in converter_type:
            print('Tflite Converter ready')

        if 'k210' in converter_type:
            if os.path.exists(k210_converter_path):
                print('K210 Converter ready')
            else:
                print('Downloading K210 Converter')
                _path = tf.keras.utils.get_file(k210_converter_download_path, nncase_download_url)     
                print(_path)    
                tar_file = tarfile.open(k210_converter_download_path)
                tar_file.extractall(os.path.join(os.path.dirname(__file__),"ncc"))
                tar_file.close()
                os.chmod(k210_converter_path, 0o775)

        if 'edgetpu' in converter_type:
            rc, out = subprocess.getstatusoutput('dpkg -l edgetpu-compiler')
            if rc == 0:
                print('Edge TPU Converter ready')
            else:
                print('Installing Edge TPU Converter')
                cmd = "bash install_edge_tpu_compiler.sh"
                result = run_command(cmd, cwd)
                print(result)
                
        if 'openvino' in converter_type:
            rc = os.path.isdir('/opt/intel/openvino')
            if rc:
                print('OpenVINO Converter ready')
            else:
                print('Installing OpenVINO Converter')
                cmd = "bash install_openvino.sh"
                result = run_command(cmd, cwd)
                print(result)       
                
        if 'onnx' in converter_type:
            try:
                import tf2onnx
            except:
                cmd = "pip install tf2onnx"
                result = run_command(cmd, cwd)
                print(result)              
                
        self._converter_type = converter_type
        self._backend = backend
        self._dataset_path=dataset_path

    def edgetpu_dataset_gen(self):
        num_imgs = 300
        image_files_list = []
        from axelerate.networks.common_utils.feature import create_feature_extractor
        backend = create_feature_extractor(self._backend, [self._img_size[0], self._img_size[1]])
        image_search = lambda ext: glob.glob(self._dataset_path + ext, recursive=True)
        for ext in ['/**/*.jpg', '/**/*.jpeg', '/**/*.png']:
            image_files_list.extend(image_search(ext))

        for filename in image_files_list[:num_imgs]:
            image = cv2.imread(filename)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            image = cv2.resize(image, (self._img_size[0], self._img_size[1]))
            data = np.array(backend.normalize(image), dtype=np.float32)
            data = np.expand_dims(data, 0)
            yield [data]

    def k210_dataset_gen(self):
        num_imgs = 300
        image_files_list = []
        from axelerate.networks.common_utils.feature import create_feature_extractor
        backend = create_feature_extractor(self._backend, [self._img_size[0], self._img_size[1]])
        image_search = lambda ext: glob.glob(self._dataset_path + ext, recursive=True)
        for ext in ['/**/*.jpg', '/**/*.jpeg', '/**/*.png']:
            image_files_list.extend(image_search(ext))
        temp_folder = os.path.join(os.path.dirname(__file__),'tmp')
        os.makedirs(temp_folder, exist_ok=True)  # tolerate leftovers from an interrupted run
        for filename in image_files_list[:num_imgs]:
            image = cv2.imread(filename)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            image = cv2.resize(image, (self._img_size[0], self._img_size[1]))
            data = np.array(backend.normalize(image), dtype=np.float32)
            data = np.expand_dims(data, 0)
            bin_filename = os.path.basename(filename).split('.')[0]+'.bin'
            with open(os.path.join(temp_folder, bin_filename), "wb") as f: 
                data = np.transpose(data, [0, 3, 1, 2])
                data.tofile(f)
        return temp_folder

    def convert_edgetpu(self, model_path):
        output_path = os.path.dirname(model_path)
        print(output_path)
        cmd = "edgetpu_compiler --out_dir {} {}".format(output_path, model_path)
        print(cmd)
        result = run_command(cmd)
        print(result)

    def convert_k210(self, model_path):
        folder_name = self.k210_dataset_gen()
        output_name = os.path.basename(model_path).split(".")[0]+".kmodel"
        output_path = os.path.join(os.path.dirname(model_path),output_name)
        print(output_path)
        cmd = '{} compile "{}" "{}" -i tflite --weights-quantize-threshold 1000 --dataset-format raw --dataset "{}"'.format(k210_converter_path, model_path, output_path, folder_name)
        print(cmd)
        result = run_command(cmd)
        shutil.rmtree(folder_name, ignore_errors=True)
        print(result)

    def convert_ir(self, model_path, model_layers):
        input_model = os.path.join(model_path.split(".")[0], "saved_model.pb")
        output_dir = os.path.dirname(model_path)
        output_layer = model_layers[-2].name+'/BiasAdd'
        cmd = 'source /opt/intel/openvino/bin/setupvars.sh && python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model "{}" --output {} --batch 1 --reverse_input_channels --data_type FP16 --mean_values [127.5,127.5,127.5] --scale_values [127.5] --output_dir "{}"'.format(input_model, output_layer, output_dir)
        print(cmd)
        result = run_command(cmd)
        print(result)

    def convert_oak(self, model_path):
        output_name = model_path.split(".")[0]+".blob"
        cmd = 'source /opt/intel/openvino/bin/setupvars.sh && /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/myriad_compile -m "{}" -o "{}" -ip U8 -VPU_MYRIAD_PLATFORM VPU_MYRIAD_2480 -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4'.format(model_path.split(".")[0] + '.xml', output_name)
        print(cmd)
        result = run_command(cmd)
        print(result)

    def convert_onnx(self, model):
        import tf2onnx  # imported here: the import in __init__ is local to that method's scope
        spec = (tf.TensorSpec((None, *self._img_size, 3), tf.float32, name="input"),)
        output_path = self.model_path.split(".")[0] + '.onnx'
        model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model, input_signature=spec, output_path=output_path)

    def convert_tflite(self, model, model_layers, target=None):
        model_type = model.name
        model.summary()

        if target=='k210': 
            if model_type == 'yolo' or model_type == 'segnet':
                print("Converting to tflite without Reshape for K210 YOLO")
                if len(model.outputs) == 2:
                    output1 = model.get_layer(name="detection_layer_1").output
                    output2 = model.get_layer(name="detection_layer_2").output
                    model = tf.keras.Model(inputs=model.input, outputs=[output1, output2])
                else:
                    model = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)
                    
            model.input.set_shape(1 + model.input.shape[1:])
            converter = tf.lite.TFLiteConverter.from_keras_model(model)

        elif target == 'edgetpu':
            converter = tf.lite.TFLiteConverter.from_keras_model(model)
            converter.optimizations = [tf.lite.Optimize.DEFAULT]
            converter.representative_dataset = self.edgetpu_dataset_gen
            converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
            converter.inference_input_type = tf.uint8
            converter.inference_output_type = tf.uint8

        elif target == 'tflite_dynamic':
            converter = tf.lite.TFLiteConverter.from_keras_model(model)
            converter.optimizations = [tf.lite.Optimize.DEFAULT]
            
        elif target == 'tflite_fullint':
            converter = tf.lite.TFLiteConverter.from_keras_model(model)
            converter.optimizations = [tf.lite.Optimize.DEFAULT]            
            converter.representative_dataset = self.edgetpu_dataset_gen
            
        else:
            converter = tf.lite.TFLiteConverter.from_keras_model(model)

        tflite_model = converter.convert()
        with open(self.model_path.split(".")[0] + '.tflite', "wb") as f:
            f.write(tflite_model)

    def convert_model(self, model_path):
        k.clear_session()
        k.set_learning_phase(0)
        model = tf.keras.models.load_model(model_path, compile=False)
        model_layers = model.layers
        self._img_size = model.input_shape[1:3]
        self.model_path = os.path.abspath(model_path)

        if 'k210' in self._converter_type:
            self.convert_tflite(model, model_layers, 'k210')
            self.convert_k210(self.model_path.split(".")[0] + '.tflite')

        if 'edgetpu' in self._converter_type:
            self.convert_tflite(model, model_layers, 'edgetpu')
            self.convert_edgetpu(model_path.split(".")[0] + '.tflite')

        if 'onnx' in self._converter_type:
            self.convert_onnx(model)
            
        if 'openvino' in self._converter_type:
            model.save(model_path.split(".")[0])
            self.convert_ir(model_path, model_layers)
            self.convert_oak(model_path)

        if 'tflite' in self._converter_type:
            self.convert_tflite(model, model_layers, self._converter_type)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Keras model conversion to .kmodel, .tflite, or .onnx")
    parser.add_argument("--model_path", "-m", type=str, required=True,
                        help="path to keras model")
    parser.add_argument("--converter_type", type=str, default='k210',
                        help="batch size")
    parser.add_argument("--dataset_path", type=str, required=False,
                        help="path to calibration dataset")
    parser.add_argument("--backend", type=str, default='MobileNet7_5',
                    help="network feature extractor, e.g. Mobilenet/YOLO/NASNet/etc")                    
    args = parser.parse_args()
    converter = Converter(args.converter_type, args.backend, args.dataset_path)
    converter.convert_model(args.model_path)
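
# --- Usage sketch (added for illustration; not part of the original file) ---
# Converter can also be used programmatically; the model path is hypothetical.
# 'tflite_dynamic' needs no calibration data, while 'k210'/'edgetpu' also
# expect backend and dataset_path for representative-dataset quantization:
#
#   converter = Converter('tflite_dynamic')
#   converter.convert_model('projects/classifier/model.h5')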


================================================
FILE: axelerate/networks/common_utils/feature.py
================================================
import tensorflow
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Reshape, Activation, Conv2D, Input, MaxPooling2D, BatchNormalization, Flatten, Dense, Lambda, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Concatenate
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications import ResNet50

from .mobilenet_sipeed.mobilenet import MobileNet

def create_feature_extractor(architecture, input_size, weights = None):
    """
    # Args
        architecture : str
        input_size : list of two ints, (height, width)

    # Returns
        feature_extractor : BaseFeatureExtractor instance
    """
    if architecture == 'DenseNet121':
        feature_extractor = DenseNet121Feature(input_size, weights)
    elif architecture == 'SqueezeNet':
        feature_extractor = SqueezeNetFeature(input_size, weights)
    elif architecture == 'MobileNet1_0':
        feature_extractor = MobileNetFeature(input_size, weights, alpha=1)
    elif architecture == 'MobileNet7_5':
        feature_extractor = MobileNetFeature(input_size, weights, alpha=0.75)
    elif architecture == 'MobileNet5_0':
        feature_extractor = MobileNetFeature(input_size, weights, alpha=0.5)
    elif architecture == 'MobileNet2_5':
        feature_extractor = MobileNetFeature(input_size, weights, alpha=0.25)
    elif architecture == 'Full Yolo':
        feature_extractor = FullYoloFeature(input_size, weights)
    elif architecture == 'Tiny Yolo':
        feature_extractor = TinyYoloFeature(input_size, weights)
    elif architecture == 'NASNetMobile':
        feature_extractor = NASNetMobileFeature(input_size, weights)
    elif architecture == 'ResNet50':
        feature_extractor = ResNet50Feature(input_size, weights)
    else:
        raise Exception('Architecture not supported! Name should be Full Yolo, Tiny Yolo, MobileNet1_0, MobileNet7_5, MobileNet5_0, MobileNet2_5, SqueezeNet, NASNetMobile, ResNet50 or DenseNet121')
    return feature_extractor
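
# --- Usage sketch (added for illustration; not part of the original file) ---
# All backends share the BaseFeatureExtractor interface defined below;
# `normalized_batch` is a placeholder for a batch of preprocessed images:
#
#   backend = create_feature_extractor('MobileNet7_5', [224, 224])
#   print(backend.get_input_size())   # 224
#   print(backend.get_output_size())  # spatial dims of the last feature map,
#                                     # (7, 7) for a 224x224 MobileNet
#   features = backend.extract(normalized_batch)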



class BaseFeatureExtractor(object):
    """docstring for ClassName"""

    # to be defined in each subclass
    def __init__(self, input_size):
        raise NotImplementedError("error message")

    # to be defined in each subclass
    def normalize(self, image):
        raise NotImplementedError("error message")       

    def get_input_size(self):
        input_shape = self.feature_extractor.get_input_shape_at(0)
        assert input_shape[1] == input_shape[2]
        return input_shape[1]

    def get_output_size(self, layer = None):
        if not layer:
            output_shape = self.feature_extractor.outputs[0].shape
        else:
            output_shape = self.feature_extractor.get_layer(layer).output.shape
        return output_shape[1:3]

    def get_output_tensor(self, layer):
        return self.feature_extractor.get_layer(layer).output

    def extract(self, input_image):
        return self.feature_extractor(input_image)

class FullYoloFeature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights=None):
        input_image = Input(shape=(input_size[0], input_size[1], 3))

        # implements darknet's "reorg" layer via space_to_depth (thanks to github.com/allanzelener/YAD2K)
        def space_to_depth_x2(x):
            return tensorflow.nn.space_to_depth(x, block_size=2)

        # Layer 1
        x = Conv2D(32, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)
        x = BatchNormalization(name='norm_1')(x)
        x = LeakyReLU(alpha=0.1)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 2
        x = Conv2D(64, (3,3), strides=(1,1), padding='same', name='conv_2', use_bias=False)(x)
        x = BatchNormalization(name='norm_2')(x)
        x = LeakyReLU(alpha=0.1)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 3
        x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_3', use_bias=False)(x)
        x = BatchNormalization(name='norm_3')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 4
        x = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_4', use_bias=False)(x)
        x = BatchNormalization(name='norm_4')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 5
        x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_5', use_bias=False)(x)
        x = BatchNormalization(name='norm_5')(x)
        x = LeakyReLU(alpha=0.1)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 6
        x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_6', use_bias=False)(x)
        x = BatchNormalization(name='norm_6')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 7
        x = Conv2D(128, (1,1), strides=(1,1), padding='same', name='conv_7', use_bias=False)(x)
        x = BatchNormalization(name='norm_7')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 8
        x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_8', use_bias=False)(x)
        x = BatchNormalization(name='norm_8')(x)
        x = LeakyReLU(alpha=0.1)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 9
        x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_9', use_bias=False)(x)
        x = BatchNormalization(name='norm_9')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 10
        x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_10', use_bias=False)(x)
        x = BatchNormalization(name='norm_10')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 11
        x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_11', use_bias=False)(x)
        x = BatchNormalization(name='norm_11')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 12
        x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_12', use_bias=False)(x)
        x = BatchNormalization(name='norm_12')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 13
        x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_13', use_bias=False)(x)
        x = BatchNormalization(name='norm_13')(x)
        x = LeakyReLU(alpha=0.1)(x)

        skip_connection = x

        x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 14
        x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_14', use_bias=False)(x)
        x = BatchNormalization(name='norm_14')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 15
        x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_15', use_bias=False)(x)
        x = BatchNormalization(name='norm_15')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 16
        x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_16', use_bias=False)(x)
        x = BatchNormalization(name='norm_16')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 17
        x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_17', use_bias=False)(x)
        x = BatchNormalization(name='norm_17')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 18
        x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_18', use_bias=False)(x)
        x = BatchNormalization(name='norm_18')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 19
        x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_19', use_bias=False)(x)
        x = BatchNormalization(name='norm_19')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 20
        x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_20', use_bias=False)(x)
        x = BatchNormalization(name='norm_20')(x)
        x = LeakyReLU(alpha=0.1)(x)

        # Layer 21
        skip_connection = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_21', use_bias=False)(skip_connection)
        skip_connection = BatchNormalization(name='norm_21')(skip_connection)
        skip_connection = LeakyReLU(alpha=0.1)(skip_connection)
        skip_connection = Lambda(space_to_depth_x2)(skip_connection)

        x = Concatenate()([skip_connection, x])

        # Layer 22
        x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_22', use_bias=False)(x)
        x = BatchNormalization(name='norm_22')(x)
        x = LeakyReLU(alpha=0.1)(x)

        self.feature_extractor = Model(input_image, x)

        if weights == 'imagenet':
            print('ImageNet weights for the YOLO backend are not available yet, defaulting to random weights')
        elif weights is None:
            pass
        else:
            print('Loaded backend weights: ' + weights)
            self.feature_extractor.load_weights(weights)

    def normalize(self, image):
        return image / 255.

class TinyYoloFeature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights):
        input_image = Input(shape=(input_size[0], input_size[1], 3))

        # Layer 1
        x = Conv2D(16, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)
        x = BatchNormalization(name='norm_1')(x)
        x = LeakyReLU(alpha=0.1)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 2 - 5
        for i in range(0,4):
            x = Conv2D(24*(2**i), (3,3), strides=(1,1), padding='same', name='conv_' + str(i+2), use_bias=False)(x)
            x = BatchNormalization(name='norm_' + str(i+2))(x)
            x = LeakyReLU(alpha=0.1)(x)
            x = MaxPooling2D(pool_size=(2, 2))(x)

        # Layer 6
        x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_6', use_bias=False)(x)
        x = BatchNormalization(name='norm_6')(x)
        x = LeakyReLU(alpha=0.1)(x)
        x = MaxPooling2D(pool_size=(2, 2), strides=(1,1), padding='same')(x)

        # Layer 7 - 8
        for i in range(0,2):
            x = Conv2D(312, (3,3), strides=(1,1), padding='same', name='conv_' + str(i+7), use_bias=False)(x)
            x = BatchNormalization(name='norm_' + str(i+7))(x)
            x = LeakyReLU(alpha=0.1)(x)

        self.feature_extractor = Model(input_image, x)

        if weights == 'imagenet':
            print('ImageNet weights for the YOLO backend are not available yet, defaulting to random weights')
        elif weights is None:
            pass
        else:
            print('Loaded backend weights: ' + weights)
            self.feature_extractor.load_weights(weights)


    def normalize(self, image):
        return image / 255.

class MobileNetFeature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights, alpha):
        input_image = Input(shape=(input_size[0], input_size[1], 3))
        input_shapes_imagenet = [(128, 128,3), (160, 160,3), (192, 192,3), (224, 224,3)]
        input_shape =(128,128,3)
        for item in input_shapes_imagenet:
            if item[0] <= input_size[0]:
                input_shape = item

        if weights == 'imagenet':
            mobilenet = MobileNet(input_shape=input_shape, input_tensor=input_image, alpha = alpha, weights = 'imagenet', include_top=False, backend=tensorflow.keras.backend, layers=tensorflow.keras.layers, models=tensorflow.keras.models, utils=tensorflow.keras.utils)
            print('Successfully loaded imagenet backend weights')
        else:
            mobilenet = MobileNet(input_shape=(input_size[0],input_size[1],3),alpha = alpha,depth_multiplier = 1, dropout = 0.001, weights = None, include_top=False, backend=tensorflow.keras.backend, layers=tensorflow.keras.layers,models=tensorflow.keras.models,utils=tensorflow.keras.utils)
            if weights:
                print('Loaded backend weights: ' + weights)
                mobilenet.load_weights(weights)

        self.feature_extractor = mobilenet

    def normalize(self, image):
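        # Scale pixels from [0, 255] to [-1, 1], matching Keras 'tf'-mode preprocessing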
        image = image / 255.
        image = image - 0.5
        image = image * 2.

        return image		

class SqueezeNetFeature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights):

        # define some auxiliary variables and the fire module
        sq1x1  = "squeeze1x1"
        exp1x1 = "expand1x1"
        exp3x3 = "expand3x3"
        relu   = "relu_"

        def fire_module(x, fire_id, squeeze=16, expand=64):
            s_id = 'fire' + str(fire_id) + '/'
            x = Conv2D(squeeze, (1, 1), padding='valid', name=s_id + sq1x1)(x)
            x = Activation('relu', name=s_id + relu + sq1x1)(x)

            left = Conv2D(expand,  (1, 1), padding='valid', name=s_id + exp1x1)(x)
            left = Activation('relu', name=s_id + relu + exp1x1)(left)

            right = Conv2D(expand,  (3, 3), padding='same',  name=s_id + exp3x3)(x)
            right = Activation('relu', name=s_id + relu + exp3x3)(right)

            x = Concatenate(axis=3, name=s_id + 'concat')([left, right])

            return x
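
        # Each fire module squeezes to `squeeze` 1x1 channels, then expands in
        # parallel 1x1 and 3x3 branches, so its output has 2 * expand channels
        # (e.g. squeeze=16, expand=64 -> 128 output channels).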

        # define the model of SqueezeNet
        input_image = Input(shape=(input_size[0], input_size[1], 3))
        x = ZeroPadding2D(padding=((1, 1), (1, 1)), name='pad')(input_image)
        x = Conv2D(64, (3, 3), strides=(2, 2), padding='valid', name='conv1')(x)
        x = Activation('relu', name='relu_conv1')(x)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)

        x = fire_module(x, fire_id=2, squeeze=16, expand=64)
        x = fire_module(x, fire_id=3, squeeze=16, expand=64)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool3')(x)

        x = fire_module(x, fire_id=4, squeeze=32, expand=128)
        x = fire_module(x, fire_id=5, squeeze=32, expand=128)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool5')(x)

        x = fire_module(x, fire_id=6, squeeze=48, expand=192)
        x = fire_module(x, fire_id=7, squeeze=48, expand=192)
        x = fire_module(x, fire_id=8, squeeze=64, expand=256)
        x = fire_module(x, fire_id=9, squeeze=64, expand=256)

        self.feature_extractor = Model(input_image, x)  
        
        if weights == 'imagenet':
            print('ImageNet weights for the SqueezeNet backend are not available yet, defaulting to random weights')
        elif weights is None:
            pass
        else:
            print('Loaded backend weights: ' + weights)
            self.feature_extractor.load_weights(weights)


    def normalize(self, image):
        image = image[..., ::-1]
        image = image.astype('float')

        image[..., 0] -= 103.939
        image[..., 1] -= 116.779
        image[..., 2] -= 123.68

        return image    

class DenseNet121Feature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights):
        input_image = Input(shape=(input_size[0], input_size[1], 3))

        if weights == 'imagenet':
            densenet = DenseNet121(input_tensor=input_image, include_top=False, weights='imagenet', pooling=None)
            print('Successfully loaded imagenet backend weights')
        else:
            densenet = DenseNet121(input_tensor=input_image, include_top=False, weights=None, pooling=None)
            if weights:
                densenet.load_weights(weights)
                print('Loaded backend weights: ' + weights)

        self.feature_extractor = densenet

    def normalize(self, image):
        from tensorflow.keras.applications.densenet import preprocess_input
        return preprocess_input(image)

class NASNetMobileFeature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights):
        input_image = Input(shape=(input_size[0], input_size[1], 3))

        if weights == 'imagenet':
            nasnetmobile = NASNetMobile(input_tensor=input_image, include_top=False, weights='imagenet', pooling=None)
            print('Successfully loaded imagenet backend weights')
        else:
            nasnetmobile = NASNetMobile(input_tensor=input_image, include_top=False, weights=None, pooling=None)
            if weights:
                nasnetmobile.load_weights(weights)
                print('Loaded backend weights: ' + weights)
        self.feature_extractor = nasnetmobile

    def normalize(self, image):
        from tensorflow.keras.applications.nasnet import preprocess_input
        return preprocess_input(image)

class ResNet50Feature(BaseFeatureExtractor):
    """docstring for ClassName"""
    def __init__(self, input_size, weights):
        input_image = Input(shape=(input_size[0], input_size[1], 3))

        if weights == 'imagenet':
            resnet50 = ResNet50(input_tensor=input_image, weights='imagenet', include_top=False, pooling=None)
            print('Successfully loaded imagenet backend weights')
        else:
            # weights=None is required here: ResNet50 defaults to 'imagenet',
            # which would silently load pretrained weights instead of random ones
            resnet50 = ResNet50(input_tensor=input_image, include_top=False, weights=None, pooling=None)
            if weights:
                resnet50.load_weights(weights)
                print('Loaded backend weights: ' + weights)

        self.feature_extractor = resnet50

    def normalize(self, image):
        image = image[..., ::-1]
        image = image.astype('float')

        image[..., 0] -= 103.939
        image[..., 1] -= 116.779
        image[..., 2] -= 123.68

        return image 


================================================
FILE: axelerate/networks/common_utils/fit.py
================================================
import shutil
import os
import time
import tensorflow as tf
import numpy as np
import warnings

from axelerate.networks.common_utils.callbacks import WarmUpCosineDecayScheduler
from axelerate.networks.yolo.backend.utils.custom import MergeMetrics
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from datetime import datetime

def train(model,
         loss_func,
         train_batch_gen,
         valid_batch_gen,
         learning_rate = 1e-4,
         nb_epoch = 300,
         project_folder = 'project',
         first_trainable_layer = None,
         metric=None,
         metric_name="val_loss"):
    """A function that performs training on a general keras model.

    # Args
        model : keras.models.Model instance
        loss_func : function
            refer to https://keras.io/losses/

        train_batch_gen : keras.utils.Sequence instance
        valid_batch_gen : keras.utils.Sequence instance
        learning_rate : float
        saved_weights_name : str
    """

    # Create project directory
    train_start = time.time()
    train_date = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
    path = os.path.join(project_folder, train_date)
    basename = model.name + "_best_"+ metric_name
    print('Current training session folder is {}'.format(path))
    os.makedirs(path)
    save_weights_name = os.path.join(path, basename + '.h5')
    save_weights_name_ctrlc = os.path.join(path, basename + '_ctrlc.h5')
    print('\n')

    # 1. Freeze layers
    layer_names = [layer.name for layer in model.layers]
    fixed_layers = []
    if first_trainable_layer in layer_names:
        for layer in model.layers:
            if layer.name == first_trainable_layer:
                break
            layer.trainable = False
            fixed_layers.append(layer.name)
    elif not first_trainable_layer:
        pass
    else:
        print('First trainable layer specified in the config file is not in the model. Did you mean one of these?')
        for i, layer in enumerate(model.layers):
            print(i, layer.name)
        raise Exception('First trainable layer specified in the config file is not in the model')

    if fixed_layers:
        print("The following layers are frozen and will not update their weights:")
        print("    ", fixed_layers)

    # 2. Create the optimizer
    optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)

    if not metric:
        metric = metric_name
    else:
        metric = metric[metric_name]

    print(metric)
    # 3. Compile the model with the loss function and metrics
    model.compile(loss=loss_func, optimizer=optimizer, metrics=metric if metric != 'loss' else None)
    model.summary()

    # 4. Create callbacks
    
    tensorboard_callback = tf.keras.callbacks.TensorBoard("logs", histogram_freq=1)
    
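    # Cosine decay with linear warmup: the learning rate ramps up from 0 over
    # the first min(3, nb_epoch - 1) epochs, then follows a cosine decay to 0
    # over the remaining steps (one step = one training batch).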
    warm_up_lr = WarmUpCosineDecayScheduler(learning_rate_base=learning_rate,
                                            total_steps=len(train_batch_gen)*nb_epoch,
                                            warmup_learning_rate=0.0,
                                            warmup_steps=len(train_batch_gen)*min(3, nb_epoch-1),
                                            hold_base_rate_steps=0,
                                            verbose=1)

    if metric_name in ['recall', 'precision']:
        mergedMetric = MergeMetrics(model, metric_name, 1, True, save_weights_name, tensorboard_callback)
        callbacks = [mergedMetric, warm_up_lr, tensorboard_callback]  
    else:  
        early_stop = EarlyStopping(monitor='val_' + metric, 
                                min_delta=0.001, 
                                patience=20, 
                                mode='auto', 
                                verbose=2,
                                restore_best_weights=True)
                       
        checkpoint = ModelCheckpoint(save_weights_name, 
                                 monitor='val_' + metric, 
                                 verbose=2, 
                                 save_best_only=True, 
                                 mode='auto', 
                                 period=1)
                                 
        reduce_lr = ReduceLROnPlateau(monitor='val_' + metric,
                                factor=0.2,
                                patience=10,
                                min_lr=1e-6,
                                mode='auto',
                                verbose=2)   
        callbacks = [early_stop, checkpoint, warm_up_lr, tensorboard_callback] 

    # 5. Train
    try:
        model.fit(train_batch_gen,
                steps_per_epoch  = len(train_batch_gen), 
                epochs           = nb_epoch,
                validation_data  = valid_batch_gen,
                validation_steps = len(valid_batch_gen),
                callbacks        = callbacks,                        
                verbose          = 1,
                workers          = 4,
                max_queue_size   = 10,
                use_multiprocessing = True)
    except KeyboardInterrupt:
        print("Saving model and copying logs")
        model.save(save_weights_name_ctrlc, overwrite=True, include_optimizer=False)
        shutil.copytree("logs", os.path.join(path, "logs"))  
        return model.layers, save_weights_name_ctrlc 
        
    shutil.copytree("logs", os.path.join(path, "logs"))
    _print_time(time.time()-train_start)
    return model.layers, save_weights_name

def _print_time(process_time):
    if process_time < 60:
        print("Training took {:d} seconds".format(int(process_time)))
    else:
        print("Training took {:d} minutes".format(int(process_time / 60)))

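
# A hedged usage sketch (not part of the original file): `yolo_loss` and the
# two keras.utils.Sequence generators are assumed to be created elsewhere in
# the pipeline:
#
#   layers, best_weights = train(model,
#                                loss_func=yolo_loss,
#                                train_batch_gen=train_gen,
#                                valid_batch_gen=valid_gen,
#                                learning_rate=1e-4,
#                                nb_epoch=50,
#                                project_folder='projects/raccoon')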


================================================
FILE: axelerate/networks/common_utils/install_edge_tpu_compiler.sh
================================================
wget https://packages.cloud.google.com/apt/doc/apt-key.gpg 

sudo apt-key add apt-key.gpg &&

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

sudo apt-get update && sudo apt-get install -y edgetpu-compiler &&

rm apt-key.gpg


================================================
FILE: axelerate/networks/common_utils/install_openvino.sh
================================================
sudo apt-get install -y pciutils cpio &&
wget http://registrationcenter-download.intel.com/akdlm/irc_nas/16345/l_openvino_toolkit_p_2020.1.023.tgz &&
tar xf l_openvino_toolkit_p_2020.1.023.tgz &&
cd l_openvino_toolkit_p_2020.1.023 && 
sudo -E ./install_openvino_dependencies.sh && 
sed -i 's/decline/accept/g' silent.cfg && 
sudo -E ./install.sh --silent silent.cfg


================================================
FILE: axelerate/networks/common_utils/mobilenet_sipeed/__init__.py
================================================
"""Enables dynamic setting of underlying Keras module.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

_KERAS_BACKEND = None
_KERAS_LAYERS = None
_KERAS_MODELS = None
_KERAS_UTILS = None


def set_keras_submodules(backend=None,
                         layers=None,
                         models=None,
                         utils=None,
                         engine=None):
    # Deprecated, will be removed in the future.
    global _KERAS_BACKEND
    global _KERAS_LAYERS
    global _KERAS_MODELS
    global _KERAS_UTILS
    _KERAS_BACKEND = backend
    _KERAS_LAYERS = layers
    _KERAS_MODELS = models
    _KERAS_UTILS = utils


def get_keras_submodule(name):
    # Deprecated, will be removed in the future.
    if name not in {'backend', 'layers', 'models', 'utils'}:
        raise ImportError(
            'Can only retrieve one of "backend", '
            '"layers", "models", or "utils". '
            'Requested: %s' % name)
    if _KERAS_BACKEND is None:
        raise ImportError('You need to first `import keras` '
                          'in order to use `keras_applications`. '
                          'For instance, you can do:\n\n'
                          '```\n'
                          'import keras\n'
                          'from keras_applications import vgg16\n'
                          '```\n\n'
                          'Or, preferably, this equivalent formulation:\n\n'
                          '```\n'
                          'from keras import applications\n'
                          '```\n')
    if name == 'backend':
        return _KERAS_BACKEND
    elif name == 'layers':
        return _KERAS_LAYERS
    elif name == 'models':
        return _KERAS_MODELS
    elif name == 'utils':
        return _KERAS_UTILS


def get_submodules_from_kwargs(kwargs):
    backend = kwargs.get('backend', _KERAS_BACKEND)
    layers = kwargs.get('layers', _KERAS_LAYERS)
    models = kwargs.get('models', _KERAS_MODELS)
    utils = kwargs.get('utils', _KERAS_UTILS)
    for key in kwargs.keys():
        if key not in ['backend', 'layers', 'models', 'utils']:
            raise TypeError('Invalid keyword argument: %s' % key)
    return backend, layers, models, utils


def correct_pad(backend, inputs, kernel_size):
    """Returns a tuple for zero-padding for 2D convolution with downsampling.

    # Arguments
        backend: the Keras backend module in use.
        inputs: input tensor.
        kernel_size: An integer or tuple/list of 2 integers.

    # Returns
        A tuple.
    """
    img_dim = 2 if backend.image_data_format() == 'channels_first' else 1
    input_size = backend.int_shape(inputs)[img_dim:(img_dim + 2)]

    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)

    if input_size[0] is None:
        adjust = (1, 1)
    else:
        adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)

    correct = (kernel_size[0] // 2, kernel_size[1] // 2)

    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))
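
# Example: a 224x224 'channels_last' input with kernel_size=3 gives
# adjust=(1, 1) and correct=(1, 1), so correct_pad returns ((0, 1), (0, 1)),
# i.e. asymmetric padding that keeps stride-2 convolutions aligned on even
# inputs; an odd 225x225 input would get symmetric ((1, 1), (1, 1)) padding.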

__version__ = '1.0.7'


from . import mobilenet



================================================
FILE: axelerate/networks/common_utils/mobilenet_sipeed/imagenet_utils.py
================================================
"""Utilities for ImageNet data preprocessing & prediction decoding.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import json
import warnings
import numpy as np

from . import get_submodules_from_kwargs

CLASS_INDEX = None
CLASS_INDEX_PATH = ('https://s3.amazonaws.com/deep-learning-models/'
                    'image-models/imagenet_class_index.json')

# Global tensor of imagenet mean for preprocessing symbolic inputs
_IMAGENET_MEAN = None


def _preprocess_numpy_input(x, data_format, mode, **kwargs):
    """Preprocesses a Numpy array encoding a batch of images.

    # Arguments
        x: Input array, 3D or 4D.
        data_format: Data format of the image array.
        mode: One of "caffe", "tf" or "torch".
            - caffe: will convert the images from RGB to BGR,
                then will zero-center each color channel with
                respect to the ImageNet dataset,
                without scaling.
            - tf: will scale pixels between -1 and 1,
                sample-wise.
            - torch: will scale pixels between 0 and 1 and then
                will normalize each channel with respect to the
                ImageNet dataset.

    # Returns
        Preprocessed Numpy array.
    """
    backend, _, _, _ = get_submodules_from_kwargs(kwargs)
    if not issubclass(x.dtype.type, np.floating):
        x = x.astype(backend.floatx(), copy=False)

    if mode == 'tf':
        x /= 127.5
        x -= 1.
        return x

    if mode == 'torch':
        x /= 255.
        mean = [0.485, 0.456, 0.406]
        std = [0.229, 0.224, 0.225]
    else:
        if data_format == 'channels_first':
            # 'RGB'->'BGR'
            if x.ndim == 3:
                x = x[::-1, ...]
            else:
                x = x[:, ::-1, ...]
        else:
            # 'RGB'->'BGR'
            x = x[..., ::-1]
        mean = [103.939, 116.779, 123.68]
        std = None

    # Zero-center by mean pixel
    if data_format == 'channels_first':
        if x.ndim == 3:
            x[0, :, :] -= mean[0]
            x[1, :, :] -= mean[1]
            x[2, :, :] -= mean[2]
            if std is not None:
                x[0, :, :] /= std[0]
                x[1, :, :] /= std[1]
                x[2, :, :] /= std[2]
        else:
            x[:, 0, :, :] -= mean[0]
            x[:, 1, :, :] -= mean[1]
            x[:, 2, :, :] -= mean[2]
            if std is not None:
                x[:, 0, :, :] /= std[0]
                x[:, 1, :, :] /= std[1]
                x[:, 2, :, :] /= std[2]
    else:
        x[..., 0] -= mean[0]
        x[..., 1] -= mean[1]
        x[..., 2] -= mean[2]
        if std is not None:
            x[..., 0] /= std[0]
            x[..., 1] /= std[1]
            x[..., 2] /= std[2]
    return x


def _preprocess_symbolic_input(x, data_format, mode, **kwargs):
    """Preprocesses a tensor encoding a batch of images.

    # Arguments
        x: Input tensor, 3D or 4D.
        data_format: Data format of the image tensor.
        mode: One of "caffe", "tf" or "torch".
            - caffe: will convert the images from RGB to BGR,
                then will zero-center each color channel with
                respect to the ImageNet dataset,
                without scaling.
            - tf: will scale pixels between -1 and 1,
                sample-wise.
            - torch: will scale pixels between 0 and 1 and then
                will normalize each channel with respect to the
                ImageNet dataset.

    # Returns
        Preprocessed tensor.
    """
    global _IMAGENET_MEAN

    backend, _, _, _ = get_submodules_from_kwargs(kwargs)

    if mode == 'tf':
        x /= 127.5
        x -= 1.
        return x

    if mode == 'torch':
        x /= 255.
        mean = [0.485, 0.456, 0.406]
        std = [0.229, 0.224, 0.225]
    else:
        if data_format == 'channels_first':
            # 'RGB'->'BGR'
            if backend.ndim(x) == 3:
                x = x[::-1, ...]
            else:
                x = x[:, ::-1, ...]
        else:
            # 'RGB'->'BGR'
            x = x[..., ::-1]
        mean = [103.939, 116.779, 123.68]
        std = None

    if _IMAGENET_MEAN is None:
        _IMAGENET_MEAN = backend.constant(-np.array(mean))

    # Zero-center by mean pixel
    if backend.dtype(x) != backend.dtype(_IMAGENET_MEAN):
        x = backend.bias_add(
            x, backend.cast(_IMAGENET_MEAN, backend.dtype(x)),
            data_format=data_format)
    else:
        x = backend.bias_add(x, _IMAGENET_MEAN, data_format)
    if std is not None:
        x /= std
    return x


def preprocess_input(x, data_format=None, mode='caffe', **kwargs):
    """Preprocesses a tensor or Numpy array encoding a batch of images.

    # Arguments
        x: Input Numpy or symbolic tensor, 3D or 4D.
            The preprocessed data is written over the input data
            if the data types are compatible. To avoid this
            behaviour, `numpy.copy(x)` can be used.
        data_format: Data format of the image tensor/array.
        mode: One of "caffe", "tf" or "torch".
            - caffe: will convert the images from RGB to BGR,
                then will zero-center each color channel with
                respect to the ImageNet dataset,
                without scaling.
            - tf: will scale pixels between -1 and 1,
                sample-wise.
            - torch: will scale pixels between 0 and 1 and then
                will normalize each channel with respect to the
                ImageNet dataset.

    # Returns
        Preprocessed tensor or Numpy array.

    # Raises
        ValueError: In case of unknown `data_format` argument.
    """
    backend, _, _, _ = get_submodules_from_kwargs(kwargs)

    if data_format is None:
        data_format = backend.image_data_format()
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('Unknown data_format ' + str(data_format))

    if isinstance(x, np.ndarray):
        return _preprocess_numpy_input(x, data_format=data_format,
                                       mode=mode, **kwargs)
    else:
        return _preprocess_symbolic_input(x, data_format=data_format,
                                          mode=mode, **kwargs)
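
# A hedged usage sketch (not part of the original file), assuming a float RGB
# batch and the tf.keras submodules passed the same way MobileNet is invoked
# elsewhere in this repo:
#
#   import numpy as np
#   import tensorflow as tf
#   batch = np.random.randint(0, 256, (1, 224, 224, 3)).astype('float32')
#   out = preprocess_input(batch, mode='tf', backend=tf.keras.backend)
#   # `out` is now scaled to [-1, 1]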


def decode_predictions(preds, top=5, **kwargs):
    """Decodes the prediction of an ImageNet model.

    # Arguments
        preds: Numpy tensor encoding a batch of predictions.
        top: Integer, how many top-guesses to return.

    # Returns
        A list of lists of top class prediction tuples
        `(class_name, class_description, score)`.
        One list of tuples per sample in batch input.

    # Raises
        ValueError: In case of invalid shape of the `pred` array
            (must be 2D).
    """
    global CLASS_INDEX

    backend, _, _, keras_utils = get_submodules_from_kwargs(kwargs)

    if len(preds.shape) != 2 or preds.shape[1] != 1000:
        raise ValueError('`decode_predictions` expects '
                         'a batch of predictions '
                         '(i.e. a 2D array of shape (samples, 1000)). '
                         'Found array with shape: ' + str(preds.shape))
    if CLASS_INDEX is None:
        fpath = keras_utils.get_file(
            'imagenet_class_index.json',
            CLASS_INDEX_PATH,
            cache_subdir='models',
            file_hash='c2c37ea517e94d9795004a39431a14cb')
        with open(fpath) as f:
            CLASS_INDEX = json.load(f)
    results = []
    for pred in preds:
        top_indices = pred.argsort()[-top:][::-1]
        result = [tuple(CLASS_INDEX[str(i)]) + (pred[i],) for i in top_indices]
        result.sort(key=lambda x: x[2], reverse=True)
        results.append(result)
    return results


def _obtain_input_shape(input_shape,
                        default_size,
                        min_size,
                        data_format,
                        require_flatten,
                        weights=None):
    """Internal utility to compute/validate a model's input shape.

    # Arguments
        input_shape: Either None (will return the default network input shape),
            or a user-provided shape to be validated.
        default_size: Default input width/height for the model.
        min_size: Minimum input width/height accepted by the model.
        data_format: Image data format to use.
        require_flatten: Whether the model is expected to
            be linked to a classifier via a Flatten layer.
        weights: One of `None` (random initialization)
            or 'imagenet' (pre-training on ImageNet).
            If weights='imagenet' input channels must be equal to 3.

    # Returns
        An integer shape tuple (may include None entries).

    # Raises
        ValueError: In case of invalid argument values.
    """
    if weights != 'imagenet' and input_shape and len(input_shape) == 3:
        if data_format == 'channels_first':
            if input_shape[0] not in {1, 3}:
                warnings.warn(
                    'This model usually expects 1 or 3 input channels. '
                    'However, it was passed an input_shape with ' +
                    str(input_shape[0]) + ' input channels.')
            default_shape = (input_shape[0], default_size, default_size)
        else:
            if input_shape[-1] not in {1, 3}:
                warnings.warn(
                    'This model usually expects 1 or 3 input channels. '
                    'However, it was passed an input_shape with ' +
                    str(input_shape[-1]) + ' input channels.')
            default_shape = (default_size, default_size, input_shape[-1])
    else:
        if data_format == 'channels_first':
            default_shape = (3, default_size, default_size)
        else:
            default_shape = (default_size, default_size, 3)
    if weights == 'imagenet' and require_flatten:
        if input_shape is not None:
            if input_shape != default_shape:
                raise ValueError('When setting `include_top=True` '
                                 'and loading `imagenet` weights, '
                                 '`input_shape` should be ' +
                                 str(default_shape) + '.')
        return default_shape
    if input_shape:
        if data_format == 'channels_first':
            if input_shape is not None:
                if len(input_shape) != 3:
                    raise ValueError(
                        '`input_shape` must be a tuple of three integers.')
                if input_shape[0] != 3 and weights == 'imagenet':
                    raise ValueError('The input must have 3 channels; got '
                                     '`input_shape=' + str(input_shape) + '`')
                if ((input_shape[1] is not None and input_shape[1] < min_size) or
                   (input_shape[2] is not None and input_shape[2] < min_size)):
                    raise ValueError('Input size must be at least ' +
                                     str(min_size) + 'x' + str(min_size) +
                                     '; got `input_shape=' +
                                     str(input_shape) + '`')
        else:
            if input_shape is not None:
                if len(input_shape) != 3:
                    raise ValueError(
                        '`input_shape` must be a tuple of three integers.')
                if input_shape[-1] != 3 and weights == 'imagenet':
                    raise ValueError('The input must have 3 channels; got '
                                     '`input_shape=' + str(input_shape) + '`')
                if ((input_shape[0] is not None and input_shape[0] < min_size) or
                   (input_shape[1] is not None and input_shape[1] < min_size)):
                    raise ValueError('Input size must be at least ' +
                                     str(min_size) + 'x' + str(min_size) +
                                     '; got `input_shape=' +
                                     str(input_shape) + '`')
    else:
        if require_flatten:
            input_shape = default_shape
        else:
            if data_format == 'channels_first':
                input_shape = (3, None, None)
            else:
                input_shape = (None, None, 3)
    if require_flatten:
        if None in input_shape:
            raise ValueError('If `include_top` is True, '
                             'you should specify a static `input_shape`. '
                             'Got `input_shape=' + str(input_shape) + '`')
    return input_shape


================================================
FILE: axelerate/networks/common_utils/mobilenet_sipeed/mobilenet.py
================================================
"""MobileNet v1 models for Keras.

MobileNet is a general architecture and can be used for multiple use cases.
Depending on the use case, it can use different input layer size and
different width factors. This allows different width models to reduce
the number of multiply-adds and thereby
reduce inference cost on mobile devices.

MobileNets support any input size greater than 32 x 32, with larger image sizes
offering better performance.
The number of parameters and number of multiply-adds
can be modified by using the `alpha` parameter,
which increases/decreases the number of filters in each layer.
By altering the image size and `alpha` parameter,
all 16 models from the paper can be built, with ImageNet weights provided.

The paper demonstrates the performance of MobileNets using `alpha` values of
1.0 (also called 100 % MobileNet), 0.75, 0.5 and 0.25.
For each of these `alpha` values, weights for 4 different input image sizes
are provided (224, 192, 160, 128).

The following table describes the size and accuracy of the 100% MobileNet
on size 224 x 224:
----------------------------------------------------------------------------
Width Multiplier (alpha) | ImageNet Acc |  Multiply-Adds (M) |  Params (M)
----------------------------------------------------------------------------
|   1.0 MobileNet-224    |    70.6 %     |        529        |     4.2     |
|   0.75 MobileNet-224   |    68.4 %     |        325        |     2.6     |
|   0.50 MobileNet-224   |    63.7 %     |        149        |     1.3     |
|   0.25 MobileNet-224   |    50.6 %     |        41         |     0.5     |
----------------------------------------------------------------------------

The following table describes the performance of
the 100 % MobileNet on various input sizes:
------------------------------------------------------------------------
      Resolution      | ImageNet Acc | Multiply-Adds (M) | Params (M)
------------------------------------------------------------------------
|  1.0 MobileNet-224  |    70.6 %    |        529        |     4.2     |
|  1.0 MobileNet-192  |    69.1 %    |        529        |     4.2     |
|  1.0 MobileNet-160  |    67.2 %    |        529        |     4.2     |
|  1.0 MobileNet-128  |    64.4 %    |        529        |     4.2     |
------------------------------------------------------------------------

The weights for all 16 models are obtained and translated
from TensorFlow checkpoints found at
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md

# Reference

- [MobileNets: Efficient Convolutional Neural Networks for
   Mobile Vision Applications](https://arxiv.org/pdf/1704.04861.pdf)
"""
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division

import os
import warnings

from . import get_submodules_from_kwargs
from . import imagenet_utils
from .imagenet_utils import decode_predictions
from .imagenet_utils import _obtain_input_shape


BASE_WEIGHT_PATH = ('https://github.com/fchollet/deep-learning-models/'
                    'releases/download/v0.6/')

backend = None
layers = None
models = None
keras_utils = None


def preprocess_input(x, **kwargs):
    """Preprocesses a numpy array encoding a batch of images.

    # Arguments
        x: a 4D numpy array consists of RGB values within [0, 255].

    # Returns
        Preprocessed array.
    """
    return imagenet_utils.preprocess_input(x, mode='tf', **kwargs)


def MobileNet(input_shape=None,
              alpha=1.0,
              depth_multiplier=1,
              dropout=1e-3,
              include_top=True,
              weights='imagenet',
              input_tensor=None,
              pooling=None,
              classes=1000,
              **kwargs):
    """Instantiates the MobileNet architecture.

    # Arguments
        input_shape: optional shape tuple, only to be specified
            if `include_top` is False (otherwise the input shape
            has to be `(224, 224, 3)` (with `channels_last` data format)
            or `(3, 224, 224)` (with `channels_first` data format)).
            It should have exactly 3 input channels,
            and width and height should be no smaller than 32.
            E.g. `(200, 200, 3)` would be one valid value.
        alpha: controls the width of the network. This is known as the
            width multiplier in the MobileNet paper.
            - If `alpha` < 1.0, proportionally decreases the number
                of filters in each layer.
            - If `alpha` > 1.0, proportionally increases the number
                of filters in each layer.
            - If `alpha` = 1, default number of filters from the paper
                 are used at each layer.
        depth_multiplier: depth multiplier for depthwise convolution. This
            is called the resolution multiplier in the MobileNet paper.
        dropout: dropout rate
        include_top: whether to include the fully-connected
            layer at the top of the network.
        weights: one of `None` (random initialization),
              'imagenet' (pre-training on ImageNet),
              or the path to the weights file to be loaded.
        input_tensor: optional Keras tensor (i.e. output of
            `layers.Input()`)
            to use as image input for the model.
        pooling: Optional pooling mode for feature extraction
            when `include_top` is `False`.
            - `None` means that the output of the model
                will be the 4D tensor output of the
                last convolutional block.
            - `avg` means that global average pooling
                will be applied to the output of the
                last convolutional block, and thus
                the output of the model will be a
                2D tensor.
            - `max` means that global max pooling will
                be applied.
        classes: optional number of classes to classify images
            into, only to be specified if `include_top` is True, and
            if no `weights` argument is specified.

    # Returns
        A Keras model instance.

    # Raises
        ValueError: in case of invalid argument for `weights`,
            or invalid input shape.
        RuntimeError: If attempting to run this model with a
            backend that does not support separable convolutions.
    """
    global backend, layers, models, keras_utils
    backend, layers, models, keras_utils = get_submodules_from_kwargs(kwargs)

    if not (weights in {'imagenet', None} or os.path.exists(weights)):
        raise ValueError('The `weights` argument should be either '
                         '`None` (random initialization), `imagenet` '
                         '(pre-training on ImageNet), '
                         'or the path to the weights file to be loaded.')

    if weights == 'imagenet' and include_top and classes != 1000:
        raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
                         'as true, `classes` should be 1000')

    # Determine proper input shape and default size.
    if input_shape is None:
        default_size = 224
    else:
        if backend.image_data_format() == 'channels_first':
            rows = input_shape[1]
            cols = input_shape[2]
        else:
            rows = input_shape[0]
            cols = input_shape[1]

        if rows == cols and rows in [128, 160, 192, 224]:
            default_size = rows
        else:
            default_size = 224

    input_shape = _obtain_input_shape(input_shape,
                                      default_size=default_size,
                                      min_size=32,
                                      data_format=backend.image_data_format(),
                                      require_flatten=include_top,
                                      weights=weights)

    if backend.image_data_format() == 'channels_last':
        row_axis, col_axis = (0, 1)
    else:
        row_axis, col_axis = (1, 2)
    rows = input_shape[row_axis]
    cols = input_shape[col_axis]

    if weights == 'imagenet':
        if depth_multiplier != 1:
            raise ValueError('If imagenet weights are being loaded, '
                             'depth multiplier must be 1')

        if alpha not in [0.25, 0.50, 0.75, 1.0]:
            raise ValueError('If imagenet weights are being loaded, '
                             'alpha can be one of '
                             '`0.25`, `0.50`, `0.75` or `1.0` only.')

        if rows != cols or rows not in [128, 160, 192, 224]:
            if rows is None:
                rows = 224
                warnings.warn('MobileNet shape is undefined.'
                              ' Weights for input shape '
                              '(224, 224) will be loaded.')
            else:
                raise ValueError('If imagenet weights are being loaded, '
                                 'input must have a static square shape '
                                 '(one of (128, 128), (160, 160), '
                                 '(192, 192), or (224, 224)). '
                                 'Input shape provided = %s' % (input_shape,))

    if backend.image_data_format() != 'channels_last':
        warnings.warn('The MobileNet family of models is only available '
                      'for the input data format "channels_last" '
                      '(width, height, channels). '
                      'However your settings specify the default '
                      'data format "channels_first" (channels, width, height).'
                      ' You should set `image_data_format="channels_last"` '
                      'in your Keras config located at ~/.keras/keras.json. '
                      'The model being returned right now will expect inputs '
                      'to follow the "channels_last" data format.')
        backend.set_image_data_format('channels_last')
        old_data_format = 'channels_first'
    else:
        old_data_format = None

    if input_tensor is None:
        img_input = layers.Input(shape=input_shape)
    else:
        if not backend.is_keras_tensor(input_tensor):
            img_input = layers.Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor

    x = _conv_block(img_input, 32, alpha, strides=(2, 2))
    x = _depthwise_conv_block(x, 64, alpha, depth_multiplier, block_id=1)

    x = _depthwise_conv_block(x, 128, alpha, depth_multiplier,
                              strides=(2, 2), block_id=2)
    x = _depthwise_conv_block(x, 128, alpha, depth_multiplier, block_id=3)

    x = _depthwise_conv_block(x, 256, alpha, depth_multiplier,
                              strides=(2, 2), block_id=4)
    x = _depthwise_conv_block(x, 256, alpha, depth_multiplier, block_id=5)

    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier,
                              strides=(2, 2), block_id=6)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=7)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=8)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=9)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=10)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=11)

    x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier,
                              strides=(2, 2), block_id=12)
    x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier, block_id=13)

    if include_top:
        if backend.image_data_format() == 'channels_first':
            shape = (int(1024 * alpha), 1, 1)
        else:
            shape = (1, 1, int(1024 * alpha))

        x = layers.GlobalAveragePooling2D()(x)
        x = layers.Reshape(shape, name='reshape_1')(x)
        x = layers.Dropout(dropout, name='dropout')(x)
        x = layers.Conv2D(classes, (1, 1),
                          padding='same',
                          name='conv_preds')(x)
        x = layers.Activation('softmax', name='act_softmax')(x)
        x = layers.Reshape((classes,), name='reshape_2')(x)
    else:
        if pooling == 'avg':
            x = layers.GlobalAveragePooling2D()(x)
        elif pooling == 'max':
            x = layers.GlobalMaxPooling2D()(x)

    # Ensure that the model takes into account
    # any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = keras_utils.get_source_inputs(input_tensor)
    else:
        inputs = img_input

    # Create model.
    model = models.Model(inputs, x, name='mobilenet_%0.2f_%s' % (alpha, rows))

    # Load weights.
    if weights == 'imagenet':
        if backend.image_data_format() == 'channels_first':
            raise ValueError('Weights for "channels_first" format '
                             'are not available.')
        if alpha == 1.0:
            alpha_text = '1_0'
        elif alpha == 0.75:
            alpha_text = '7_5'
        elif alpha == 0.50:
            alpha_text = '5_0'
        else:
            alpha_text = '2_5'

        if include_top:
            model_name = 'mobilenet_%s_%d_tf.h5' % (alpha_text, rows)
            weight_path = BASE_WEIGHT_PATH + model_name
            weights_path = keras_utils.get_file(model_name,
                                                weight_path,
                                                cache_subdir='models')
        else:
            model_name = 'mobilenet_%s_%d_tf_no_top.h5' % (alpha_text, rows)
            weight_path = BASE_WEIGHT_PATH + model_name
            weights_path = keras_utils.get_file(model_name,
                                                weight_path,
                                                cache_subdir='models')
        model.load_weights(weights_path)
    elif weights is not None:
        model.load_weights(weights)

    if old_data_format:
        backend.set_image_data_format(old_data_format)
    return model

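# A hedged usage sketch (not part of the original file), mirroring how this
# fork is called from MobileNetFeature elsewhere in this repo:
#
#   import tensorflow as tf
#   model = MobileNet(input_shape=(224, 224, 3), alpha=0.75, include_top=False,
#                     weights='imagenet',
#                     backend=tf.keras.backend, layers=tf.keras.layers,
#                     models=tf.keras.models, utils=tf.keras.utils)
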

def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)):
    """Adds an initial convolution layer (with batch normalization and relu6).

    # Arguments
        inputs: Input tensor of shape `(rows, cols, 3)`
            (with `channels_last` data format) or
            (3, rows, cols) (with `channels_first` data format).
            It should have exactly 3 input channels,
            and width and height should be no smaller than 32.
            E.g. `(224, 224, 3)` would be one valid value.
        filters: Integer, the dimensionality of the output space
            (i.e. the number of output filters in the convolution).
        alpha: controls the width of the network.
            - If `alpha` < 1.0, proportionally decreases the number
                of filters in each layer.
            - If `alpha` > 1.0, proportionally increases the number
                of filters in each layer.
            - If `alpha` = 1, default number of filters from the paper
                 are used at each layer.
        kernel: An integer or tuple/list of 2 integers, specifying the
            width and height of the 2D convolution window.
            Can be a single integer to specify the same value for
            all spatial dimensions.
        strides: An integer or tuple/list of 2 integers,
            specifying the strides of the convolution
            along the width and height.
            Can be a single integer to specify the same value for
            all spatial dimensions.
            Specifying any stride value != 1 is incompatible with specifying
            any `dilation_rate` value != 1.

    # Input shape
        4D tensor with shape:
        `(samples, channels, rows, cols)` if data_format='channels_first'
        or 4D tensor with shape:
        `(samples, rows, cols, channels)` if data_format='channels_last'.

    # Output shape
        4D tensor with shape:
        `(samples, filters, new_rows, new_cols)`
        if data_format='channels_first'
        or 4D tensor with shape:
        `(samples, new_rows, new_cols, filters)`
        if data_format='channels_last'.
        `rows` and `cols` values might have changed due to stride.

    # Returns
        Output tensor of block.
    """
    channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
    filters = int(filters * alpha)
    x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name='conv1_pad')(inputs)
    x = layers.Conv2D(filters, kernel,
                      padding='valid',
                      use_bias=False,
                      strides=strides,
                      name='conv1')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='conv1_bn')(x)
    return layers.ReLU(6., name='conv1_relu')(x)


def _depthwise_conv_block(inputs, pointwise_conv_filters, alpha,
                          depth_multiplier=1, strides=(1, 1), block_id=1):
    """Adds a depthwise convolution block.

    A depthwise convolution block consists of a depthwise conv,
    batch normalization, relu6, pointwise convolution,
    batch normalization and relu6 activation.

    # Arguments
        inputs: Input tensor of shape `(rows, cols, channels)`
            (with `channels_last` data format) or
            (channels, rows, cols) (with `channels_first` data format).
        pointwise_conv_filters: Integer, the dimensionality of the output space
            (i.e. the number of output filters in the pointwise convolution).
        alpha: controls the width of the network.
            - If `alpha` < 1.0, proportionally decreases the number
                of filters in each layer.
            - If `alpha` > 1.0, proportionally increases the number
                of filters in each layer.
            - If `alpha` = 1, default number of filters from the paper
                 are used at each layer.
        depth_multiplier: The number of depthwise convolution output channels
            for each input channel.
            The total number of depthwise convolution output
            channels will be equal to `filters_in * depth_multiplier`.
        strides: An integer or tuple/list of 2 integers,
            specifying the strides of the convolution
            along the width and height.
            Can be a single integer to specify the same value for
            all spatial dimensions.
            Specifying any stride value != 1 is incompatible with specifying
            any `dilation_rate` value != 1.
        block_id: Integer, a unique identification designating
            the block number.

    # Input shape
        4D tensor with shape:
        `(batch, channels, rows, cols)` if data_format='channels_first'
        or 4D tensor with shape:
        `(batch, rows, cols, channels)` if data_format='channels_last'.

    # Output shape
        4D tensor with shape:
        `(batch, filters, new_rows, new_cols)`
        if data_format='channels_first'
        or 4D tensor with shape:
        `(batch, new_rows, new_cols, filters)`
        if data_format='channels_last'.
        `rows` and `cols` values might have changed due to stride.

    # Returns
        Output tensor of block.
    """
    channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
    pointwise_conv_filters = int(pointwise_conv_filters * alpha)

    if strides == (1, 1):
        x = inputs
    else:
        x = layers.ZeroPadding2D(((1, 1), (1, 1)),
                                 name='conv_pad_%d' % block_id)(inputs)
    x = layers.DepthwiseConv2D((3, 3),
                               padding='same' if strides == (1, 1) else 'valid',
                               depth_multiplier=depth_multiplier,
                               strides=strides,
                               use_bias=False,
                               name='conv_dw_%d' % block_id)(x)
    x = layers.BatchNormalization(
        axis=channel_axis, name='conv_dw_%d_bn' % block_id)(x)
    x = layers.ReLU(6., name='conv_dw_%d_relu' % block_id)(x)

    x = layers.Conv2D(pointwise_conv_filters, (1, 1),
                      padding='same',
                      use_bias=False,
                      strides=(1, 1),
                      name='conv_pw_%d' % block_id)(x)
    x = layers.BatchNormalization(axis=channel_axis,
                                  name='conv_pw_%d_bn' % block_id)(x)
    return layers.ReLU(6., name='conv_pw_%d_relu' % block_id)(x)


================================================
FILE: axelerate/networks/segnet/__init__.py
================================================


================================================
FILE: axelerate/networks/segnet/data_utils/__init__.py
================================================


================================================
FILE: axelerate/networks/segnet/data_utils/data_loader.py
================================================
import os
import numpy as np
np.random.seed(1337)
from tensorflow.keras.utils import Sequence
from axelerate.networks.common_utils.augment import process_image_segmentation
import glob
import itertools
import random
import six
import cv2

try:
    from tqdm import tqdm
except ImportError:
    print("tqdm not found, disabling progress bars")
    def tqdm(iterable):
        return iterable


from ..models.config import IMAGE_ORDERING

DATA_LOADER_SEED = 0

random.seed(DATA_LOADER_SEED)
class_colors = [(random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)) for _ in range(5000)]

class DataLoaderError(Exception):
    pass

def get_pairs_from_paths(images_path, segs_path, ignore_non_matching=True):
    """ Find all the images from the images_path directory and
        the segmentation images from the segs_path directory
        while checking integrity of data """

    ACCEPTABLE_IMAGE_FORMATS = [".jpg", ".jpeg", ".png" , ".bmp"]
    ACCEPTABLE_SEGMENTATION_FORMATS = [".png", ".bmp"]

    image_files = []
    segmentation_files = {}

    for dir_entry in os.listdir(images_path):
        if os.path.isfile(os.path.join(images_path, dir_entry)) and \
                os.path.splitext(dir_entry)[1] in ACCEPTABLE_IMAGE_FORMATS:
            file_name, file_extension = os.path.splitext(dir_entry)
            image_files.append((file_name, file_extension, os.path.join(images_path, dir_entry)))

    for dir_entry in os.listdir(segs_path):
        if os.path.isfile(os.path.join(segs_path, dir_entry)) and \
                os.path.splitext(dir_entry)[1] in ACCEPTABLE_SEGMENTATION_FORMATS:
            file_name, file_extension = os.path.splitext(dir_entry)
            if file_name in segmentation_files:
                raise DataLoaderError("Segmentation file with filename {0} already exists and is ambiguous to resolve with path {1}. Please remove or rename the latter.".format(file_name, os.path.join(segs_path, dir_entry)))
            segmentation_files[file_name] = (file_extension, os.path.join(segs_path, dir_entry))

    return_value = []
    # Match the images and segmentations
    for image_file, _, image_full_path in image_files:
        if image_file in segmentation_files:
            return_value.append((image_full_path, segmentation_files[image_file][1]))
        elif ignore_non_matching:
            print("No corresponding segmentation found for image {0}.".format(image_full_path))
            continue
        else:
            # Error out
            raise DataLoaderError("No corresponding segmentation found for image {0}.".format(image_full_path))

    return return_value
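
# --- Illustrative usage (not part of the original file; paths hypothetical) ---
# Images and masks are matched on their file stem, so "imgs/0001.jpg" pairs
# with "masks/0001.png":
#   pairs = get_pairs_from_paths("data/imgs", "data/masks")
#   # -> [("data/imgs/0001.jpg", "data/masks/0001.png"), ...]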


def get_image_array(image_input, norm, ordering='channels_first'):
    """ Load image array from input """
    if isinstance(image_input, np.ndarray):
        # It is already an array, use it as it is
        img = image_input
    elif isinstance(image_input, six.string_types):
        if not os.path.isfile(image_input):
            raise DataLoaderError("get_image_array: path {0} doesn't exist".format(image_input))
        img = cv2.imread(image_input, 1)
    else:
        raise DataLoaderError("get_image_array: Can't process input type {0}".format(str(type(image_input))))

    if norm:
        img = norm(img)

    if ordering == 'channels_first':
        img = np.rollaxis(img, 2, 0)
    return img


def get_segmentation_array(image_input, nClasses, no_reshape=True):
    """ Load segmentation array from input and one-hot encode it """

    if isinstance(image_input, np.ndarray):
        # It is already an array, use it as it is
        img = image_input
    elif isinstance(image_input, six.string_types):
        if not os.path.isfile(image_input):
            raise DataLoaderError("get_segmentation_array: path {0} doesn't exist".format(image_input))
        img = cv2.imread(image_input, 1)
    else:
        raise DataLoaderError("get_segmentation_array: Can't process input type {0}".format(str(type(image_input))))

    # Class indices are stored in the first channel of the mask
    img = img[:, :, 0]
    height, width = img.shape[:2]

    # One binary plane per class (one-hot encoding)
    seg_labels = np.zeros((height, width, nClasses))
    for c in range(nClasses):
        seg_labels[:, :, c] = (img == c).astype(int)

    if not no_reshape:
        seg_labels = np.reshape(seg_labels, (width * height, nClasses))

    return seg_labels
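
# --- Illustrative check (not part of the original file) ---
# The mask's first channel holds per-pixel class indices; the result is a
# one-hot tensor of shape (height, width, nClasses).
def _demo_get_segmentation_array():
    mask = np.zeros((4, 4, 3), dtype=np.uint8)
    mask[:2, :2, 0] = 1  # top-left quadrant belongs to class 1
    onehot = get_segmentation_array(mask, nClasses=2)
    assert onehot.shape == (4, 4, 2)
    assert onehot[0, 0, 1] == 1 and onehot[3, 3, 0] == 1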


def verify_segmentation_dataset(images_path, segs_path, n_classes, show_all_errors=False):
    try:
        img_seg_pairs = get_pairs_from_paths(images_path, segs_path)
        if not len(img_seg_pairs):
            print("Couldn't load any data from images_path: {0} and segmentations path: {1}".format(images_path, segs_path))
            return False

        return_value = True
        for im_fn, seg_fn in tqdm(img_seg_pairs):
            img = cv2.imread(im_fn)
            seg = cv2.imread(seg_fn)
            # Check dimensions match
            if img.shape != seg.shape:
                return_value = False
                print("The size of image {0} and its segmentation {1} doesn't match (possibly the files are corrupt).".format(im_fn, seg_fn))
                if not show_all_errors:
                    break
            else:
                max_pixel_value = np.max(seg[:, :, 0])
                if max_pixel_value >= n_classes:
                    return_value = False
                    print("The pixel values of the segmentation image {0} violate the range [0, {1}]. Found maximum pixel value {2}".format(seg_fn, str(n_classes - 1), max_pixel_value))
                    if not show_all_errors:
                        break
        if return_value:
            print("Dataset verified! ")
        else:
            print("Dataset not verified!")
        return return_value
    except DataLoaderError as e:
        print("Found error during data loading\n{0}".format(str(e)))
        return False
        
        
def create_batch_generator(images_path, segs_path,
                           input_size=(224, 224),
                           output_size=(112, 112),
                           n_classes=51,
                           batch_size=8,
                           repeat_times=1,
                           do_augment=False,
                           norm=None):

    worker = BatchGenerator(images_path, segs_path, batch_size,
                            n_classes, input_size, output_size, repeat_times,
                            do_augment, norm)
    return worker


class BatchGenerator(Sequence):
    def __init__(self,
                 images_path, segs_path, batch_size,
                 n_classes, input_size, output_size, repeat_times,
                 do_augment=False, norm=None):
        self.norm = norm
        self.n_classes = n_classes
        self.input_size = input_size
        self.output_size = output_size
        self.do_augment = do_augment
        self._repeat_times = repeat_times
        self._batch_size = batch_size
        self.img_seg_pairs = get_pairs_from_paths(images_path, segs_path)
        random.shuffle(self.img_seg_pairs)
        self.zipped = itertools.cycle(self.img_seg_pairs)
        self.counter = 0

    def __len__(self):
        return int(len(self.img_seg_pairs) * self._repeat_times / self._batch_size)

    def __getitem__(self, idx):
        """
        # Args
            idx : batch index
        """
        x_batch = []
        y_batch = []
        for i in range(self._batch_size):
            img, seg = next(self.zipped)
            img = cv2.imread(img, 1)[...,::-1]
            seg = cv2.imread(seg, 1)

            im, seg = process_image_segmentation(img, seg, self.input_size[0], self.input_size[1], self.output_size[0], self.output_size[1], self.do_augment)

            x_batch.append(get_image_array(im, self.norm, ordering=IMAGE_ORDERING))
            y_batch.append(get_segmentation_array(seg, self.n_classes))

        x_batch = np.array(x_batch)
        y_batch = np.array(y_batch)
        self.counter += 1
        return x_batch, y_batch

    def on_epoch_end(self):
        self.counter = 0
        random.shuffle(self.img_seg_pairs)
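
# --- Illustrative usage (not part of the original file; paths hypothetical) ---
# input_size and output_size are (width, height) tuples; indexing the
# generator yields one (images, one-hot masks) batch:
#   gen = create_batch_generator("data/imgs", "data/masks",
#                                input_size=(224, 224), output_size=(112, 112),
#                                n_classes=2, batch_size=8)
#   x, y = gen[0]  # x: (8, 224, 224, 3), y: (8, 112, 112, 2)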


================================================
FILE: axelerate/networks/segnet/frontend_segnet.py
================================================
import os
import numpy as np
import cv2
import time
from tqdm import tqdm

from axelerate.networks.segnet.data_utils.data_loader import create_batch_generator, verify_segmentation_dataset
from axelerate.networks.common_utils.feature import create_feature_extractor
from axelerate.networks.common_utils.fit import train
from axelerate.networks.segnet.models.segnet import mobilenet_segnet, squeezenet_segnet, full_yolo_segnet, tiny_yolo_segnet, nasnetmobile_segnet, resnet50_segnet, densenet121_segnet

def masked_categorical_crossentropy(gt, pr):
    from tensorflow.keras.losses import categorical_crossentropy
    mask = 1 - gt[:, :, 0]
    return categorical_crossentropy(gt, pr) * mask
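
# --- Illustrative note (not part of the original file) ---
# With one-hot ground truth, gt[:, :, 0] == 1 exactly where the true class is
# 0, so the mask zeroes the loss there: class 0 acts as an "ignore" label
# (see ignore_zero_class in Segnet.train below).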

def create_segnet(architecture, input_size, n_classes, weights = None):

    if architecture == 'NASNetMobile':
        model = nasnetmobile_segnet(n_classes, input_size, encoder_level=4, weights = weights)
    elif architecture == 'SqueezeNet':
        model = squeezenet_segnet(n_classes, input_size, encoder_level=4, weights = weights)
    elif architecture == 'Full Yolo':
        model = full_yolo_segnet(n_classes, input_size, encoder_level=4, weights = weights)
    elif architecture == 'Tiny Yolo':
        model = tiny_yolo_segnet(n_classes, input_size, encoder_level=4, weights = weights)
    elif architecture == 'DenseNet121':
        model = densenet121_segnet(n_classes, input_size, encoder_level=4, weights = weights)
    elif architecture == 'ResNet50':
        model = resnet50_segnet(n_classes, input_size, encoder_level=4, weights = weights)
    elif 'MobileNet' in architecture:
        model = mobilenet_segnet(n_classes, input_size, encoder_level=4, weights = weights, architecture = architecture)
    else:
        raise ValueError("Unknown architecture for SegNet: {0}".format(architecture))

    output_size = (model.output_height, model.output_width)
    network = Segnet(model, input_size, n_classes, model.normalize, output_size)

    return network
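
# --- Illustrative usage (not part of the original file; values hypothetical) ---
#   segnet = create_segnet("MobileNet7_5", input_size=(224, 224), n_classes=2)
#   segnet.train("data/imgs", "data/masks", nb_epoch=50, project_folder="proj",
#                valid_img_folder="data/imgs_val", valid_ann_folder="data/masks_val")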

class Segnet(object):
    def __init__(self,
                 network,
                 input_size,
                 n_classes,
                 norm,
                 output_size):
        self.network = network       
        self.n_classes = n_classes
        self.input_size = input_size
        self.output_size = output_size
        self.norm = norm

    def load_weights(self, weight_path, by_name=False):
        if os.path.exists(weight_path):
            print("Loading pre-trained weights for the whole model: ", weight_path)
            self.network.load_weights(weight_path)
        else:
            print("Failed to load pre-trained weights for the whole model: either no weights were specified or the weight file could not be found")

    def predict(self, image):

        start_time = time.time()
        Y_pred = np.squeeze(self.network.predict(image))
        elapsed_ms = (time.time() - start_time)  * 1000

        y_pred = np.argmax(Y_pred, axis = 2)

        return elapsed_ms, y_pred


    def evaluate(self, img_folder, ann_folder, batch_size):

        self.generator = create_batch_generator(img_folder, ann_folder, self.input_size, 
                                                self.output_size, self.n_classes, 
                                                batch_size, 1, False, self.norm)
        tp = np.zeros(self.n_classes)
        fp = np.zeros(self.n_classes)
        fn = np.zeros(self.n_classes)
        n_pixels = np.zeros(self.n_classes)
        
        for inp, gt in tqdm(list(self.generator)):
            y_pred = self.network.predict(inp)

            y_pred = np.argmax(y_pred, axis=-1)
            gt = np.argmax(gt, axis=-1)

            for cl_i in range(self.n_classes):
                tp[cl_i] += np.sum((y_pred == cl_i) * (gt == cl_i))
                fp[cl_i] += np.sum((y_pred == cl_i) * (gt != cl_i))
                fn[cl_i] += np.sum((y_pred != cl_i) * (gt == cl_i))
                n_pixels[cl_i] += np.sum(gt == cl_i)

        # Per-class IoU = TP / (TP + FP + FN); the epsilon guards against division by zero
        cl_wise_score = tp / (tp + fp + fn + 1e-12)
        n_pixels_norm = n_pixels / np.sum(n_pixels)
        frequency_weighted_IU = np.sum(cl_wise_score * n_pixels_norm)
        mean_IU = np.mean(cl_wise_score)
        report = {"frequency_weighted_IU": frequency_weighted_IU, "mean_IU": mean_IU, "class_wise_IU": cl_wise_score}
        return report

    def train(self,
              img_folder,
              ann_folder,
              nb_epoch,
              project_folder,
              batch_size=8,
              do_augment=False,
              learning_rate=1e-4, 
              train_times=1,
              valid_times=1,
              valid_img_folder="",
              valid_ann_folder="",
              first_trainable_layer=None,
              ignore_zero_class=False,
              metrics='loss'):
        
        if metrics != "accuracy" and metrics != "loss":
            print("Unknown metric for SegNet, valid options are: accuracy or loss. Defaulting to loss")
            metrics = "loss"

        if ignore_zero_class:
            loss_k = masked_categorical_crossentropy
        else:
            loss_k = 'categorical_crossentropy'
        train_generator = create_batch_generator(img_folder, ann_folder, self.input_size,
                                                 self.output_size, self.n_classes, batch_size,
                                                 train_times, do_augment, self.norm)

        validation_generator = create_batch_generator(valid_img_folder, valid_ann_folder, self.input_size,
                                                      self.output_size, self.n_classes, batch_size,
                                                      valid_times, False, self.norm)

        return train(self.network,
                     loss_k,
                     train_generator,
                     validation_generator,
                     learning_rate,
                     nb_epoch,
                     project_folder,
                     first_trainable_layer,
                     metric_name=metrics)
    


================================================
FILE: axelerate/networks/segnet/metrics.py
================================================
import numpy as np

EPS = 1e-12

def get_iou(gt, pr, n_classes):
    class_wise = np.zeros(n_classes)
    for cl in range(n_classes):
        intersection = np.sum((gt == cl)*(pr == cl))
        union = np.sum(np.maximum((gt == cl), (pr == cl)))
        iou = float(intersection)/(union + EPS)
        class_wise[cl] = iou
    return class_wise
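
# --- Illustrative check (not part of the original file) ---
# Per-class IoU = |intersection| / |union| of the predicted and ground-truth
# pixel sets for that class.
def _demo_get_iou():
    gt = np.array([[0, 0], [1, 1]])
    pr = np.array([[0, 1], [1, 1]])
    iou = get_iou(gt, pr, n_classes=2)
    assert abs(iou[0] - 0.5) < 1e-6    # class 0: 1 shared pixel / 2 in union
    assert abs(iou[1] - 2 / 3) < 1e-6  # class 1: 2 shared pixels / 3 in union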


================================================
FILE: axelerate/networks/segnet/models/__init__.py
================================================


================================================
FILE: axelerate/networks/segnet/models/_pspnet_2.py
================================================
# This code was provided by Vladkryvoruchko, with small modifications by me.

from math import ceil
from sys import exit
from keras import layers
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers import BatchNormalization, Activation, Input, Dropout, \
    ZeroPadding2D, Lambda
from keras.layers import Concatenate, Add
from keras.models import Model
from keras.optimizers import SGD
import tensorflow as tf

from .config import IMAGE_ORDERING
from .model_utils import get_segmentation_model, resize_image


learning_rate = 1e-3  # Layer specific learning rate
# Weight decay not implemented


def BN(name=""):
    return BatchNormalization(momentum=0.95, name=name, epsilon=1e-5)


class Interp(layers.Layer):

    def __init__(self, new_size, **kwargs):
        self.new_size = new_size
        super(Interp, self).__init__(**kwargs)

    def build(self, input_shape):
        super(Interp, self).build(input_shape)

    def call(self, inputs, **kwargs):
        new_height, new_width = self.new_size
        try:
            resized = tf.image.resize(inputs, [new_height, new_width])
        except AttributeError:
            resized = tf.image.resize_images(inputs, [new_height, new_width],
                                             align_corners=True)
        return resized

    def compute_output_shape(self, input_shape):
        return tuple([None,
                      self.new_size[0],
                      self.new_size[1],
                      input_shape[3]])

    def get_config(self):
        config = super(Interp, self).get_config()
        config['new_size'] = self.new_size
        return config
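
# --- Illustrative note (not part of the original file) ---
# Returning new_size from get_config() makes the custom layer serializable:
# a model containing Interp can then be reloaded with
# keras.models.load_model(path, custom_objects={'Interp': Interp}).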


# def Interp(x, shape):
#    new_height, new_width = shape
#    resized = tf.image.resize_images(x, [new_height, new_width],
#                                      align_corners=True)
#    return resized


def residual_conv(prev, level, pad=1, lvl=1, sub_lvl=1, modify_stride=False):
    lvl = str(lvl)
    sub_lvl = str(sub_lvl)
    names = ["conv" + lvl + "_" + sub_lvl + "_1x1_reduce",
             "conv" + lvl + "_" + sub_lvl + "_1x1_reduce_bn",
             "conv" + lvl + "_" + sub_lvl + "_3x3",
             "conv" + lvl + "_" + sub_lvl + "_3x3_bn",
             "conv" + lvl + "_" + sub_lvl + "_1x1_increase",
             "conv" + lvl + "_" + sub_lvl + "_1x1_increase_bn"]
    if not modify_stride:
        prev = Conv2D(64 * level, (1, 1), strides=(1, 1), name=names[0],
                      use_bias=False)(prev)
    else:
        prev = Conv2D(64 * level, (1, 1), strides=(2, 2), name=names[0],
                      use_bias=False)(prev)

    prev = BN(name=names[1])(prev)
    prev = Activation('relu')(prev)

    prev = ZeroPadding2D(padding=(pad, pad))(prev)
    prev = Conv2D(64 * level, (3, 3), strides=(1, 1), dilation_rate=pad,
                  name=names[2], use_bias=False)(prev)

    prev = BN(name=names[3])(prev)
    prev = Activation('relu')(prev)
    prev = Conv2D(256 * level, (1, 1), strides=(1, 1), name=names[4],
                  use_bias=False)(prev)
    prev = BN(name=names[5])(prev)
    return prev


def short_convolution_branch(prev, level, lvl=1, sub_lvl=1,
                             modify_stride=False):
    lvl = str(lvl)
    sub_lvl = str(sub_lvl)
    names = ["conv" + lvl + "_" + sub_lvl + "_1x1_proj",
             "conv" + lvl + "_" + sub_lvl + "_1x1_proj_bn"]

    if not modify_stride:
        prev = Conv2D(256 * level, (1, 1), strides=(1, 1), name=names[0],
                      use_bias=False)(prev)
    else:
        prev = Conv2D(256 * level, (1, 1), strides=(2, 2), name=names[0],
                      use_bias=False)(prev)

    prev = BN(name=names[1])(prev)
    return prev


def empty_branch(prev):
    return prev


def residual_short(prev_layer, level, pad=1, lvl=1, sub_lvl=1,
                   modify_stride=False):
    prev_layer = Activation('relu')(prev_layer)
    block_1 = residual_conv(prev_layer, level,
                            pad=pad, lvl=lvl, sub_lvl=sub_lvl,
                            modify_stride=modify_stride)

    block_2 = short_convolution_branch(prev_layer, level,
                                       lvl=lvl, sub_lvl=sub_lvl,
                                       modify_stride=modify_stride)
    added = Add()([block_1, block_2])
    return added


def residual_empty(prev_layer, level, pad=1, lvl=1, sub_lvl=1):
    prev_layer = Activation('relu')(prev_layer)

    block_1 = residual_conv(prev_layer, level, pad=pad,
                            lvl=lvl, sub_lvl=sub_lvl)
    block_2 = empty_branch(prev_layer)
    added = Add()([block_1, block_2])
    return added


def ResNet(inp, layers):
    # Names for the first couple layers of model
    names = ["conv1_1_3x3_s2",
             "conv1_1_3x3_s2_bn",
             "conv1_2_3x3",
             "conv1_2_3x3_bn",
             "conv1_3_3x3",
             "conv1_3_3x3_bn"]

    # Short branch(only start of network)

    cnv1 = Conv2D(64, (3, 3), strides=(2, 2), padding='same', name=names[0],
                  use_bias=False)(inp)  # "conv1_1_3x3_s2"
    bn1 = BN(name=names[1])(cnv1)  # "conv1_1_3x3_s2/bn"
    relu1 = Activation('relu')(bn1)  # "conv1_1_3x3_s2/relu"

    cnv1 = Conv2D(64, (3, 3), strides=(1, 1), padding='same', name=names[2],
                  use_bias=False)(relu1)  # "conv1_2_3x3"
    bn1 = BN(name=names[3])(cnv1)  # "conv1_2_3x3/bn"
    relu1 = Activation('relu')(bn1)  # "conv1_2_3x3/relu"

    cnv1 = Conv2D(128, (3, 3), strides=(1, 1), padding='same', name=names[4],
                  use_bias=False)(relu1)  # "conv1_3_3x3"
    bn1 = BN(name=names[5])(cnv1)  # "conv1_3_3x3/bn"
    relu1 = Activation('relu')(bn1)  # "conv1_3_3x3/relu"

    res = MaxPooling2D(pool_size=(3, 3), padding='same',
                       strides=(2, 2))(relu1)  # "pool1_3x3_s2"

    # ---Residual layers(body of network)

    """
    modify_stride is used only once, in the first 3_1 convolution block:
    it changes the stride of the first convolution from 1 to 2.
    """

    # 2_1- 2_3
    res = residual_short(res, 1, pad=1, lvl=2, sub_lvl=1)
    for i in range(2):
        res = residual_empty(res, 1, pad=1, lvl=2, sub_lvl=i + 2)

    # 3_1 - 3_3
    res = residual_short(res, 2, pad=1, lvl=3, sub_lvl=1, modify_stride=True)
    for i in range(3):
        res = residual_empty(res, 2, pad=1, lvl=3, sub_lvl=i + 2)
    if layers == 50:
        # 4_1 - 4_6
        res = residual_short(res, 4, pad=2, lvl=4, sub_lvl=1)
        for i in range(5):
            res = residual_empty(res, 4, pad=2, lvl=4, sub_lvl=i + 2)
    elif layers == 101:
        # 4_1 - 4_23
        res = residual_short(res, 4, pad=2, lvl=4, sub_lvl=1)
        for i in range(22):
            res = residual_empty(res, 4, pad=2, lvl=4, sub_lvl=i + 2)
    else:
        raise ValueError("ResNet with {0} layers is not implemented".format(layers))

    # 5_1 - 5_3
    res = residual_short(res, 8, pad=4, lvl=5, sub_lvl=1)
    for i in range(2):
        res = residual_empty(res, 8, pad=4, lvl=5, sub_lvl=i + 2)

    res = Activation('relu')(res)
    return res


def interp_block(prev_layer, level, feature_map_shape, input_shape):
    if input_shape == (473, 473):
        kernel_strides_map = {1: 60,
                              2: 30,
                              3: 20,
                              6: 10}
    elif input_shape == (713, 713):
        kernel_strides_map = {1: 90,
                              2: 45,
                              3: 30,
                              6: 15}
    else:
        print("Pooling parameters for input shape ",
              input_shape, " are not defined.")
        exit(1)

    names = [
        "conv5_3_pool" + str(level) + "_conv",
        "conv5_3_pool" + str(level) + "_conv_bn"
    ]
    kernel = (kernel_strides_map[level], kernel_strides_map[level])
    strides = (kernel_strides_map[level], kernel_strides_map[level])
    prev_layer = AveragePooling2D(kernel, strides=strides)(prev_layer)
    prev_layer = Conv2D(512, (1, 1), strides=(1, 1), name=names[0],
                        use_bias=False)(prev_layer)
    prev_layer = BN(name=names[1])(prev_layer)
    prev_layer = Activation('relu')(prev_layer)
    # prev_layer = Lambda(Interp, arguments={
    #                    'shape': feature_map_shape})(prev_layer)
    prev_layer = Interp(feature_map_shape)(prev_layer)
    return prev_layer


def build_pyramid_pooling_module(res, input_shape):
    """Build the Pyramid Pooling Module."""
    # ---PSPNet concat layers with Interpolation
    feature_map_size = tuple(int(ceil(input_dim / 8.0))
                             for input_dim in input_shape)

    interp_block1 = interp_block(res, 1, feature_map_size, input_shape)
    interp_block2 = interp_block(res, 2, feature_map_size, input_shape)
    interp_block3 = interp_block(res, 3, feature_map_size, input_shape)
    interp_block6 = interp_block(res, 6, feature_map_size, input_shape)

    # Concatenate all these layers; resulting
    # shape = (1, feature_map_size_x, feature_map_size_y, 4096)
    res = Concatenate()([res,
                         interp_block6,
                         interp_block3,
                         interp_block2,
                         interp_block1])
    return res
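
# --- Illustrative note (not part of the original file) ---
# For a 473x473 input the backbone output is ceil(473/8) = 60 pixels per side,
# so the pool sizes {60, 30, 20, 10} above produce 1x1, 2x2, 3x3 and 6x6 bin
# grids -- the four PSPNet pyramid levels. Each pooled map is reduced to 512
# channels and resized back to 60x60 before concatenation with the
# 2048-channel backbone output, giving 2048 + 4 * 512 = 4096 channels.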


def _build_pspnet(nb_classes, resnet_layers, input_shape,
                  activation='softmax'):

    assert IMAGE_ORDERING == 'channels_last'

    inp = Input((input_shape[0], input_shape[1], 3))

    res = ResNet(inp, layers=resnet_layers)

    psp = build_pyramid_pooling_module(res, input_shape)

    x = Conv2D(512, (3, 3), strides=(1, 1), padding="same", name="conv5_4",
               use_bias=False)(psp)
    x = BN(name="conv5_4_bn")(x)
    x = Activation('relu')(x)
    x = Dropout(0.1)(x)

    x = Conv2D(nb_classes, (1, 1), strides=(1, 1), name="conv6")(x)
    # x = Lambda(Interp, arguments={'shape': (
    #    input_shape[0], input_shape[1])})(x)
    x = Interp([input_shape[0], input_shape[1]])(x)

    model = get_segmentation_model(inp, x)

    return model


================================================
FILE: axelerate/networks/segnet/models/all_models.py
================================================
from . import pspnet
from . import unet
from . import segnet
from . import fcn
model_from_name = {}


model_from_name["fcn_8"] = fcn.fcn_8
model_from_name["fcn_32"] = fcn.fcn_32
model_from_name["fcn_8_vgg"] = fcn.fcn_8_vgg
model_from_name["fcn_32_vgg"] = fcn.fcn_32_vgg
model_from_name["fcn_8_resnet50"] = fcn.fcn_8_resnet50
model_from_name["fcn_32_resnet50"] = fcn.fcn_32_resnet50
model_from_name["fcn_8_mobilenet"] = fcn.fcn_8_mobilenet
model_from_name["fcn_32_mobilenet"] = fcn.fcn_32_mobilenet


model_from_name["pspnet"] = pspnet.pspnet
model_from_name["vgg_pspnet"] = pspnet.vgg_pspnet
model_from_name["resnet50_pspnet"] = pspnet.resnet50_pspnet


model_from_name["pspnet_50"] = pspnet.pspnet_50
model_from_name["pspnet_101"] = pspnet.pspnet_101


# model_from_name["mobilenet_pspnet"] = pspnet.mobilenet_pspnet


model_from_name["unet_mini"] = unet.unet_mini
model_from_name["unet"] = unet.unet
model_from_name["vgg_unet"] = unet.vgg_unet
model_from_name["resnet50_unet"] = unet.resnet50_unet
model_from_name["mobilenet_unet"] = unet.mobilenet_unet


model_from_name["segnet"] = segnet.segnet
model_from_name["vgg_segnet"] = segnet.vgg_segnet
model_from_name["resnet50_segnet"] = segnet.resnet50_segnet
model_from_name["mobilenet_segnet"] = segnet.mobilenet_segnet
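
# --- Illustrative usage (not part of the original file) ---
# The registry maps a model name string to its constructor, e.g.:
#   model = model_from_name["unet_mini"](n_classes=21)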


================================================
FILE: axelerate/networks/segnet/models/basic_models.py
================================================
from keras.models import *
from keras.layers import *
import keras.backend as K

from .config import IMAGE_ORDERING

def vanilla_encoder(input_height=224,  input_width=224):

    kernel = 3
    filter_size = 64
    pad = 1
    pool_size = 2

    if IMAGE_ORDERING == 'channels_first':
        img_input = Input(shape=(3, input_height, input_width))
    elif IMAGE_ORDERING == 'channels_last':
        img_input = Input(shape=(input_height, input_width, 3))

    x = img_input
    levels = []

    x = (ZeroPadding2D((pad, pad), data_format=IMAGE_ORDERING))(x)
    x = (Conv2D(filter_size, (kernel, kernel),
                data_format=IMAGE_ORDERING, padding='valid'))(x)
    x = (BatchNormalization())(x)
    x = (Activation('relu'))(x)
    x = (MaxPooling2D((pool_size, pool_size), data_format=IMAGE_ORDERING))(x)
    levels.append(x)

    x = (ZeroPadding2D((pad, pad), data_format=IMAGE_ORDERING))(x)
    x = (Conv2D(128, (kernel, kernel), data_format=IMAGE_ORDERING,
         padding='valid'))(x)
    x = (BatchNormalization())(x)
    x = (Activation('relu'))(x)
    x = (MaxPooling2D((pool_size, pool_size), data_format=IMAGE_ORDERING))(x)
    levels.append(x)

    for _ in range(3):
        x = (ZeroPadding2D((pad, pad), data_format=IMAGE_ORDERING))(x)
        x = (Conv2D(256, (kernel, kernel),
                    data_format=IMAGE_ORDERING, padding='valid'))(x)
        x = (BatchNormalization())(x)
        x = (Activation('relu'))(x)
        x = (MaxPooling2D((pool_size, pool_size),
             data_format=IMAGE_ORDERING))(x)
        levels.append(x)

    return img_input, levels
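
# --- Illustrative usage (not part of the original file) ---
#   img_input, levels = vanilla_encoder(input_height=224, input_width=224)
#   f1, f2, f3, f4, f5 = levels  # feature maps at strides 2, 4, 8, 16, 32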


================================================
FILE: axelerate/networks/segnet/models/config.py
================================================
IMAGE_ORDERING_CHANNELS_LAST = "channels_last"
IMAGE_ORDERING_CHANNELS_FIRST = "channels_first"

# Default IMAGE_ORDERING = channels_last
IMAGE_ORDERING = IMAGE_ORDERING_CHANNELS_LAST

================================================
FILE: axelerate/networks/segnet/models/fcn.py
================================================
from keras.models import *
from keras.layers import *

from .config import IMAGE_ORDERING
from .model_utils import get_segmentation_model
from .vgg16 import get_vgg_encoder
from .mobilenet import get_mobilenet_encoder
from .basic_models import vanilla_encoder
from .resnet50 import get_resnet50_encoder


# Crop o1 and o2 so their spatial dimensions match (the larger one is cropped)
def crop(o1, o2, i):
    o_shape2 = Model(i, o2).output_shape

    if IMAGE_ORDERING == 'channels_first':
        output_height2 = o_shape2[2]
        output_width2 = o_shape2[3]
    else:
        output_height2 = o_shape2[1]
        output_width2 = o_shape2[2]

    o_shape1 = Model(i, o1).output_shape
    if IMAGE_ORDERING == 'channels_first':
        output_height1 = o_shape1[2]
        output_width1 = o_shape1[3]
    else:
        output_height1 = o_shape1[1]
        output_width1 = o_shape1[2]

    cx = abs(output_width1 - output_width2)
    cy = abs(output_height2 - output_height1)

    if output_width1 > output_width2:
        o1 = Cropping2D(cropping=((0, 0),  (0, cx)),
                        data_format=IMAGE_ORDERING)(o1)
    else:
        o2 = Cropping2D(cropping=((0, 0),  (0, cx)),
                        data_format=IMAGE_ORDERING)(o2)

    if output_height1 > output_height2:
        o1 = Cropping2D(cropping=((0, cy),  (0, 0)),
                        data_format=IMAGE_ORDERING)(o1)
    else:
        o2 = Cropping2D(cropping=((0, cy),  (0, 0)),
                        data_format=IMAGE_ORDERING)(o2)

    return o1, o2
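
# --- Illustrative note (not part of the original file) ---
# Transposed convolutions can overshoot the target size by a few pixels, so
# before each Add() in the FCN heads below, crop() trims the larger tensor on
# the right/bottom until both spatial shapes match.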


def fcn_8(n_classes, encoder=vanilla_encoder, input_height=416,
          input_width=608):

    img_input, levels = encoder(
        input_height=input_height,  input_width=input_width)
    [f1, f2, f3, f4, f5] = levels

    o = f5

    o = (Conv2D(4096, (7, 7), activation='relu',
                padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)
    o = (Conv2D(4096, (1, 1), activation='relu',
                padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)

    o = (Conv2D(n_classes,  (1, 1), kernel_initializer='he_normal',
                data_format=IMAGE_ORDERING))(o)
    o = Conv2DTranspose(n_classes, kernel_size=(4, 4),  strides=(
        2, 2), use_bias=False, data_format=IMAGE_ORDERING)(o)

    o2 = f4
    o2 = (Conv2D(n_classes,  (1, 1), kernel_initializer='he_normal',
                 data_format=IMAGE_ORDERING))(o2)

    o, o2 = crop(o, o2, img_input)

    o = Add()([o, o2])

    o = Conv2DTranspose(n_classes, kernel_size=(4, 4),  strides=(
        2, 2), use_bias=False, data_format=IMAGE_ORDERING)(o)
    o2 = f3
    o2 = (Conv2D(n_classes,  (1, 1), kernel_initializer='he_normal',
                 data_format=IMAGE_ORDERING))(o2)
    o2, o = crop(o2, o, img_input)
    o = Add()([o2, o])

    o = Conv2DTranspose(n_classes, kernel_size=(16, 16),  strides=(
        8, 8), use_bias=False, data_format=IMAGE_ORDERING)(o)

    model = get_segmentation_model(img_input, o)
    model.model_name = "fcn_8"
    return model
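
# --- Illustrative note (not part of the original file) ---
# fcn_8 above fuses the pool4 (f4) and pool3 (f3) skip connections with two
# 2x transposed convolutions before a final 8x upsample; fcn_32 below instead
# upsamples the coarsest features 32x in one step, which is why fcn_8
# recovers finer object boundaries.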


def fcn_32(n_classes, encoder=vanilla_encoder, input_height=416,
           input_width=608):

    img_input, levels = encoder(
        input_height=input_height,  input_width=input_width)
    [f1, f2, f3, f4, f5] = levels

    o = f5

    o = (Conv2D(4096, (7, 7), activation='relu',
                padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)
    o = (Conv2D(4096, (1, 1), activation='relu',
                padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)

    o = (Conv2D(n_classes,  (1, 1), kernel_initializer='he_normal',
                data_format=IMAGE_ORDERING))(o)
    o = Conv2DTranspose(n_classes, kernel_size=(64, 64),  strides=(
        32, 32), use_bias=False,  data_format=IMAGE_ORDERING)(o)

    model = get_segmentation_model(img_input, o)
    model.model_name = "fcn_32"
    return model


def fcn_8_vgg(n_classes,  input_height=416, input_width=608):
    model = fcn_8(n_classes, get_vgg_encoder,
                  input_height=input_height, input_width=input_width)
    model.model_name = "fcn_8_vgg"
    return model


def fcn_32_vgg(n_classes,  input_height=416, input_width=608):
    model = fcn_32(n_classes, get_vgg_encoder,
                   input_height=input_height, input_width=input_width)
    model.model_name = "fcn_32_vgg"
    return model


def fcn_8_resnet50(n_classes,  input_height=416, input_width=608):
    model = fcn_8(n_classes, get_resnet50_encoder,
                  input_height=input_height, input_width=input_width)
    model.model_name = "fcn_8_resnet50"
    return model


def fcn_32_resnet50(n_classes,  input_height=416, input_width=608):
    model = fcn_32(n_classes, get_resnet50_encoder,
                   input_height=input_height, input_width=input_width)
    model.model_name = "fcn_32_resnet50"
    return model


def fcn_8_mobilenet(n_classes,  input_height=416, input_width=608):
    model = fcn_8(n_classes, get_mobilenet_encoder,
                  input_height=input_height, input_width=input_width)
    model.model_name = "fcn_8_mobilenet"
    return model


def fcn_32_mobilenet(n_classes,  input_height=416, input_width=608):
    model = fcn_32(n_classes, get_mobilenet_encoder,
                   input_height=input_height, input_width=input_width)
    model.model_name = "fcn_32_mobilenet"
    return model


if __name__ == '__main__':
    m = fcn_8(101)
    m = fcn_32(101)


================================================
FILE: axelerate/networks/segnet/models/model.py
================================================
""" Definition for the generic Model class """

class Model:
    def __init__(self, n_classes, input_height=None, input_width=None):
        pass



================================================
FILE: axelerate/networks/segnet/models/model_utils.py
================================================
from types import MethodType

from tensorflow.keras.models import *
from tensorflow.keras.layers import *
import tensorflow.keras.backend as K
from tqdm import tqdm

from .config import IMAGE_ORDERING
SYMBOL INDEX (604 symbols across 69 files)

FILE: axelerate/evaluate.py
  function save_report (line 23) | def save_report(config, report, report_file):
  function show_image (line 30) | def show_image(filename):
  function prepare_image (line 39) | def prepare_image(img_path, network):
  function setup_evaluation (line 47) | def setup_evaluation(config, weights, threshold = None):

FILE: axelerate/infer.py
  function show_image (line 22) | def show_image(filename):
  function prepare_image (line 31) | def prepare_image(img_path, network, input_size):
  function find_imgs (line 39) | def find_imgs(folder):
  function setup_inference (line 46) | def setup_inference(config, weights, threshold = None, folder = None):

FILE: axelerate/networks/classifier/batch_gen.py
  function create_datagen (line 12) | def create_datagen(img_folder, batch_size, input_size, project_folder, a...
  class ImageDataAugmentor (line 35) | class ImageDataAugmentor(Sequence):
    method __init__ (line 56) | def __init__(self,
    method flow_from_directory (line 82) | def flow_from_directory(self,
    method transform_image (line 189) | def transform_image(self, image, desired_w, desired_h):

FILE: axelerate/networks/classifier/directory_iterator.py
  class DirectoryIterator (line 18) | class DirectoryIterator(BatchFromFilesMixin, Iterator):
    method __init__ (line 68) | def __init__(self,
    method filepaths (line 149) | def filepaths(self):
    method labels (line 153) | def labels(self):
    method sample_weight (line 157) | def sample_weight(self):

FILE: axelerate/networks/classifier/frontend_classifier.py
  function get_labels (line 15) | def get_labels(directory):
  function create_classifier (line 19) | def create_classifier(architecture, labels, input_size, layers, dropout,...
  class Classifier (line 38) | class Classifier(object):
    method __init__ (line 39) | def __init__(self,
    method load_weights (line 51) | def load_weights(self, weight_path, by_name=False):
    method save_bottleneck (line 58) | def save_bottleneck(self, model_path, bottleneck_layer):
    method predict (line 67) | def predict(self, img):
    method evaluate (line 80) | def evaluate(self, img_folder, batch_size):
    method train (line 100) | def train(self,

FILE: axelerate/networks/classifier/iterator.py
  class Iterator (line 23) | class Iterator(IteratorType):
    method __init__ (line 37) | def __init__(self, n, batch_size, shuffle, seed):
    method _set_index_array (line 48) | def _set_index_array(self):
    method __getitem__ (line 53) | def __getitem__(self, idx):
    method __len__ (line 68) | def __len__(self):
    method on_epoch_end (line 71) | def on_epoch_end(self):
    method reset (line 74) | def reset(self):
    method _flow_index (line 77) | def _flow_index(self):
    method __iter__ (line 99) | def __iter__(self):
    method __next__ (line 104) | def __next__(self, *args, **kwargs):
    method next (line 107) | def next(self):
    method _get_batches_of_transformed_samples (line 119) | def _get_batches_of_transformed_samples(self, index_array):
  class BatchFromFilesMixin (line 131) | class BatchFromFilesMixin():
    method set_processing_attrs (line 137) | def set_processing_attrs(self,
    method _get_batch_of_samples (line 214) | def _get_batch_of_samples(self, index_array, apply_standardization=True):
    method _get_batches_of_transformed_samples (line 274) | def _get_batches_of_transformed_samples(self, index_array):
    method show_batch (line 278) | def show_batch(self, rows:int=5, apply_standardization:bool=False, **p...
    method filepaths (line 312) | def filepaths(self):
    method labels (line 320) | def labels(self):
    method sample_weight (line 328) | def sample_weight(self):

FILE: axelerate/networks/classifier/utils.py
  function validate_filename (line 36) | def validate_filename(filename, white_list_formats):
  function save_img (line 50) | def save_img(path,
  function load_img (line 78) | def load_img(fname, color_mode='rgb', target_size=None, interpolation=cv...
  function list_pictures (line 106) | def list_pictures(directory, ext=('jpg', 'jpeg', 'bmp', 'png', 'ppm', 't...
  function _iter_valid_files (line 123) | def _iter_valid_files(directory, white_list_formats, follow_links):
  function _list_valid_filenames_in_directory (line 149) | def _list_valid_filenames_in_directory(directory, white_list_formats, sp...
  function array_to_img (line 196) | def array_to_img(x, data_format='channels_last', scale=True, dtype='floa...
  function img_to_array (line 249) | def img_to_array(img, data_format='channels_last', dtype='float32'):

FILE: axelerate/networks/common_utils/augment.py
  class ImgAugment (line 13) | class ImgAugment(object):
    method __init__ (line 14) | def __init__(self, w, h, jitter):
    method imread (line 25) | def imread(self, img_file, boxes, labels):
  function _to_bbs (line 53) | def _to_bbs(boxes, labels, shape):
  function _to_array (line 62) | def _to_array(bbs):
  function process_image_detection (line 76) | def process_image_detection(image, boxes, labels, desired_w, desired_h, ...
  function process_image_classification (line 102) | def process_image_classification(image, desired_w, desired_h, augment):
  function process_image_segmentation (line 117) | def process_image_segmentation(image, segmap, input_w, input_h, output_w...
  function _create_augment_pipeline (line 134) | def _create_augment_pipeline():
  function visualize_detection_dataset (line 168) | def visualize_detection_dataset(img_folder, ann_folder, num_imgs = None,...
  function visualize_segmentation_dataset (line 202) | def visualize_segmentation_dataset(images_path, segs_path, num_imgs = No...
  function visualize_classification_dataset (line 252) | def visualize_classification_dataset(img_folder, num_imgs = None, img_si...

FILE: axelerate/networks/common_utils/callbacks.py
  function cosine_decay_with_warmup (line 5) | def cosine_decay_with_warmup(global_step,
  class WarmUpCosineDecayScheduler (line 55) | class WarmUpCosineDecayScheduler(keras.callbacks.Callback):
    method __init__ (line 59) | def __init__(self,
    method on_epoch_end (line 91) | def on_epoch_end(self, epoch, logs={}):
    method on_batch_end (line 95) | def on_batch_end(self, batch, logs=None):
    method on_batch_begin (line 100) | def on_batch_begin(self, batch, logs=None):

FILE: axelerate/networks/common_utils/convert.py
  function run_command (line 18) | def run_command(cmd, cwd=None):
  class Converter (line 28) | class Converter(object):
    method __init__ (line 29) | def __init__(self, converter_type, backend=None, dataset_path=None):
    method edgetpu_dataset_gen (line 77) | def edgetpu_dataset_gen(self):
    method k210_dataset_gen (line 93) | def k210_dataset_gen(self):
    method convert_edgetpu (line 114) | def convert_edgetpu(self, model_path):
    method convert_k210 (line 122) | def convert_k210(self, model_path):
    method convert_ir (line 133) | def convert_ir(self, model_path, model_layers):
    method convert_oak (line 142) | def convert_oak(self, model_path):
    method convert_onnx (line 149) | def convert_onnx(self, model):
    method convert_tflite (line 154) | def convert_tflite(self, model, model_layers, target=None):
    method convert_model (line 194) | def convert_model(self, model_path):

FILE: axelerate/networks/common_utils/feature.py
  function create_feature_extractor (line 12) | def create_feature_extractor(architecture, input_size, weights = None):
  class BaseFeatureExtractor (line 47) | class BaseFeatureExtractor(object):
    method __init__ (line 51) | def __init__(self, input_size):
    method normalize (line 55) | def normalize(self, image):
    method get_input_size (line 58) | def get_input_size(self):
    method get_output_size (line 63) | def get_output_size(self, layer = None):
    method get_output_tensor (line 69) | def get_output_tensor(self, layer):
    method extract (line 72) | def extract(self, input_image):
  class FullYoloFeature (line 75) | class FullYoloFeature(BaseFeatureExtractor):
    method __init__ (line 77) | def __init__(self, input_size, weights=None):
    method normalize (line 215) | def normalize(self, image):
  class TinyYoloFeature (line 218) | class TinyYoloFeature(BaseFeatureExtractor):
    method __init__ (line 220) | def __init__(self, input_size, weights):
    method normalize (line 259) | def normalize(self, image):
  class MobileNetFeature (line 262) | class MobileNetFeature(BaseFeatureExtractor):
    method __init__ (line 264) | def __init__(self, input_size, weights, alpha):
    method normalize (line 284) | def normalize(self, image):
  class SqueezeNetFeature (line 291) | class SqueezeNetFeature(BaseFeatureExtractor):
    method __init__ (line 293) | def __init__(self, input_size, weights):
    method normalize (line 347) | def normalize(self, image):
  class DenseNet121Feature (line 357) | class DenseNet121Feature(BaseFeatureExtractor):
    method __init__ (line 359) | def __init__(self, input_size, weights):
    method normalize (line 373) | def normalize(self, image):
  class NASNetMobileFeature (line 377) | class NASNetMobileFeature(BaseFeatureExtractor):
    method __init__ (line 379) | def __init__(self, input_size, weights):
    method normalize (line 392) | def normalize(self, image):
  class ResNet50Feature (line 396) | class ResNet50Feature(BaseFeatureExtractor):
    method __init__ (line 398) | def __init__(self, input_size, weights):
    method normalize (line 412) | def normalize(self, image):

FILE: axelerate/networks/common_utils/fit.py
  function train (line 14) | def train(model,
  function _print_time (line 141) | def _print_time(process_time):

FILE: axelerate/networks/common_utils/mobilenet_sipeed/__init__.py
  function set_keras_submodules (line 13) | def set_keras_submodules(backend=None,
  function get_keras_submodule (line 29) | def get_keras_submodule(name):
  function get_submodules_from_kwargs (line 58) | def get_submodules_from_kwargs(kwargs):
  function correct_pad (line 69) | def correct_pad(backend, inputs, kernel_size):

FILE: axelerate/networks/common_utils/mobilenet_sipeed/imagenet_utils.py
  function _preprocess_numpy_input (line 21) | def _preprocess_numpy_input(x, data_format, mode, **kwargs):
  function _preprocess_symbolic_input (line 96) | def _preprocess_symbolic_input(x, data_format, mode, **kwargs):
  function preprocess_input (line 157) | def preprocess_input(x, data_format=None, mode='caffe', **kwargs):
  function decode_predictions (line 198) | def decode_predictions(preds, top=5, **kwargs):
  function _obtain_input_shape (line 240) | def _obtain_input_shape(input_shape,

FILE: axelerate/networks/common_utils/mobilenet_sipeed/mobilenet.py
  function preprocess_input (line 75) | def preprocess_input(x, **kwargs):
  function MobileNet (line 87) | def MobileNet(input_shape=None,
  function _conv_block (line 329) | def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)):
  function _depthwise_conv_block (line 390) | def _depthwise_conv_block(inputs, pointwise_conv_filters, alpha,

FILE: axelerate/networks/segnet/data_utils/data_loader.py
  function tqdm (line 16) | def tqdm(iter):
  class DataLoaderError (line 27) | class DataLoaderError(Exception):
  function get_pairs_from_paths (line 30) | def get_pairs_from_paths(images_path, segs_path, ignore_non_matching=True):
  function get_image_array (line 70) | def get_image_array(image_input, norm, ordering='channels_first'):
  function get_segmentation_array (line 90) | def get_segmentation_array(image_input, nClasses, no_reshape=True):
  function verify_segmentation_dataset (line 116) | def verify_segmentation_dataset(images_path, segs_path, n_classes, show_...
  function create_batch_generator (line 150) | def create_batch_generator(images_path, segs_path,
  class BatchGenerator (line 165) | class BatchGenerator(Sequence):
    method __init__ (line 166) | def __init__(self,
    method __len__ (line 182) | def __len__(self):
    method __getitem__ (line 185) | def __getitem__(self, idx):
    method on_epoch_end (line 207) | def on_epoch_end(self):

FILE: axelerate/networks/segnet/frontend_segnet.py
  function masked_categorical_crossentropy (line 12) | def masked_categorical_crossentropy(gt , pr ):
  function create_segnet (line 17) | def create_segnet(architecture, input_size, n_classes, weights = None):
  class Segnet (line 39) | class Segnet(object):
    method __init__ (line 40) | def __init__(self,
    method load_weights (line 52) | def load_weights(self, weight_path, by_name=False):
    method predict (line 59) | def predict(self, image):
    method evaluate (line 70) | def evaluate(self, img_folder, ann_folder, batch_size):
    method train (line 100) | def train(self,

FILE: axelerate/networks/segnet/metrics.py
  function get_iou (line 5) | def get_iou(gt, pr, n_classes):

FILE: axelerate/networks/segnet/models/_pspnet_2.py
  function BN (line 22) | def BN(name=""):
  class Interp (line 26) | class Interp(layers.Layer):
    method __init__ (line 28) | def __init__(self, new_size, **kwargs):
    method build (line 32) | def build(self, input_shape):
    method call (line 35) | def call(self, inputs, **kwargs):
    method compute_output_shape (line 44) | def compute_output_shape(self, input_shape):
    method get_config (line 50) | def get_config(self):
  function residual_conv (line 63) | def residual_conv(prev, level, pad=1, lvl=1, sub_lvl=1, modify_stride=Fa...
  function short_convolution_branch (line 94) | def short_convolution_branch(prev, level, lvl=1, sub_lvl=1,
  function empty_branch (line 112) | def empty_branch(prev):
  function residual_short (line 116) | def residual_short(prev_layer, level, pad=1, lvl=1, sub_lvl=1,
  function residual_empty (line 130) | def residual_empty(prev_layer, level, pad=1, lvl=1, sub_lvl=1):
  function ResNet (line 140) | def ResNet(inp, layers):
  function interp_block (line 207) | def interp_block(prev_layer, level, feature_map_shape, input_shape):
  function build_pyramid_pooling_module (line 240) | def build_pyramid_pooling_module(res, input_shape):
  function _build_pspnet (line 261) | def _build_pspnet(nb_classes, resnet_layers, input_shape,

FILE: axelerate/networks/segnet/models/basic_models.py
  function vanilla_encoder (line 7) | def vanilla_encoder(input_height=224,  input_width=224):

FILE: axelerate/networks/segnet/models/fcn.py
  function crop (line 13) | def crop(o1, o2, i):
  function fcn_8 (line 51) | def fcn_8(n_classes, encoder=vanilla_encoder, input_height=416,
  function fcn_32 (line 96) | def fcn_32(n_classes, encoder=vanilla_encoder, input_height=416,
  function fcn_8_vgg (line 122) | def fcn_8_vgg(n_classes,  input_height=416, input_width=608):
  function fcn_32_vgg (line 129) | def fcn_32_vgg(n_classes,  input_height=416, input_width=608):
  function fcn_8_resnet50 (line 136) | def fcn_8_resnet50(n_classes,  input_height=416, input_width=608):
  function fcn_32_resnet50 (line 143) | def fcn_32_resnet50(n_classes,  input_height=416, input_width=608):
  function fcn_8_mobilenet (line 150) | def fcn_8_mobilenet(n_classes,  input_height=416, input_width=608):
  function fcn_32_mobilenet (line 157) | def fcn_32_mobilenet(n_classes,  input_height=416, input_width=608):

FILE: axelerate/networks/segnet/models/model.py
  class Model (line 3) | class Model:
    method __init__ (line 4) | def __init__(self, n_classes, input_height=None, input_width=None):

FILE: axelerate/networks/segnet/models/model_utils.py
  function transfer_weights (line 14) | def transfer_weights(m1, m2, verbose=True):
  function resize_image (line 43) | def resize_image(inp,  s, data_format):
  function get_segmentation_model (line 67) | def get_segmentation_model(input, output):

FILE: axelerate/networks/segnet/models/pspnet.py
  function pool_block (line 21) | def pool_block(feats, pool_factor):
  function _pspnet (line 46) | def _pspnet(n_classes, encoder,  input_height=384, input_width=576):
  function pspnet (line 78) | def pspnet(n_classes,  input_height=384, input_width=576):
  function vgg_pspnet (line 86) | def vgg_pspnet(n_classes,  input_height=384, input_width=576):
  function resnet50_pspnet (line 94) | def resnet50_pspnet(n_classes,  input_height=384, input_width=576):
  function pspnet_50 (line 102) | def pspnet_50(n_classes,  input_height=473, input_width=473):
  function pspnet_101 (line 115) | def pspnet_101(n_classes,  input_height=473, input_width=473):

FILE: axelerate/networks/segnet/models/segnet.py
  function chopper (line 18) | def chopper(model, model_name, f):
  function segnet_decoder (line 21) | def segnet_decoder(f, n_classes, n_up=3):
  function _segnet (line 53) | def _segnet(n_classes, encoder_input, encoder_output,  input_height=416,...
  function full_yolo_segnet (line 60) | def full_yolo_segnet(n_classes, input_size, encoder_level, weights):
  function tiny_yolo_segnet (line 72) | def tiny_yolo_segnet(n_classes, input_size, encoder_level, weights):
  function squeezenet_segnet (line 84) | def squeezenet_segnet(n_classes, input_size, encoder_level, weights):
  function densenet121_segnet (line 95) | def densenet121_segnet(n_classes, input_size, encoder_level, weights):
  function nasnetmobile_segnet (line 106) | def nasnetmobile_segnet(n_classes, input_size, encoder_level, weights):
  function resnet50_segnet (line 117) | def resnet50_segnet(n_classes, input_size, encoder_level, weights):
  function mobilenet_segnet (line 129) | def mobilenet_segnet(n_classes, input_size, encoder_level, weights, arch...

FILE: axelerate/networks/segnet/models/unet.py
  function unet_mini (line 18) | def unet_mini(n_classes, input_height=360, input_width=480):
  function _unet (line 69) | def _unet(n_classes, encoder, l1_skip_conn=True, input_height=416,
  function unet (line 111) | def unet(n_classes, input_height=416, input_width=608, encoder_level=3):
  function vgg_unet (line 119) | def vgg_unet(n_classes, input_height=416, input_width=608, encoder_level...
  function resnet50_unet (line 127) | def resnet50_unet(n_classes, input_height=416, input_width=608,
  function mobilenet_unet (line 136) | def mobilenet_unet(n_classes, input_height=224, input_width=224,

FILE: axelerate/networks/segnet/predict.py
  function model_from_checkpoint_path (line 20) | def model_from_checkpoint_path(checkpoints_path):
  function get_colored_segmentation_image (line 36) | def get_colored_segmentation_image(seg_arr, n_classes, colors=class_colo...
  function get_legends (line 47) | def get_legends(class_names,  colors=class_colors):
  function overlay_seg_image (line 62) | def overlay_seg_image(inp_img , seg_img):
  function concat_lenends (line 70) | def concat_lenends(  seg_img , legend_img  ):
  function visualize_segmentation (line 82) | def visualize_segmentation(seg_arr, inp_img=None, n_classes=None,
  function predict (line 115) | def predict(model=None, inp=None, out_fname=None, image = None, overlay_...
  function predict_multiple (line 134) | def predict_multiple(model=None, inps=None, inp_dir=None, out_dir=None,
  function evaluate (line 167) | def evaluate(model=None, inp_images=None, annotations=None, inp_images_d...
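
The visualization helpers above turn an (H, W) array of class indices into an RGB overlay. A self-contained sketch of the palette lookup that get_colored_segmentation_image performs, using a hypothetical random palette in place of predict.py's class_colors:

    import numpy as np

    def color_segmentation(seg_arr: np.ndarray, n_classes: int) -> np.ndarray:
        """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
        rng = np.random.default_rng(0)
        colors = rng.integers(0, 255, size=(n_classes, 3), dtype=np.uint8)
        return colors[seg_arr]  # fancy indexing broadcasts one colour per pixel

    mask = np.zeros((4, 4), dtype=np.int64)
    mask[2:, 2:] = 1
    print(color_segmentation(mask, n_classes=2).shape)  # (4, 4, 3)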

FILE: axelerate/networks/segnet/train.py
  function find_latest_checkpoint (line 8) | def find_latest_checkpoint(checkpoints_path, fail_safe=True):
  function masked_categorical_crossentropy (line 30) | def masked_categorical_crossentropy(gt , pr ):
  function train (line 38) | def train(model,
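
masked_categorical_crossentropy zeroes out the loss for pixels carrying an "ignore" label. A sketch of the idea; treating channel 0 as the ignore marker is an assumption carried over from the upstream keras-segmentation code this module derives from:

    import tensorflow as tf

    def masked_categorical_crossentropy(gt, pr):
        # Pixels whose one-hot target activates channel 0 contribute zero loss
        # (channel choice is an assumption, see note above).
        mask = 1.0 - gt[..., 0]
        return tf.keras.losses.categorical_crossentropy(gt, pr) * mask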

FILE: axelerate/networks/yolo/backend/batch_gen.py
  function create_batch_generator (line 12) | def create_batch_generator(annotations,
  class BatchGenerator (line 42) | class BatchGenerator(Sequence):
    method __init__ (line 43) | def __init__(self,
    method __len__ (line 67) | def __len__(self):
    method __getitem__ (line 70) | def __getitem__(self, idx):
    method on_epoch_end (line 116) | def on_epoch_end(self):
  class _YoloBox (line 120) | class _YoloBox(object):
    method __init__ (line 122) | def __init__(self, input_size, grid_size):
    method trans (line 126) | def trans(self, boxes):
  class _NetinGen (line 145) | class _NetinGen(object):
    method __init__ (line 146) | def __init__(self, input_size, norm):
    method run (line 150) | def run(self, image):
    method _set_norm (line 153) | def _set_norm(self, norm):
  class _NetoutGen (line 159) | class _NetoutGen(object):
    method __init__ (line 160) | def __init__(self,
    method run (line 168) | def run(self, norm_boxes, labels):
    method _set_tensor_shape (line 186) | def _set_tensor_shape(self, grid_size, nb_classes):
    method _xy_grid_index (line 190) | def _xy_grid_index(self, box_xy: np.ndarray, layer: int):
    method _fake_iou (line 211) | def _fake_iou(a: np.ndarray, b: np.ndarray) -> float:
    method _get_anchor_index (line 242) | def _get_anchor_index(self, wh: np.ndarray) -> np.ndarray:
    method box_to_label (line 259) | def box_to_label(self, true_box: np.ndarray) -> tuple:
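
_fake_iou and _get_anchor_index implement the usual YOLO anchor assignment: a ground-truth box is matched to the prior whose width/height overlap it best, as if both boxes were centred at the origin. A minimal numpy sketch of that matching:

    import numpy as np

    def best_anchor(wh: np.ndarray, anchors: np.ndarray) -> int:
        # "Fake" IoU: intersect only the sizes, ignoring box centres.
        inter = np.minimum(wh, anchors).prod(axis=1)
        union = wh.prod() + anchors.prod(axis=1) - inter
        return int(np.argmax(inter / union))

    print(best_anchor(np.array([0.3, 0.4]),
                      np.array([[0.1, 0.1], [0.3, 0.5], [0.9, 0.9]])))  # -> 1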

FILE: axelerate/networks/yolo/backend/decoder.py
  class YoloDecoder (line 5) | class YoloDecoder(object):
    method __init__ (line 7) | def __init__(self,
    method run (line 18) | def run(self, netout, obj_threshold):
  function _sigmoid (line 53) | def _sigmoid(x):
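
YoloDecoder.run maps the raw network tensor to boxes. A simplified decode loop, assuming the common YOLOv2 layout of (grid_h, grid_w, n_anchors, 5 + n_classes) with sigmoid x/y offsets and exponential w/h scales; the real decoder also handles class scores:

    import numpy as np

    def decode_grid(netout, anchors, obj_threshold=0.3):
        # Returns (cx, cy, w, h) boxes normalised to [0, 1].
        grid_h, grid_w = netout.shape[:2]
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        boxes = []
        for row in range(grid_h):
            for col in range(grid_w):
                for a, (aw, ah) in enumerate(anchors):
                    tx, ty, tw, th, to = netout[row, col, a, :5]
                    if sig(to) < obj_threshold:
                        continue  # objectness below threshold
                    cx = (col + sig(tx)) / grid_w   # cell offset -> image fraction
                    cy = (row + sig(ty)) / grid_h
                    w = aw * np.exp(tw) / grid_w    # prior scaled by exp(tw)
                    h = ah * np.exp(th) / grid_h
                    boxes.append((cx, cy, w, h))
        return boxes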

FILE: axelerate/networks/yolo/backend/loss.py
  function tf_xywh_to_all (line 10) | def tf_xywh_to_all(grid_pred_xy, grid_pred_wh, layer, params):
  function tf_xywh_to_grid (line 37) | def tf_xywh_to_grid(all_true_xy, all_true_wh, layer, params):
  function tf_reshape_box (line 62) | def tf_reshape_box(true_xy_A: tf.Tensor, true_wh_A: tf.Tensor, p_xy_A: t...
  function tf_iou (line 104) | def tf_iou(pred_xy: tf.Tensor, pred_wh: tf.Tensor, vaild_xy: tf.Tensor, ...
  function calc_ignore_mask (line 149) | def calc_ignore_mask(t_xy_A: tf.Tensor, t_wh_A: tf.Tensor, p_xy: tf.Tens...
  class Params (line 188) | class Params:
    method __init__ (line 190) | def __init__(self, obj_thresh, iou_thresh, obj_weight, noobj_weight, w...
    method _coordinate_offset (line 209) | def _coordinate_offset(anchors: np.ndarray, out_hw: np.ndarray) -> np....
    method _anchor_scale (line 232) | def _anchor_scale(anchors: np.ndarray, grid_wh: np.ndarray) -> np.array:
  function create_loss_fn (line 250) | def create_loss_fn(params, layer, batch_size):
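
tf_iou computes the overlap term that feeds calc_ignore_mask. A compact sketch of IoU for centre/size boxes, without reproducing loss.py's exact broadcasting layout:

    import tensorflow as tf

    def iou_xywh(a_xy, a_wh, b_xy, b_wh):
        # Boxes are (x, y) centres with (w, h) sizes; last axis has length 2.
        a_min, a_max = a_xy - a_wh / 2.0, a_xy + a_wh / 2.0
        b_min, b_max = b_xy - b_wh / 2.0, b_xy + b_wh / 2.0
        inter_wh = tf.maximum(tf.minimum(a_max, b_max) - tf.maximum(a_min, b_min), 0.0)
        inter = inter_wh[..., 0] * inter_wh[..., 1]
        union = a_wh[..., 0] * a_wh[..., 1] + b_wh[..., 0] * b_wh[..., 1] - inter
        return inter / (union + 1e-9)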

FILE: axelerate/networks/yolo/backend/network.py
  function create_yolo_network (line 9) | def create_yolo_network(architecture,
  class YoloNetwork (line 23) | class YoloNetwork(object):
    method __init__ (line 25) | def __init__(self,
    method _init_layers (line 82) | def _init_layers(self, layers):
    method load_weights (line 93) | def load_weights(self, weight_path, by_name):
    method forward (line 96) | def forward(self, image):
    method get_model (line 100) | def get_model(self, first_trainable_layer=None):
    method get_grid_size (line 103) | def get_grid_size(self):
    method get_normalize_func (line 109) | def get_normalize_func(self):

FILE: axelerate/networks/yolo/backend/utils/annotation.py
  function get_unique_labels (line 8) | def get_unique_labels(files):
  function get_train_annotations (line 18) | def get_train_annotations(labels,
  class PascalVocXmlParser (line 61) | class PascalVocXmlParser(object):
    method __init__ (line 64) | def __init__(self):
    method get_fname (line 67) | def get_fname(self, annotation_file):
    method get_path (line 80) | def get_path(self, annotation_file):
    method get_width (line 97) | def get_width(self, annotation_file):
    method get_height (line 111) | def get_height(self, annotation_file):
    method get_labels (line 125) | def get_labels(self, annotation_file):
    method get_boxes (line 142) | def get_boxes(self, annotation_file):
    method _root_tag (line 166) | def _root_tag(self, fname):
    method _tree (line 171) | def _tree(self, fname):
  function parse_annotation (line 175) | def parse_annotation(ann_dir, img_dir, labels_naming=[], is_only_detect=...
  class Annotation (line 219) | class Annotation(object):
    method __init__ (line 226) | def __init__(self, filename):
    method add_object (line 231) | def add_object(self, x1, y1, x2, y2, name):
  class Annotations (line 239) | class Annotations(object):
    method __init__ (line 240) | def __init__(self, label_namings):
    method n_classes (line 244) | def n_classes(self):
    method add (line 247) | def add(self, annotation):
    method shuffle (line 250) | def shuffle(self):
    method fname (line 253) | def fname(self, i):
    method boxes (line 257) | def boxes(self, i):
    method labels (line 261) | def labels(self, i):
    method code_labels (line 269) | def code_labels(self, i):
    method _valid_index (line 280) | def _valid_index(self, i):
    method __len__ (line 284) | def __len__(self):
    method __getitem__ (line 287) | def __getitem__(self, idx):
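
PascalVocXmlParser reads Pascal VOC annotation XML with xml.etree. A self-contained sketch of the get_boxes logic:

    from xml.etree.ElementTree import parse

    def get_boxes(annotation_file):
        # Each <object> element carries a <bndbox> with corner coordinates.
        root = parse(annotation_file).getroot()
        boxes = []
        for obj in root.findall("object"):
            bb = obj.find("bndbox")
            boxes.append(tuple(int(float(bb.find(tag).text))
                               for tag in ("xmin", "ymin", "xmax", "ymax")))
        return boxes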

FILE: axelerate/networks/yolo/backend/utils/box.py
  class BoundBox (line 4) | class BoundBox:
    method __init__ (line 5) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 14) | def get_label(self):
    method get_score (line 17) | def get_score(self):
    method iou (line 20) | def iou(self, bound_box):
    method as_centroid (line 25) | def as_centroid(self):
  function boxes_to_array (line 29) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 46) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function draw_scaled_boxes (line 75) | def draw_scaled_boxes(image, boxes, probs, labels, desired_size=400):
  function draw_boxes (line 92) | def draw_boxes(image, boxes, scores, classes, labels):
  function centroid_box_iou (line 124) | def centroid_box_iou(box1, box2):
  function to_centroid (line 153) | def to_centroid(minmax_boxes):
  function to_minmax (line 173) | def to_minmax(centroid_boxes):
  function create_anchor_boxes (line 188) | def create_anchor_boxes(anchors):
  function find_match_box (line 202) | def find_match_box(centroid_box, centroid_boxes):
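
The box utilities convert between centroid (cx, cy, w, h) and min/max (x1, y1, x2, y2) conventions throughout the pipeline. A vectorised sketch of that round trip:

    import numpy as np

    def to_minmax(centroid_boxes):
        cx, cy, w, h = np.asarray(centroid_boxes, dtype=float).T
        return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)

    def to_centroid(minmax_boxes):
        x1, y1, x2, y2 = np.asarray(minmax_boxes, dtype=float).T
        return np.stack([(x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1], axis=1)

    print(to_centroid(to_minmax([[0.5, 0.5, 0.2, 0.4]])))  # round-trips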

FILE: axelerate/networks/yolo/backend/utils/custom.py
  class Yolo_Precision (line 15) | class Yolo_Precision(Metric):
    method __init__ (line 16) | def __init__(self, thresholds=None, name=None, dtype=None):
    method update_state (line 30) | def update_state(self, y_true, y_pred, sample_weight=None):
    method result (line 44) | def result(self):
  class Yolo_Recall (line 48) | class Yolo_Recall(Metric):
    method __init__ (line 49) | def __init__(self, thresholds=None, name=None, dtype=None):
    method update_state (line 62) | def update_state(self, y_true, y_pred, sample_weight=None):
    method result (line 76) | def result(self):
  class MergeMetrics (line 79) | class MergeMetrics(tensorflow.keras.callbacks.Callback):
    method __init__ (line 81) | def __init__(self,
    method on_epoch_end (line 112) | def on_epoch_end(self, epoch, logs={}):

FILE: axelerate/networks/yolo/backend/utils/eval/_box_match.py
  class BoxMatcher (line 5) | class BoxMatcher(object):
    method __init__ (line 15) | def __init__(self, boxes1, boxes2, labels1=None, labels2=None):
    method match_idx_of_box1_idx (line 33) | def match_idx_of_box1_idx(self, box1_idx):
    method match_idx_of_box2_idx (line 57) | def match_idx_of_box2_idx(self, box2_idx):
    method _find (line 81) | def _find(self, input_idx, input_idx_list, output_idx_list):
    method _calc_maximun_ious (line 89) | def _calc_maximun_ious(self):
    method _calc (line 94) | def _calc(self, boxes, true_boxes, labels, true_labels):

FILE: axelerate/networks/yolo/backend/utils/eval/fscore.py
  function count_true_positives (line 4) | def count_true_positives(detect_boxes, true_boxes, detect_labels=None, t...
  function calc_score (line 23) | def calc_score(n_true_positives, n_truth, n_pred):
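
calc_score combines true-positive counts into precision, recall, and their harmonic mean. A sketch of the standard formulas; the exact return ordering in fscore.py is not reproduced here:

    def calc_score(n_true_positives, n_truth, n_pred):
        # Precision is measured against predictions, recall against ground truth.
        precision = n_true_positives / n_pred if n_pred else 0.0
        recall = n_true_positives / n_truth if n_truth else 0.0
        if precision + recall == 0.0:
            return 0.0, 0.0, 0.0
        fscore = 2.0 * precision * recall / (precision + recall)
        return fscore, precision, recall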

FILE: axelerate/networks/yolo/frontend.py
  function get_object_labels (line 20) | def get_object_labels(ann_directory):
  function create_yolo (line 25) | def create_yolo(architecture,
  class YOLO (line 51) | class YOLO(object):
    method __init__ (line 52) | def __init__(self,
    method load_weights (line 71) | def load_weights(self, weight_path, by_name=True):
    method predict (line 78) | def predict(self, image, height, width, threshold=0.3):
    method evaluate (line 108) | def evaluate(self, img_folder, ann_folder, batch_size):
    method train (line 121) | def train(self,
    method _get_loss_func (line 167) | def _get_loss_func(self, batch_size):
    method _get_batch_generator (line 170) | def _get_batch_generator(self, annotations, batch_size, repeat_times, ...

FILE: axelerate/train.py
  function train_from_config (line 30) | def train_from_config(config,project_folder):
  function setup_training (line 149) | def setup_training(config_file=None, config_dict=None):
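
setup_training is the package's public entry point: per the signature above it accepts either a path to a JSON config or an already-parsed dict. A minimal usage sketch that reuses one of the shipped configs rather than hand-writing the schema:

    import json
    from axelerate import setup_training

    # Each config under configs/ starts with a "model" section (see previews below).
    with open("configs/raccoon_detector.json") as f:
        config = json.load(f)

    setup_training(config_dict=config)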

FILE: example_scripts/arm_nn/box.py
  class BoundBox (line 6) | class BoundBox:
    method __init__ (line 7) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 16) | def get_label(self):
    method get_score (line 19) | def get_score(self):
    method iou (line 22) | def iou(self, bound_box):
    method as_centroid (line 27) | def as_centroid(self):
  function boxes_to_array (line 31) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 48) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function draw_scaled_boxes (line 77) | def draw_scaled_boxes(image, boxes, probs, labels, desired_size=400):
  function draw_boxes (line 94) | def draw_boxes(image, boxes, probs, labels):
  function centroid_box_iou (line 107) | def centroid_box_iou(box1, box2):
  function to_centroid (line 136) | def to_centroid(minmax_boxes):
  function to_minmax (line 154) | def to_minmax(centroid_boxes):
  function create_anchor_boxes (line 169) | def create_anchor_boxes(anchors):
  function find_match_box (line 183) | def find_match_box(centroid_box, centroid_boxes):

FILE: example_scripts/arm_nn/cv_utils.py
  function preprocess (line 17) | def preprocess(frame: np.ndarray, input_binding_info: tuple):
  function resize_with_aspect_ratio (line 46) | def resize_with_aspect_ratio(frame: np.ndarray, input_binding_info: tuple):
  function create_video_writer (line 74) | def create_video_writer(video: cv2.VideoCapture, video_path: str, output...
  function init_video_file_capture (line 104) | def init_video_file_capture(video_path: str, output_path: str):
  function init_video_stream_capture (line 127) | def init_video_stream_capture(video_source: int):
  function draw_bounding_boxes (line 144) | def draw_bounding_boxes(frame: np.ndarray, detections: list, resize_fact...
  function get_source_encoding_int (line 186) | def get_source_encoding_int(video_capture):

FILE: example_scripts/arm_nn/network_executor.py
  function create_network (line 11) | def create_network(model_file: str, backends: list, input_names: Tuple[s...
  function execute_network (line 68) | def execute_network(input_tensors: list, output_tensors: list, runtime, ...
  class ArmnnNetworkExecutor (line 86) | class ArmnnNetworkExecutor:
    method __init__ (line 88) | def __init__(self, model_file: str, backends: list):
    method run (line 100) | def run(self, input_tensors: list) -> List[np.ndarray]:

FILE: example_scripts/arm_nn/run_video_file.py
  function preprocess (line 30) | def preprocess(frame: np.ndarray, input_binding_info: tuple):
  function process_faces (line 58) | def process_faces(frame, detections, executor_kp, resize_factor):
  function draw_bounding_boxes (line 92) | def draw_bounding_boxes(frame: np.ndarray, detections: list, resize_fact...
  function main (line 138) | def main(args):

FILE: example_scripts/arm_nn/run_video_stream.py
  function preprocess (line 24) | def preprocess(frame: np.ndarray, input_binding_info: tuple):
  function process_faces (line 52) | def process_faces(frame, detections, executor_kp, resize_factor):
  function draw_bounding_boxes (line 86) | def draw_bounding_boxes(frame: np.ndarray, detections: list, resize_fact...
  function main (line 131) | def main(args):

FILE: example_scripts/arm_nn/yolov2.py
  function yolo_processing (line 13) | def yolo_processing(netout):
  function _sigmoid (line 74) | def _sigmoid(x):
  function _softmax (line 77) | def _softmax(x, axis=-1, t=-100.):
  function yolo_resize_factor (line 84) | def yolo_resize_factor(video: cv2.VideoCapture, input_binding_info: tuple):
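
_softmax here follows the guarded variant common to these example scripts (a sketch matching the keras-yolo2 lineage they derive from): shift by the max, rescale extremely negative logits so the smallest equals the floor t, then normalise:

    import numpy as np

    def _softmax(x, axis=-1, t=-100.0):
        x = x - np.max(x)                 # max logit becomes 0
        if np.min(x) < t:
            x = x / np.min(x) * t         # scale so the smallest logit is t
        e_x = np.exp(x)
        return e_x / e_x.sum(axis=axis, keepdims=True)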

FILE: example_scripts/edge_tpu/detector/box.py
  class BoundBox (line 7) | class BoundBox:
    method __init__ (line 8) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 17) | def get_label(self):
    method get_score (line 20) | def get_score(self):
    method iou (line 23) | def iou(self, bound_box):
    method as_centroid (line 28) | def as_centroid(self):
  function boxes_to_array (line 32) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 49) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function draw_scaled_boxes (line 78) | def draw_scaled_boxes(image, boxes, probs, labels, desired_size=400):
  function draw_boxes (line 95) | def draw_boxes(image, boxes, probs, labels):
  function centroid_box_iou (line 108) | def centroid_box_iou(box1, box2):
  function to_centroid (line 137) | def to_centroid(minmax_boxes):
  function to_minmax (line 155) | def to_minmax(centroid_boxes):
  function create_anchor_boxes (line 170) | def create_anchor_boxes(anchors):
  function find_match_box (line 184) | def find_match_box(centroid_box, centroid_boxes):

FILE: example_scripts/edge_tpu/detector/detector_video.py
  class Detector (line 11) | class Detector(object):
    method __init__ (line 13) | def __init__(self, label_file, model_file, threshold):
    method load_labels (line 21) | def load_labels(self, path):
    method preprocess (line 25) | def preprocess(self, img):
    method get_output_tensor (line 35) | def get_output_tensor(self, index):
    method detect_objects (line 41) | def detect_objects(self, image):
    method detect (line 52) | def detect(self, original_image):
    method run (line 76) | def run(self, netout):
  function _sigmoid (line 120) | def _sigmoid(x):
  function _softmax (line 123) | def _softmax(x, axis=-1, t=-100.):

FILE: example_scripts/oak/yolov2/box.py
  class BoundBox (line 7) | class BoundBox:
    method __init__ (line 8) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 17) | def get_label(self):
    method get_score (line 20) | def get_score(self):
    method iou (line 23) | def iou(self, bound_box):
    method as_centroid (line 28) | def as_centroid(self):
  function boxes_to_array (line 32) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 49) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function draw_scaled_boxes (line 78) | def draw_scaled_boxes(image, boxes, probs, labels, desired_size=400):
  function draw_boxes (line 95) | def draw_boxes(image, boxes, probs, labels):
  function centroid_box_iou (line 108) | def centroid_box_iou(box1, box2):
  function to_centroid (line 137) | def to_centroid(minmax_boxes):
  function to_minmax (line 155) | def to_minmax(centroid_boxes):
  function create_anchor_boxes (line 170) | def create_anchor_boxes(anchors):
  function find_match_box (line 184) | def find_match_box(centroid_box, centroid_boxes):

FILE: example_scripts/oak/yolov2/yolo.py
  function sigmoid (line 16) | def sigmoid(x):
  function calculate_overlap (line 20) | def calculate_overlap(x1, w1, x2, w2):
  function calculate_iou (line 27) | def calculate_iou(box, truth):
  function apply_nms (line 41) | def apply_nms(boxes):
  function post_processing (line 66) | def post_processing(output, label_list, threshold):
  function show_tiny_yolo (line 147) | def show_tiny_yolo(results, original_img, is_depth=0):

FILE: example_scripts/oak/yolov2/yolo_alt.py
  class Detector (line 9) | class Detector(object):
    method __init__ (line 11) | def __init__(self, label_file, model_file, threshold):
    method load_labels (line 16) | def load_labels(self, path):
    method parse (line 20) | def parse(self, original_image, tensor):
    method run (line 43) | def run(self, netout):
  function _sigmoid (line 87) | def _sigmoid(x):
  function _softmax (line 90) | def _softmax(x, axis=-1, t=-100.):

FILE: example_scripts/tensorflow_lite/classifier/base_camera.py
  class CameraEvent (line 12) | class CameraEvent(object):
    method __init__ (line 16) | def __init__(self):
    method wait (line 19) | def wait(self):
    method set (line 29) | def set(self):
    method clear (line 49) | def clear(self):
  class BaseCamera (line 54) | class BaseCamera(object):
    method __init__ (line 60) | def __init__(self):
    method get_frame (line 73) | def get_frame(self):
    method frames (line 84) | def frames():
    method _thread (line 89) | def _thread(cls):

FILE: example_scripts/tensorflow_lite/classifier/camera_opencv.py
  class Camera (line 5) | class Camera(BaseCamera):
    method set_video_source (line 9) | def set_video_source(source):
    method frames (line 13) | def frames():

FILE: example_scripts/tensorflow_lite/classifier/camera_pi.py
  class Camera (line 9) | class Camera(BaseCamera):
    method set_video_source (line 13) | def set_video_source(source):
    method frames (line 17) | def frames():

FILE: example_scripts/tensorflow_lite/classifier/classifier_file.py
  function load_labels (line 11) | def load_labels(path):
  class NetworkExecutor (line 15) | class NetworkExecutor(object):
    method __init__ (line 17) | def __init__(self, model_file):
    method get_output_tensors (line 24) | def get_output_tensors(self):
    method run (line 36) | def run(self, image):
  function main (line 44) | def main(args):

FILE: example_scripts/tensorflow_lite/classifier/classifier_stream.py
  function load_labels (line 13) | def load_labels(path):
  class NetworkExecutor (line 17) | class NetworkExecutor(object):
    method __init__ (line 19) | def __init__(self, model_file):
    method get_output_tensors (line 26) | def get_output_tensors(self):
    method run (line 38) | def run(self, image):
  class Classifier (line 46) | class Classifier(NetworkExecutor):
    method __init__ (line 48) | def __init__(self, label_file, model_file, top_k):
    method classify (line 57) | def classify(self, frame):
  function index (line 72) | def index():
  function gen (line 75) | def gen(camera):
  function video_feed (line 82) | def video_feed():
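
classifier_stream.py serves frames over HTTP using the classic Flask MJPEG pattern: gen() yields each JPEG frame as one part of a multipart response, and video_feed() wraps that generator. A sketch, assuming the Camera class shipped alongside these scripts (camera_opencv.py or camera_pi.py) is importable:

    from flask import Flask, Response
    from camera_opencv import Camera  # provided by this example folder

    app = Flask(__name__)

    def gen(camera):
        # Emit each JPEG frame as one part of a multipart HTTP response.
        while True:
            frame = camera.get_frame()  # JPEG bytes, per the BaseCamera contract
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + frame + b"\r\n")

    @app.route("/video_feed")
    def video_feed():
        return Response(gen(Camera()),
                        mimetype="multipart/x-mixed-replace; boundary=frame")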

FILE: example_scripts/tensorflow_lite/classifier/cv_utils.py
  function preprocess (line 13) | def preprocess(img):
  function decode_yolov2 (line 23) | def decode_yolov2(netout,
  function decode_yolov3 (line 64) | def decode_yolov3(netout,
  function decode_classifier (line 106) | def decode_classifier(netout, top_k=3):
  function decode_segnet (line 112) | def decode_segnet(netout, labels, class_colors):
  function get_legends (line 126) | def get_legends(class_names, colors):
  function overlay_seg_image (line 138) | def overlay_seg_image(inp_img, seg_img):
  function concat_lenends (line 146) | def concat_lenends(seg_img, legend_img):
  function _sigmoid (line 152) | def _sigmoid(x):
  function _softmax (line 155) | def _softmax(x, axis=-1, t=-100.):
  function resize_with_aspect_ratio (line 162) | def resize_with_aspect_ratio(frame: np.ndarray, input_binding_info: tuple):
  function create_video_writer (line 190) | def create_video_writer(video, video_path, output_name):
  function init_video_file_capture (line 217) | def init_video_file_capture(video_path, output_name):
  function draw_bounding_boxes (line 240) | def draw_bounding_boxes(frame, detections, labels=None, processing_funct...
  function draw_classification (line 301) | def draw_classification(frame, classifications, labels):
  function get_source_encoding_int (line 308) | def get_source_encoding_int(video_capture):
  class BoundBox (line 311) | class BoundBox:
    method __init__ (line 312) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 321) | def get_label(self):
    method get_score (line 324) | def get_score(self):
    method iou (line 327) | def iou(self, bound_box):
    method as_centroid (line 332) | def as_centroid(self):
  function boxes_to_array (line 336) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 352) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function centroid_box_iou (line 380) | def centroid_box_iou(box1, box2):
  function to_minmax (line 408) | def to_minmax(centroid_boxes):
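
decode_classifier reduces the classifier's score vector to its top-k entries. A sketch with an illustrative (index, score) return format:

    import numpy as np

    def decode_classifier(netout, top_k=3):
        scores = np.asarray(netout).squeeze()
        idx = np.argsort(scores)[::-1][:top_k]   # highest scores first
        return [(int(i), float(scores[i])) for i in idx]

    print(decode_classifier([0.1, 0.7, 0.2], top_k=2))  # [(1, 0.7), (2, 0.2)]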

FILE: example_scripts/tensorflow_lite/detector/base_camera.py
  class CameraEvent (line 12) | class CameraEvent(object):
    method __init__ (line 16) | def __init__(self):
    method wait (line 19) | def wait(self):
    method set (line 29) | def set(self):
    method clear (line 49) | def clear(self):
  class BaseCamera (line 54) | class BaseCamera(object):
    method __init__ (line 60) | def __init__(self):
    method get_frame (line 73) | def get_frame(self):
    method frames (line 84) | def frames():
    method _thread (line 89) | def _thread(cls):

FILE: example_scripts/tensorflow_lite/detector/camera_opencv.py
  class Camera (line 5) | class Camera(BaseCamera):
    method set_video_source (line 9) | def set_video_source(source):
    method frames (line 13) | def frames():

FILE: example_scripts/tensorflow_lite/detector/camera_pi.py
  class Camera (line 9) | class Camera(BaseCamera):
    method set_video_source (line 13) | def set_video_source(source):
    method frames (line 17) | def frames():

FILE: example_scripts/tensorflow_lite/detector/cv_utils.py
  function preprocess (line 13) | def preprocess(img):
  function decode_yolov2 (line 23) | def decode_yolov2(netout,
  function decode_yolov3 (line 64) | def decode_yolov3(netout,
  function decode_classifier (line 106) | def decode_classifier(netout, top_k=3):
  function decode_segnet (line 112) | def decode_segnet(netout, labels, class_colors):
  function get_legends (line 126) | def get_legends(class_names, colors):
  function overlay_seg_image (line 138) | def overlay_seg_image(inp_img, seg_img):
  function concat_lenends (line 146) | def concat_lenends(seg_img, legend_img):
  function _sigmoid (line 152) | def _sigmoid(x):
  function _softmax (line 155) | def _softmax(x, axis=-1, t=-100.):
  function resize_with_aspect_ratio (line 162) | def resize_with_aspect_ratio(frame: np.ndarray, input_binding_info: tuple):
  function create_video_writer (line 190) | def create_video_writer(video, video_path, output_name):
  function init_video_file_capture (line 217) | def init_video_file_capture(video_path, output_name):
  function draw_bounding_boxes (line 240) | def draw_bounding_boxes(frame, detections, labels=None, processing_funct...
  function draw_classification (line 301) | def draw_classification(frame, classifications, labels):
  function get_source_encoding_int (line 308) | def get_source_encoding_int(video_capture):
  class BoundBox (line 311) | class BoundBox:
    method __init__ (line 312) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 321) | def get_label(self):
    method get_score (line 324) | def get_score(self):
    method iou (line 327) | def iou(self, bound_box):
    method as_centroid (line 332) | def as_centroid(self):
  function boxes_to_array (line 336) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 352) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function centroid_box_iou (line 380) | def centroid_box_iou(box1, box2):
  function to_minmax (line 408) | def to_minmax(centroid_boxes):

FILE: example_scripts/tensorflow_lite/detector/detector_file.py
  function load_labels (line 11) | def load_labels(path):
  class NetworkExecutor (line 15) | class NetworkExecutor(object):
    method __init__ (line 17) | def __init__(self, model_file):
    method get_output_tensors (line 24) | def get_output_tensors(self):
    method run (line 36) | def run(self, image):
  function main (line 44) | def main(args, detector):

FILE: example_scripts/tensorflow_lite/detector/detector_stream.py
  function load_labels (line 13) | def load_labels(path):
  class NetworkExecutor (line 17) | class NetworkExecutor(object):
    method __init__ (line 19) | def __init__(self, model_file):
    method get_output_tensors (line 26) | def get_output_tensors(self):
    method run (line 38) | def run(self, image):
  class Detector (line 46) | class Detector(NetworkExecutor):
    method __init__ (line 48) | def __init__(self, label_file, model_file, threshold):
    method detect (line 57) | def detect(self, original_image):
  function index (line 71) | def index():
  function gen (line 74) | def gen(camera):
  function video_feed (line 81) | def video_feed():

FILE: example_scripts/tensorflow_lite/segnet/base_camera.py
  class CameraEvent (line 12) | class CameraEvent(object):
    method __init__ (line 16) | def __init__(self):
    method wait (line 19) | def wait(self):
    method set (line 29) | def set(self):
    method clear (line 49) | def clear(self):
  class BaseCamera (line 54) | class BaseCamera(object):
    method __init__ (line 60) | def __init__(self):
    method get_frame (line 73) | def get_frame(self):
    method frames (line 84) | def frames():
    method _thread (line 89) | def _thread(cls):

FILE: example_scripts/tensorflow_lite/segnet/camera_opencv.py
  class Camera (line 5) | class Camera(BaseCamera):
    method set_video_source (line 9) | def set_video_source(source):
    method frames (line 13) | def frames():

FILE: example_scripts/tensorflow_lite/segnet/camera_pi.py
  class Camera (line 9) | class Camera(BaseCamera):
    method frames (line 11) | def frames():

FILE: example_scripts/tensorflow_lite/segnet/cv_utils.py
  function preprocess (line 13) | def preprocess(img):
  function decode_yolov2 (line 23) | def decode_yolov2(netout,
  function decode_yolov3 (line 64) | def decode_yolov3(netout,
  function decode_classifier (line 106) | def decode_classifier(netout, top_k=3):
  function decode_segnet (line 112) | def decode_segnet(netout, labels, class_colors):
  function get_legends (line 126) | def get_legends(class_names, colors):
  function overlay_seg_image (line 138) | def overlay_seg_image(inp_img, seg_img):
  function concat_lenends (line 146) | def concat_lenends(seg_img, legend_img):
  function _sigmoid (line 152) | def _sigmoid(x):
  function _softmax (line 155) | def _softmax(x, axis=-1, t=-100.):
  function resize_with_aspect_ratio (line 162) | def resize_with_aspect_ratio(frame: np.ndarray, input_binding_info: tuple):
  function create_video_writer (line 190) | def create_video_writer(video, video_path, output_name):
  function init_video_file_capture (line 217) | def init_video_file_capture(video_path, output_name):
  function draw_bounding_boxes (line 240) | def draw_bounding_boxes(frame, detections, labels=None, processing_funct...
  function draw_classification (line 301) | def draw_classification(frame, classifications, labels):
  function get_source_encoding_int (line 308) | def get_source_encoding_int(video_capture):
  class BoundBox (line 311) | class BoundBox:
    method __init__ (line 312) | def __init__(self, x, y, w, h, c = None, classes = None):
    method get_label (line 321) | def get_label(self):
    method get_score (line 324) | def get_score(self):
    method iou (line 327) | def iou(self, bound_box):
    method as_centroid (line 332) | def as_centroid(self):
  function boxes_to_array (line 336) | def boxes_to_array(bound_boxes):
  function nms_boxes (line 352) | def nms_boxes(boxes, n_classes, nms_threshold=0.3, obj_threshold=0.3):
  function centroid_box_iou (line 380) | def centroid_box_iou(box1, box2):
  function to_minmax (line 408) | def to_minmax(centroid_boxes):

FILE: example_scripts/tensorflow_lite/segnet/segnet_file.py
  function load_labels (line 14) | def load_labels(path):
  class NetworkExecutor (line 18) | class NetworkExecutor(object):
    method __init__ (line 20) | def __init__(self, model_file):
    method get_output_tensors (line 27) | def get_output_tensors(self):
    method run (line 39) | def run(self, image):
  function main (line 47) | def main(args):

FILE: example_scripts/tensorflow_lite/segnet/segnet_stream.py
  function load_labels (line 17) | def load_labels(path):
  class NetworkExecutor (line 21) | class NetworkExecutor(object):
    method __init__ (line 23) | def __init__(self, model_file):
    method get_output_tensors (line 30) | def get_output_tensors(self):
    method run (line 42) | def run(self, image):
  class Segnet (line 50) | class Segnet(NetworkExecutor):
    method __init__ (line 52) | def __init__(self, label_file, model_file, overlay):
    method segment (line 64) | def segment(self, frame):
  function index (line 82) | def index():
  function gen (line 85) | def gen(camera):
  function video_feed (line 92) | def video_feed():

FILE: tests_training_and_inference.py
  function configs (line 9) | def configs(network_type):

Condensed preview — 135 files, each showing path, character count, and a content snippet (full structured content: 626K chars).
[
  {
    "path": ".github/FUNDING.yml",
    "chars": 683,
    "preview": "# These are supported funding model platforms\n\ngithub: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [u"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.yml",
    "chars": 2014,
    "preview": "name: Bug Report\ndescription: File a bug report\ntitle: \"[Bug]: \"\nlabels: [bug, triage]\nassignees:\n  - AIWintermuteAI\nbod"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "chars": 240,
    "preview": "blank_issues_enabled: false\ncontact_links:\n  - name: Google\n    url: https://google.com/\n    about: Please find answers "
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.yml",
    "chars": 1371,
    "preview": "name: Feature request\ndescription: Suggest an idea for this project\ntitle: \"[Feature request]: \"\nlabels: [enhancement, h"
  },
  {
    "path": ".github/workflows/python-publish.yml",
    "chars": 865,
    "preview": "# This workflows will upload a Python Package using Twine when a release is created\n# For more information see: https://"
  },
  {
    "path": ".gitignore",
    "chars": 208,
    "preview": "__pycache__/\naxelerate/networks/common_utils/ncc\naxelerate/networks/common_utils/ncc_linux_x86_64.tar.xz\naxelerate.egg-i"
  },
  {
    "path": "LICENSE",
    "chars": 1070,
    "preview": "MIT License\n\nCopyright (c) 2020 Dmitry Maslov\n\nPermission is hereby granted, free of charge, to any person obtaining a c"
  },
  {
    "path": "README.md",
    "chars": 6843,
    "preview": "<h1 align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/AIWintermuteAI/aXeleRate/master/resources/logo.png\" al"
  },
  {
    "path": "axelerate/__init__.py",
    "chars": 108,
    "preview": "from .train import setup_training\nfrom .infer import setup_inference\nfrom .evaluate import setup_evaluation\n"
  },
  {
    "path": "axelerate/evaluate.py",
    "chars": 7151,
    "preview": "import os\r\nimport argparse\r\nimport json\r\nimport cv2\r\nimport numpy as np\r\nimport matplotlib\r\nimport matplotlib.pyplot as "
  },
  {
    "path": "axelerate/infer.py",
    "chars": 8532,
    "preview": "import glob\r\nimport os\r\nimport argparse\r\nimport json\r\nimport cv2\r\nimport numpy as np\r\nimport matplotlib\r\nimport matplotl"
  },
  {
    "path": "axelerate/networks/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/classifier/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/classifier/batch_gen.py",
    "chars": 9321,
    "preview": "## Code heavily adapted from:\n## *https://github.com/keras-team/keras-preprocessing/blob/master/keras_preprocessing/\n\n\"\""
  },
  {
    "path": "axelerate/networks/classifier/directory_iterator.py",
    "chars": 6929,
    "preview": "\"\"\"Utilities for real-time data augmentation on image data.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ i"
  },
  {
    "path": "axelerate/networks/classifier/frontend_classifier.py",
    "chars": 5348,
    "preview": "import time\nimport os\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.metrics import classification_rep"
  },
  {
    "path": "axelerate/networks/classifier/iterator.py",
    "chars": 12776,
    "preview": "\"\"\"Utilities for real-time data augmentation on image data.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ i"
  },
  {
    "path": "axelerate/networks/classifier/utils.py",
    "chars": 10238,
    "preview": "\"\"\"Utilities for real-time data augmentation on image data.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ i"
  },
  {
    "path": "axelerate/networks/common_utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/common_utils/augment.py",
    "chars": 10985,
    "preview": "# -*- coding: utf-8 -*-\r\nimport numpy as np\r\nnp.random.seed(1337)\r\nimport imgaug as ia\r\nfrom imgaug import augmenters as"
  },
  {
    "path": "axelerate/networks/common_utils/callbacks.py",
    "chars": 5233,
    "preview": "import numpy as np\nfrom tensorflow import keras\nfrom tensorflow.keras import backend as K\n\ndef cosine_decay_with_warmup("
  },
  {
    "path": "axelerate/networks/common_utils/convert.py",
    "chars": 11007,
    "preview": "import tensorflow as tf\nimport tensorflow.keras.backend as k\nimport subprocess\nimport os\nimport cv2\nimport argparse\nimpo"
  },
  {
    "path": "axelerate/networks/common_utils/feature.py",
    "chars": 17213,
    "preview": "import tensorflow\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Reshape, Activation, Con"
  },
  {
    "path": "axelerate/networks/common_utils/fit.py",
    "chars": 5747,
    "preview": "import shutil\nimport os\nimport time\nimport tensorflow as tf\nimport numpy as np\nimport warnings\n\nfrom axelerate.networks."
  },
  {
    "path": "axelerate/networks/common_utils/install_edge_tpu_compiler.sh",
    "chars": 307,
    "preview": "wget https://packages.cloud.google.com/apt/doc/apt-key.gpg \n\nsudo apt-key add apt-key.gpg &&\n\necho \"deb https://packages"
  },
  {
    "path": "axelerate/networks/common_utils/install_openvino.sh",
    "chars": 366,
    "preview": "sudo apt-get install -y pciutils cpio &&\nwget http://registrationcenter-download.intel.com/akdlm/irc_nas/16345/l_openvin"
  },
  {
    "path": "axelerate/networks/common_utils/mobilenet_sipeed/__init__.py",
    "chars": 3143,
    "preview": "\"\"\"Enables dynamic setting of underlying Keras module.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import"
  },
  {
    "path": "axelerate/networks/common_utils/mobilenet_sipeed/imagenet_utils.py",
    "chars": 12633,
    "preview": "\"\"\"Utilities for ImageNet data preprocessing & prediction decoding.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __fu"
  },
  {
    "path": "axelerate/networks/common_utils/mobilenet_sipeed/mobilenet.py",
    "chars": 20406,
    "preview": "\"\"\"MobileNet v1 models for Keras.\n\nMobileNet is a general architecture and can be used for multiple use cases.\nDepending"
  },
  {
    "path": "axelerate/networks/segnet/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/segnet/data_utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/segnet/data_utils/data_loader.py",
    "chars": 7967,
    "preview": "import os\nimport numpy as np\nnp.random.seed(1337)\nfrom tensorflow.keras.utils import Sequence\nfrom axelerate.networks.co"
  },
  {
    "path": "axelerate/networks/segnet/frontend_segnet.py",
    "chars": 5940,
    "preview": "import os\nimport numpy as np\nimport cv2\nimport time\nfrom tqdm import tqdm\n\nfrom axelerate.networks.segnet.data_utils.dat"
  },
  {
    "path": "axelerate/networks/segnet/metrics.py",
    "chars": 345,
    "preview": "import numpy as np\n\nEPS = 1e-12\n\ndef get_iou(gt, pr, n_classes):\n    class_wise = np.zeros(n_classes)\n    for cl in rang"
  },
  {
    "path": "axelerate/networks/segnet/models/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/segnet/models/_pspnet_2.py",
    "chars": 9999,
    "preview": "# This code is proveded by Vladkryvoruchko and small modifications done by me .\n\nfrom math import ceil\nfrom sys import e"
  },
  {
    "path": "axelerate/networks/segnet/models/all_models.py",
    "chars": 1382,
    "preview": "from . import pspnet\nfrom . import unet\nfrom . import segnet\nfrom . import fcn\nmodel_from_name = {}\n\n\nmodel_from_name[\"f"
  },
  {
    "path": "axelerate/networks/segnet/models/basic_models.py",
    "chars": 1598,
    "preview": "from keras.models import *\nfrom keras.layers import *\nimport keras.backend as K\n\nfrom .config import IMAGE_ORDERING\n\ndef"
  },
  {
    "path": "axelerate/networks/segnet/models/config.py",
    "chars": 183,
    "preview": "IMAGE_ORDERING_CHANNELS_LAST = \"channels_last\"\nIMAGE_ORDERING_CHANNELS_FIRST = \"channels_first\"\n\n# Default IMAGE_ORDERIN"
  },
  {
    "path": "axelerate/networks/segnet/models/fcn.py",
    "chars": 5364,
    "preview": "from keras.models import *\nfrom keras.layers import *\n\nfrom .config import IMAGE_ORDERING\nfrom .model_utils import get_s"
  },
  {
    "path": "axelerate/networks/segnet/models/model.py",
    "chars": 147,
    "preview": "\"\"\" Definition for the generic Model class \"\"\"\n\nclass Model:\n    def __init__(self, n_classes, input_height=None, input_"
  },
  {
    "path": "axelerate/networks/segnet/models/model_utils.py",
    "chars": 3144,
    "preview": "from types import MethodType\n\nfrom tensorflow.keras.models import *\nfrom tensorflow.keras.layers import *\nimport tensorf"
  },
  {
    "path": "axelerate/networks/segnet/models/pspnet.py",
    "chars": 4118,
    "preview": "import numpy as np\nimport keras\nfrom keras.models import *\nfrom keras.layers import *\nimport keras.backend as K\n\nfrom .c"
  },
  {
    "path": "axelerate/networks/segnet/models/segnet.py",
    "chars": 5596,
    "preview": "import os\n\nfrom tensorflow.keras.models import *\nfrom tensorflow.keras.layers import *\n\nfrom .config import IMAGE_ORDERI"
  },
  {
    "path": "axelerate/networks/segnet/models/unet.py",
    "chars": 5379,
    "preview": "from keras.models import *\nfrom keras.layers import *\n\nfrom .config import IMAGE_ORDERING\nfrom .model_utils import get_s"
  },
  {
    "path": "axelerate/networks/segnet/predict.py",
    "chars": 7937,
    "preview": "import glob\nimport random\nimport json\nimport os\n\nimport cv2\nimport numpy as np\nnp.set_printoptions(threshold=np.inf)\nfro"
  },
  {
    "path": "axelerate/networks/segnet/train.py",
    "chars": 5699,
    "preview": "import argparse\nimport json\nfrom .data_utils.data_loader import create_batch_generator, verify_segmentation_dataset\nimpo"
  },
  {
    "path": "axelerate/networks/yolo/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/yolo/backend/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/yolo/backend/batch_gen.py",
    "chars": 8858,
    "preview": "import cv2\nimport os\nimport numpy as np\nnp.random.seed(1337)\n\nfrom tensorflow.keras.utils import Sequence\nfrom axelerate"
  },
  {
    "path": "axelerate/networks/yolo/backend/decoder.py",
    "chars": 2232,
    "preview": "import numpy as np\r\nfrom axelerate.networks.yolo.backend.utils.box import BoundBox\r\nfrom axelerate.networks.yolo.backend"
  },
  {
    "path": "axelerate/networks/yolo/backend/loss.py",
    "chars": 10752,
    "preview": "import tensorflow as tf\r\nimport tensorflow.python.keras.backend as K\r\nfrom tensorflow import map_fn\r\nimport numpy as np\r"
  },
  {
    "path": "axelerate/networks/yolo/backend/network.py",
    "chars": 4403,
    "preview": "# -*- coding: utf-8 -*-\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.models import Model\nfrom tensor"
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/__init__.py",
    "chars": 173,
    "preview": "# All modules in utils package can be run independently and have no dependencies on other modules in the project.\r\n# Thi"
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/annotation.py",
    "chars": 8208,
    "preview": "# -*- coding: utf-8 -*-\r\n\r\nimport os\r\nimport numpy as np\r\nfrom xml.etree.ElementTree import parse\r\n\r\n\r\ndef get_unique_la"
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/box.py",
    "chars": 6802,
    "preview": "import numpy as np\r\nimport cv2\r\n\r\nclass BoundBox:\r\n    def __init__(self, x, y, w, h, c = None, classes = None):\r\n      "
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/custom.py",
    "chars": 6004,
    "preview": "from tensorflow.python import keras\nfrom tensorflow.python.ops import init_ops\nfrom tensorflow.python.ops import math_op"
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/eval/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/eval/_box_match.py",
    "chars": 4987,
    "preview": "# -*- coding: utf-8 -*-\r\nimport numpy as np\r\nfrom scipy.optimize import linear_sum_assignment as linear_assignment \r\n \r\n"
  },
  {
    "path": "axelerate/networks/yolo/backend/utils/eval/fscore.py",
    "chars": 1516,
    "preview": "# -*- coding: utf-8 -*-\r\nfrom ._box_match import BoxMatcher\r\n\r\ndef count_true_positives(detect_boxes, true_boxes, detect"
  },
  {
    "path": "axelerate/networks/yolo/frontend.py",
    "chars": 7937,
    "preview": "# -*- coding: utf-8 -*-\n# This module is responsible for communicating with the outside of the yolo package.\n# Outside t"
  },
  {
    "path": "axelerate/train.py",
    "chars": 8035,
    "preview": "import shutil\nimport numpy as np\nnp.random.seed(111)\nimport argparse\nimport os\nimport time\nimport sys\nimport json\nimport"
  },
  {
    "path": "configs/classifier.json",
    "chars": 1018,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Classifier\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n    "
  },
  {
    "path": "configs/detector.json",
    "chars": 1433,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n      "
  },
  {
    "path": "configs/dogs_classifier.json",
    "chars": 1047,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Classifier\",\r\n        \"architecture\":         \"NASNetMobile\",\r\n    "
  },
  {
    "path": "configs/face_detector.json",
    "chars": 1536,
    "preview": "{\r\n        \"model\":{\r\n            \"type\":                 \"Detector\",\r\n            \"architecture\":         \"MobileNet2_5"
  },
  {
    "path": "configs/kangaroo_detector.json",
    "chars": 1426,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet2_5\",\r\n      "
  },
  {
    "path": "configs/lego_detector.json",
    "chars": 1293,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n      "
  },
  {
    "path": "configs/pascal_20_detector.json",
    "chars": 1601,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n      "
  },
  {
    "path": "configs/pascal_20_detector_2.json",
    "chars": 1724,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet1_0\",\r\n      "
  },
  {
    "path": "configs/pascal_20_segnet.json",
    "chars": 1129,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"SegNet\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n        "
  },
  {
    "path": "configs/person_detector.json",
    "chars": 1578,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n      "
  },
  {
    "path": "configs/raccoon_detector.json",
    "chars": 1422,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Detector\",\r\n        \"architecture\":         \"MobileNet5_0\",\r\n      "
  },
  {
    "path": "configs/santa_uno.json",
    "chars": 1017,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"Classifier\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n    "
  },
  {
    "path": "configs/segmentation.json",
    "chars": 1069,
    "preview": "{\r\n    \"model\" : {\r\n        \"type\":                 \"SegNet\",\r\n        \"architecture\":         \"MobileNet7_5\",\r\n        "
  },
  {
    "path": "example_scripts/arm_nn/README.md",
    "chars": 12394,
    "preview": "# PyArmNN Object Detection Sample Application\n\n## Introduction\nThis sample application guides the user and shows how to "
  },
  {
    "path": "example_scripts/arm_nn/box.py",
    "chars": 5788,
    "preview": "import numpy as np\nimport cv2\n\n\n# Todo : BoundBox & its related method extraction\nclass BoundBox:\n    def __init__(self,"
  },
  {
    "path": "example_scripts/arm_nn/cv_utils.py",
    "chars": 7282,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\n# SPDX-License-Identifier: MIT\n\n\"\"\"\nThis file contains"
  },
  {
    "path": "example_scripts/arm_nn/network_executor.py",
    "chars": 4092,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\n# SPDX-License-Identifier: MIT\n\nimport os\nfrom typing "
  },
  {
    "path": "example_scripts/arm_nn/run_video_file.py",
    "chars": 7830,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\r\n# SPDX-License-Identifier: MIT\r\n\r\n\"\"\"\r\nObject detecti"
  },
  {
    "path": "example_scripts/arm_nn/run_video_stream.py",
    "chars": 7540,
    "preview": "\"\"\"\r\nObject detection demo that takes a video stream from a device, runs inference\r\non each frame producing bounding box"
  },
  {
    "path": "example_scripts/arm_nn/yolov2.py",
    "chars": 3639,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\r\n# SPDX-License-Identifier: MIT\r\n\r\n\"\"\"\r\nContains funct"
  },
  {
    "path": "example_scripts/edge_tpu/detector/box.py",
    "chars": 5995,
    "preview": "\r\nimport numpy as np\r\nimport cv2\r\n\r\n\r\n# Todo : BoundBox & its related method extraction\r\nclass BoundBox:\r\n    def __init"
  },
  {
    "path": "example_scripts/edge_tpu/detector/detector_video.py",
    "chars": 6069,
    "preview": "import argparse\nimport io\nimport time\nimport numpy as np\nimport cv2\n\nfrom box import BoundBox, nms_boxes, boxes_to_array"
  },
  {
    "path": "example_scripts/k210/classifier/santa_uno.py",
    "chars": 1206,
    "preview": "# tested with firmware maixpy_v0.6.2_72_g22a8555b5_openmv_kmodel_v4_with_ide_support\r\nimport sensor, image, lcd, time\r\ni"
  },
  {
    "path": "example_scripts/k210/detector/yolov2/person_detector_v4.py",
    "chars": 1195,
    "preview": "#tested with firmware maixpy_v0.6.2_72_g22a8555b5_openmv_kmodel_v4_with_ide_support\r\nimport sensor, image, lcd\r\nimport K"
  },
  {
    "path": "example_scripts/k210/detector/yolov2/raccoon_detector.py",
    "chars": 1196,
    "preview": "# tested with firmware maixpy_v0.6.2_72_g22a8555b5_openmv_kmodel_v4_with_ide_support\r\nimport sensor, image, lcd\r\nimport "
  },
  {
    "path": "example_scripts/k210/detector/yolov2/raccoon_detector_uart.py",
    "chars": 1473,
    "preview": "# tested with firmware 5-0.22\r\nimport sensor,image,lcd\r\nimport KPU as kpu\r\nfrom fpioa_manager import fm\r\nfrom machine im"
  },
  {
    "path": "example_scripts/k210/detector/yolov3/raccoon_detector.py",
    "chars": 1388,
    "preview": "# needs firmware from my fork with yolov3 support, see\r\n# https://github.com/sipeed/MaixPy/pull/451\r\n\r\nimport sensor, im"
  },
  {
    "path": "example_scripts/k210/segnet/segnet-support-is-WIP-contributions-welcome",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "example_scripts/oak/yolov2/YOLO_best_mAP.json",
    "chars": 2386,
    "preview": "{\n    \"NN_config\":\n    {\n        \"output_format\" : \"raw\",\n        \"NN_family\" : \"YOLO\",\n        \"NN_specific_metadata\" :"
  },
  {
    "path": "example_scripts/oak/yolov2/box.py",
    "chars": 5995,
    "preview": "\r\nimport numpy as np\r\nimport cv2\r\n\r\n\r\n# Todo : BoundBox & its related method extraction\r\nclass BoundBox:\r\n    def __init"
  },
  {
    "path": "example_scripts/oak/yolov2/yolo.py",
    "chars": 10138,
    "preview": "import consts.resource_paths\nimport cv2\nimport depthai\nimport argparse\nimport time \nimport numpy as np\n\nIOU_THRESHOLD = "
  },
  {
    "path": "example_scripts/oak/yolov2/yolo_alt.py",
    "chars": 5550,
    "preview": "import consts.resource_paths\nimport cv2\nimport depthai\nimport argparse\nimport time \nimport numpy as np\nfrom box import B"
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/base_camera.py",
    "chars": 3611,
    "preview": "import time\nimport threading\ntry:\n    from greenlet import getcurrent as get_ident\nexcept ImportError:\n    try:\n        "
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/camera_opencv.py",
    "chars": 532,
    "preview": "import cv2\nfrom base_camera import BaseCamera\n\n\nclass Camera(BaseCamera):\n    video_source = 0\n\n    @staticmethod\n    de"
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/camera_pi.py",
    "chars": 775,
    "preview": "import io\nimport time\nimport picamera\nimport picamera.array\nimport cv2\nfrom base_camera import BaseCamera\n\n\nclass Camera"
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/classifier_file.py",
    "chars": 3071,
    "preview": "import time\nimport argparse\nimport os\nimport cv2\nimport numpy as np\nfrom tqdm import tqdm\n\nfrom cv_utils import init_vid"
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/classifier_stream.py",
    "chars": 3462,
    "preview": "import time\nimport argparse\nimport os\nimport cv2\nimport numpy as np\n\nfrom cv_utils import decode_classifier, draw_classi"
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/cv_utils.py",
    "chars": 14917,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\n# SPDX-License-Identifier: MIT\n\n\"\"\"\nThis file contains"
  },
  {
    "path": "example_scripts/tensorflow_lite/classifier/templates/index.html",
    "chars": 192,
    "preview": "<html>\n  <head>\n    <title>Video Streaming Demonstration</title>\n  </head>\n  <body>\n    <h1>Tflite Image Classification "
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/base_camera.py",
    "chars": 3611,
    "preview": "import time\nimport threading\ntry:\n    from greenlet import getcurrent as get_ident\nexcept ImportError:\n    try:\n        "
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/camera_opencv.py",
    "chars": 562,
    "preview": "import cv2\nfrom base_camera import BaseCamera\n\n\nclass Camera(BaseCamera):\n    video_source = 0\n\n    @staticmethod\n    de"
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/camera_pi.py",
    "chars": 775,
    "preview": "import io\nimport time\nimport picamera\nimport picamera.array\nimport cv2\nfrom base_camera import BaseCamera\n\n\nclass Camera"
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/cv_utils.py",
    "chars": 14917,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\n# SPDX-License-Identifier: MIT\n\n\"\"\"\nThis file contains"
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/detector_file.py",
    "chars": 3076,
    "preview": "import time\nimport argparse\nimport os\nimport cv2\nimport numpy as np\nfrom tqdm import tqdm\n\nfrom cv_utils import init_vid"
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/detector_stream.py",
    "chars": 3498,
    "preview": "import time\nimport argparse\nimport os\nimport cv2\nimport numpy as np\n\nfrom cv_utils import decode_yolov3, preprocess, dra"
  },
  {
    "path": "example_scripts/tensorflow_lite/detector/templates/index.html",
    "chars": 188,
    "preview": "<html>\n  <head>\n    <title>Video Streaming Demonstration</title>\n  </head>\n  <body>\n    <h1>Tflite Object Detection Demo"
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/base_camera.py",
    "chars": 3611,
    "preview": "import time\nimport threading\ntry:\n    from greenlet import getcurrent as get_ident\nexcept ImportError:\n    try:\n        "
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/camera_opencv.py",
    "chars": 562,
    "preview": "import cv2\nfrom base_camera import BaseCamera\n\n\nclass Camera(BaseCamera):\n    video_source = 0\n\n    @staticmethod\n    de"
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/camera_pi.py",
    "chars": 649,
    "preview": "import io\nimport time\nimport picamera\nimport picamera.array\nimport cv2\nfrom base_camera import BaseCamera\n\n\nclass Camera"
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/cv_utils.py",
    "chars": 14917,
    "preview": "# Copyright © 2020 Arm Ltd and Contributors. All rights reserved.\n# SPDX-License-Identifier: MIT\n\n\"\"\"\nThis file contains"
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/segnet_file.py",
    "chars": 3344,
    "preview": "import time\nimport argparse\nimport os\nimport cv2\nimport numpy as np\nfrom tqdm import tqdm\n\nimport random\nrandom.seed(0)\n"
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/segnet_stream.py",
    "chars": 3773,
    "preview": "import time\nimport argparse\nimport os\nimport cv2\nimport numpy as np\n\nimport random\nrandom.seed(0)\n\nfrom cv_utils import "
  },
  {
    "path": "example_scripts/tensorflow_lite/segnet/templates/index.html",
    "chars": 193,
    "preview": "<html>\n  <head>\n    <title>Video Streaming Demonstration</title>\n  </head>\n  <body>\n    <h1>Tflite Semantic Segmentation"
  },
  {
    "path": "resources/aXeleRate_face_detector.ipynb",
    "chars": 12669,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"aXeleRate_pascal20_detector.ipyn"
  },
  {
    "path": "resources/aXeleRate_human_segmentation.ipynb",
    "chars": 15082,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"aXeleRate_human_segmentation.ipy"
  },
  {
    "path": "resources/aXeleRate_mark_detector.ipynb",
    "chars": 12416,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"aXeleRate_mark_detector.ipynb\",\n"
  },
  {
    "path": "resources/aXeleRate_pascal20_detector.ipynb",
    "chars": 17282,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"aXeleRate_pascal20_detector.ipyn"
  },
  {
    "path": "resources/aXeleRate_person_detector.ipynb",
    "chars": 12935,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"aXeleRate_person_detector.ipynb\""
  },
  {
    "path": "resources/aXeleRate_standford_dog_classifier.ipynb",
    "chars": 15360,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"aXeleRate_standford_dog_classifi"
  },
  {
    "path": "sample_datasets/detector/anns/2007_000032.xml",
    "chars": 1213,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_000032.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns/2007_000033.xml",
    "chars": 1002,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_000033.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_000243.xml",
    "chars": 558,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_000243.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_000250.xml",
    "chars": 784,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_000250.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_000645.xml",
    "chars": 763,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_000645.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_001595.xml",
    "chars": 775,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_001595.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_001834.xml",
    "chars": 558,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_001834.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_003131.xml",
    "chars": 547,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_003131.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_003201.xml",
    "chars": 984,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_003201.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_003593.xml",
    "chars": 765,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_003593.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_004627.xml",
    "chars": 780,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_004627.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "sample_datasets/detector/anns_validation/2007_005803.xml",
    "chars": 559,
    "preview": "<annotation>\n\t<folder>VOC2012</folder>\n\t<filename>2007_005803.jpg</filename>\n\t<source>\n\t\t<database>The VOC2007 Database<"
  },
  {
    "path": "setup.py",
    "chars": 723,
    "preview": "from setuptools import setup, find_packages\r\nfrom os import path\r\nthis_directory = path.abspath(path.dirname(__file__))\r"
  },
  {
    "path": "tests_training_and_inference.py",
    "chars": 5958,
    "preview": "import argparse\r\nimport json\r\nfrom axelerate import setup_training, setup_evaluation\r\nimport tensorflow.keras.backend as"
  }
]

About this extraction

This page contains the full source code of the AIWintermuteAI/aXeleRate GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 135 files (572.1 KB, approximately 149.4k tokens) and includes a symbol index of 604 extracted functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract, a free GitHub-repository-to-text converter for AI. Built by Nikandr Surkov.