Full Code of EASY-EAI/yolov5 for AI

Repository: EASY-EAI/yolov5
Branch: master
Commit: 4e14e074579f
Files: 102
Total size: 588.6 KB

Directory structure:
yolov5/

├── .dockerignore
├── .gitattributes
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug-report.md
│   │   ├── feature-request.md
│   │   └── question.md
│   ├── dependabot.yml
│   └── workflows/
│       ├── ci-testing.yml
│       ├── codeql-analysis.yml
│       ├── greetings.yml
│       ├── rebase.yml
│       └── stale.yml
├── .gitignore
├── .pre-commit-config.yaml
├── CONTRIBUTING.md
├── Dockerfile
├── LICENSE
├── README.md
├── data/
│   ├── Argoverse.yaml
│   ├── GlobalWheat2020.yaml
│   ├── Objects365.yaml
│   ├── SKU-110K.yaml
│   ├── VisDrone.yaml
│   ├── coco.yaml
│   ├── coco128.yaml
│   ├── hyps/
│   │   ├── hyp.finetune.yaml
│   │   ├── hyp.finetune_objects365.yaml
│   │   ├── hyp.scratch-high.yaml
│   │   ├── hyp.scratch-low.yaml
│   │   ├── hyp.scratch-med.yaml
│   │   └── hyp.scratch.yaml
│   ├── mask.yaml
│   ├── scripts/
│   │   ├── download_weights.sh
│   │   ├── get_coco.sh
│   │   └── get_coco128.sh
│   ├── voc.yaml
│   └── xView.yaml
├── detect.py
├── export.py
├── hubconf.py
├── models/
│   ├── __init__.py
│   ├── common.py
│   ├── common_rk_plug_in.py
│   ├── experimental.py
│   ├── hub/
│   │   ├── anchors.yaml
│   │   ├── yolov3-spp.yaml
│   │   ├── yolov3-tiny.yaml
│   │   ├── yolov3.yaml
│   │   ├── yolov5-bifpn.yaml
│   │   ├── yolov5-fpn.yaml
│   │   ├── yolov5-p2.yaml
│   │   ├── yolov5-p6.yaml
│   │   ├── yolov5-p7.yaml
│   │   ├── yolov5-panet.yaml
│   │   ├── yolov5l6.yaml
│   │   ├── yolov5m6.yaml
│   │   ├── yolov5n6.yaml
│   │   ├── yolov5s-ghost.yaml
│   │   ├── yolov5s-transformer.yaml
│   │   ├── yolov5s6.yaml
│   │   └── yolov5x6.yaml
│   ├── tf.py
│   ├── yolo.py
│   ├── yolov5l.yaml
│   ├── yolov5m.yaml
│   ├── yolov5n.yaml
│   ├── yolov5s.yaml
│   └── yolov5x.yaml
├── requirements.txt
├── setup.cfg
├── train.py
├── tutorial.ipynb
├── utils/
│   ├── __init__.py
│   ├── activations.py
│   ├── augmentations.py
│   ├── autoanchor.py
│   ├── autobatch.py
│   ├── aws/
│   │   ├── __init__.py
│   │   ├── mime.sh
│   │   ├── resume.py
│   │   └── userdata.sh
│   ├── callbacks.py
│   ├── datasets.py
│   ├── downloads.py
│   ├── flask_rest_api/
│   │   ├── README.md
│   │   ├── example_request.py
│   │   └── restapi.py
│   ├── general.py
│   ├── google_app_engine/
│   │   ├── Dockerfile
│   │   ├── additional_requirements.txt
│   │   └── app.yaml
│   ├── loggers/
│   │   ├── __init__.py
│   │   └── wandb/
│   │       ├── README.md
│   │       ├── __init__.py
│   │       ├── log_dataset.py
│   │       ├── sweep.py
│   │       ├── sweep.yaml
│   │       └── wandb_utils.py
│   ├── loss.py
│   ├── metrics.py
│   ├── plots.py
│   └── torch_utils.py
└── val.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .dockerignore
================================================
# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------
#.git
.cache
.idea
runs
output
coco
storage.googleapis.com

data/samples/*
**/results*.csv
*.jpg

# Neural Network weights -----------------------------------------------------------------------------------------------
**/*.pt
**/*.pth
**/*.onnx
**/*.engine
**/*.mlmodel
**/*.torchscript
**/*.torchscript.pt
**/*.tflite
**/*.h5
**/*.pb
*_saved_model/
*_web_model/

# Below Copied From .gitignore -----------------------------------------------------------------------------------------


# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
wandb/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv*
venv*/
ENV*/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/


# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------

# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon
Icon?

# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk


# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff:
.idea/*
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
.html  # Bokeh Plots
.pg  # TensorFlow Frozen Graphs
.avi # videos

# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml

# Gradle:
.idea/**/gradle.xml
.idea/**/libraries

# CMake
cmake-build-debug/
cmake-build-release/

# Mongo Explorer plugin:
.idea/**/mongoSettings.xml

## File-based project format:
*.iws

## Plugin-specific files:

# IntelliJ
out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Cursive Clojure plugin
.idea/replstate.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties


================================================
FILE: .gitattributes
================================================
# this drops notebooks from GitHub language stats
*.ipynb linguist-vendored


================================================
FILE: .github/ISSUE_TEMPLATE/bug-report.md
================================================
---
name: "🐛 Bug report"
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following, otherwise it is non-actionable, and we can not help you:
 - **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo
 - **Common dataset**: coco.yaml or coco128.yaml
 - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments
 
If this is a custom dataset/training question you **must include** your `train*.jpg`, `test*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.
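
For example, a minimal sketch of regenerating `results.png` (assuming the current repo layout, where the plotting helper lives in `utils/plots.py`, and that a previous training run has produced `runs/train/exp/results.csv`):

```python
# Illustrative only: re-plot training results from a finished run.
# Run from the repository root; the run directory below is an example path.
from utils.plots import plot_results

plot_results(file='runs/train/exp/results.csv')  # writes results.png next to results.csv
```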


## 🐛 Bug
A clear and concise description of what the bug is.


## To Reproduce (REQUIRED)

Input:
```
import torch

a = torch.tensor([5])
c = a / 0
```

Output:
```
Traceback (most recent call last):
  File "/Users/glennjocher/opt/anaconda3/envs/env1/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-5-be04c762b799>", line 5, in <module>
    c = a / 0
RuntimeError: ZeroDivisionError
```


## Expected behavior
A clear and concise description of what you expected to happen.


## Environment
If applicable, add screenshots to help explain your problem.

 - OS: [e.g. Ubuntu]
 - GPU [e.g. 2080 Ti]


## Additional context
Add any other context about the problem here.


================================================
FILE: .github/ISSUE_TEMPLATE/feature-request.md
================================================
---
name: "🚀 Feature request"
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->

## Pitch

<!-- A clear and concise description of what you want to happen. -->

## Alternatives

<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->

## Additional context

<!-- Add any other context or screenshots about the feature request here. -->


================================================
FILE: .github/ISSUE_TEMPLATE/question.md
================================================
---
name: "❓Question"
about: Ask a general question
title: ''
labels: question
assignees: ''

---

## ❔Question


## Additional context


================================================
FILE: .github/dependabot.yml
================================================
version: 2
updates:
- package-ecosystem: pip
  directory: "/"
  schedule:
    interval: weekly
    time: "04:00"
  open-pull-requests-limit: 10
  reviewers:
  - glenn-jocher
  labels:
  - dependencies


================================================
FILE: .github/workflows/ci-testing.yml
================================================
name: CI CPU testing

on:  # https://help.github.com/en/actions/reference/events-that-trigger-workflows
  push:
    branches: [ master ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ master ]
  schedule:
    - cron: '0 0 * * *'  # Runs at 00:00 UTC every day

jobs:
  cpu-tests:

    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: [3.8]
        model: ['yolov5s']  # models to test

    # Timeout: https://stackoverflow.com/a/59076067/4521646
    timeout-minutes: 50
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      # Note: This uses an internal pip API and may not always work
      # https://github.com/actions/cache/blob/master/examples.md#multiple-oss-in-a-workflow
      - name: Get pip cache
        id: pip-cache
        run: |
          python -c "from pip._internal.locations import USER_CACHE_DIR; print('::set-output name=dir::' + USER_CACHE_DIR)"

      - name: Cache pip
        uses: actions/cache@v1
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-${{ matrix.python-version }}-pip-${{ hashFiles('requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-${{ matrix.python-version }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -qr requirements.txt -f https://download.pytorch.org/whl/cpu/torch_stable.html
          pip install -q onnx
          python --version
          pip --version
          pip list
        shell: bash

      - name: Download data
        run: |
          # curl -L -o tmp.zip https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip
          # unzip -q tmp.zip -d ../
          # rm tmp.zip

      - name: Tests workflow
        run: |
          # export PYTHONPATH="$PWD"  # to run '$ python *.py' files in subdirectories
          di=cpu # inference devices  # define device

          # train
          python train.py --img 128 --batch 16 --weights weights/${{ matrix.model }}.pt --cfg models/${{ matrix.model }}.yaml --epochs 1 --device $di
          # detect
          python detect.py --weights weights/${{ matrix.model }}.pt --device $di
          python detect.py --weights runs/train/exp/weights/last.pt --device $di
          # val
          python val.py --img 128 --batch 16 --weights weights/${{ matrix.model }}.pt --device $di
          python val.py --img 128 --batch 16 --weights runs/train/exp/weights/last.pt --device $di

          python hubconf.py  # hub
          python models/yolo.py --cfg models/${{ matrix.model }}.yaml  # inspect
          python export.py --img 128 --batch 1 --weights weights/${{ matrix.model }}.pt  # export
        shell: bash


================================================
FILE: .github/workflows/codeql-analysis.yml
================================================
# This action runs GitHub's industry-leading static analysis engine, CodeQL, against a repository's source code to find security vulnerabilities. 
# https://github.com/github/codeql-action

name: "CodeQL"

on:
  schedule:
    - cron: '0 0 1 * *'  # Runs at 00:00 UTC on the 1st of every month

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v1
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.
        # queries: ./path/to/local/query, your-org/your-repo/queries@main

    # Autobuild attempts to build any compiled languages  (C/C++, C#, or Java).
    # If this step fails, then you should remove it and run the build manually (see below)
    - name: Autobuild
      uses: github/codeql-action/autobuild@v1

    # ℹ️ Command-line programs to run using the OS shell.
    # 📚 https://git.io/JvXDl

    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
    #    and modify them (or add more) to build your code if your project
    #    uses a compiled language

    #- run: |
    #   make bootstrap
    #   make release

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v1


================================================
FILE: .github/workflows/greetings.yml
================================================
name: Greetings

on: [pull_request_target, issues]

jobs:
  greeting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          pr-message: |
            👋 Hello @${{ github.actor }}, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
            - ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
            ```bash
            git remote add upstream https://github.com/ultralytics/yolov5.git
            git fetch upstream
            git checkout feature  # <----- replace 'feature' with local branch name
            git rebase upstream/master
            git push -u origin -f
            ```
            - ✅ Verify all Continuous Integration (CI) **checks are passing**.
            - ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_  -Bruce Lee

          issue-message: |
            👋 Hello @${{ github.actor }}, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ [Tutorials](https://github.com/ultralytics/yolov5/wiki#tutorials) to get started, where you can find quickstart guides for simple tasks like [Custom Data Training](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) all the way to advanced concepts like [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607).

            If this is a 🐛 Bug Report, please provide screenshots and **minimum viable code to reproduce your issue**, otherwise we can not help you.

            If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online [W&B logging](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data#visualize) if available.

            For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

            ## Requirements

            Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`. To install run:
            ```bash
            $ pip install -r requirements.txt
            ```

            ## Environments
            
            YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
            
            - **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
            - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
            - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
            - **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
            
            
            ## Status
            
            ![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)
            
            If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/models/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
            


================================================
FILE: .github/workflows/rebase.yml
================================================
name: Automatic Rebase
# https://github.com/marketplace/actions/automatic-rebase

on:
  issue_comment:
    types: [created]

jobs:
  rebase:
    name: Rebase
    if: github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the latest code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Automatic Rebase
        uses: cirrus-actions/rebase@1.3.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


================================================
FILE: .github/workflows/stale.yml
================================================
name: Close stale issues
on:
  schedule:
    - cron: "0 0 * * *"

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'
          stale-pr-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'
          days-before-stale: 30
          days-before-close: 5
          exempt-issue-labels: 'documentation,tutorial'
          operations-per-run: 100  # The maximum number of operations per run, used to control rate limiting.


================================================
FILE: .gitignore
================================================
# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------
*.tif
*.tiff
*.heic
*.TIF
*.TIFF
*.HEIC
*.mp4
*.mov
*.MOV
*.avi
*.data
*.json
*.cfg
!setup.cfg
!cfg/yolov3*.cfg

storage.googleapis.com
runs/*
data/*
!data/images
!data/*.yaml
!data/hyps
!data/scripts
!data/images
!data/images/zidane.jpg
!data/images/bus.jpg
!data/*.sh

results*.csv

# Datasets -------------------------------------------------------------------------------------------------------------
coco/
coco128/
VOC/

# MATLAB GitIgnore -----------------------------------------------------------------------------------------------------
*.m~
*.mat
!targets*.mat

# Neural Network weights -----------------------------------------------------------------------------------------------
*.weights
*.pt
*.pb
*.onnx
*.engine
*.mlmodel
*.torchscript
*.tflite
*.h5
*_saved_model/
*_web_model/
darknet53.conv.74
yolov3-tiny.conv.15

# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
/wandb/
.installed.cfg
*.egg


# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv*
venv*/
ENV*/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/


# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------

# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon
Icon?

# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk


# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff:
.idea/*
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
.html  # Bokeh Plots
.pg  # TensorFlow Frozen Graphs
.avi # videos

# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml

# Gradle:
.idea/**/gradle.xml
.idea/**/libraries

# CMake
cmake-build-debug/
cmake-build-release/

# Mongo Explorer plugin:
.idea/**/mongoSettings.xml

## File-based project format:
*.iws

## Plugin-specific files:

# IntelliJ
out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Cursive Clojure plugin
.idea/replstate.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties


================================================
FILE: .pre-commit-config.yaml
================================================
# Define hooks for code formatting
# Applied to any updated files on commit if a user has installed and linked the commit hook

default_language_version:
  python: python3.8

# Define bot property if installed via https://github.com/marketplace/pre-commit-ci
ci:
  autofix_prs: true
  autoupdate_commit_msg: '[pre-commit.ci] pre-commit suggestions'
  autoupdate_schedule: quarterly
  # submodules: true

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: check-case-conflict
      - id: check-yaml
      - id: check-toml
      - id: pretty-format-json
      - id: check-docstring-first

  - repo: https://github.com/asottile/pyupgrade
    rev: v2.23.1
    hooks:
      - id: pyupgrade
        args: [--py36-plus]
        name: Upgrade code

  - repo: https://github.com/PyCQA/isort
    rev: 5.9.3
    hooks:
      - id: isort
        name: Sort imports

  # TODO
  #- repo: https://github.com/pre-commit/mirrors-yapf
  #  rev: v0.31.0
  #  hooks:
  #    - id: yapf
  #      name: formatting

  # TODO
  #- repo: https://github.com/executablebooks/mdformat
  #  rev: 0.7.7
  #  hooks:
  #    - id: mdformat
  #      additional_dependencies:
  #        - mdformat-gfm
  #        - mdformat-black
  #        - mdformat_frontmatter

  # TODO
  #- repo: https://github.com/asottile/yesqa
  #  rev: v1.2.3
  #  hooks:
  #    - id: yesqa

  - repo: https://github.com/PyCQA/flake8
    rev: 3.9.2
    hooks:
      - id: flake8
        name: PEP8


================================================
FILE: CONTRIBUTING.md
================================================
## Contributing to YOLOv5 🚀

We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's:

- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing a new feature
- Becoming a maintainer

YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be
helping push the frontiers of what's possible in AI 😃!

## Submitting a Pull Request (PR) 🛠️

Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:

### 1. Select File to Update

Select `requirements.txt` to update by clicking on it in GitHub.
<p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>

### 2. Click 'Edit this file'

Button is in top-right corner.
<p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>

### 3. Make Changes

Change `matplotlib` version from `3.2.2` to `3.3`.
<p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>

### 4. Preview Changes and Submit PR

Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
<p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>

### PR recommendations

To allow your work to be integrated as seamlessly as possible, we advise you to:

- ✅ Verify your PR is **up-to-date with upstream/master.** If your PR is behind upstream/master an
  automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may
  be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature'
  with the name of your local branch:

  ```bash
  git remote add upstream https://github.com/ultralytics/yolov5.git
  git fetch upstream
  git checkout feature  # <----- replace 'feature' with local branch name
  git merge upstream/master
  git push -u origin -f
  ```

- ✅ Verify all Continuous Integration (CI) **checks are passing**.
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
  but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_  — Bruce Lee

## Submitting a Bug Report 🐛

If you spot a problem with YOLOv5 please submit a Bug Report!

For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few
short guidelines below to help users provide what we need in order to get started.

When asking a question, people will be better able to provide help if you provide **code** that they can easily
understand and use to **reproduce** the problem. This is referred to by community members as creating
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
the problem should be:

* ✅ **Minimal** – Use as little code as possible that still produces the same problem
* ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
* ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
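
As a hedged illustration of the three points above, a minimal reproducible snippet might look like the following (the model and image are only examples, not part of any required template):

```python
# Minimal, complete and reproducible: a few self-contained lines anyone can run.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # example pretrained model
results = model('https://ultralytics.com/images/zidane.jpg')  # example input image
results.print()  # include this printed output in your report
```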

In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
should be:

* ✅ **Current** – Verify that your code is up-to-date with current
  GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new
  copy to ensure your problem has not already been resolved by previous commits.
* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this
  repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **
Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
understand and diagnose your problem.

## License

By contributing, you agree that your contributions will be licensed under
the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/)


================================================
FILE: Dockerfile
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
FROM nvcr.io/nvidia/pytorch:21.10-py3

# Install linux packages
RUN apt update && apt install -y zip htop screen libgl1-mesa-glx

# Install python dependencies
COPY requirements.txt .
RUN python -m pip install --upgrade pip
RUN pip uninstall -y nvidia-tensorboard nvidia-tensorboard-plugin-dlprof
RUN pip install --no-cache -r requirements.txt coremltools onnx gsutil notebook "wandb>=0.12.2"
RUN pip install --no-cache -U torch torchvision numpy Pillow
# RUN pip install --no-cache torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

# Create working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Copy contents
COPY . /usr/src/app

# Downloads to user config dir
ADD https://ultralytics.com/assets/Arial.ttf /root/.config/Ultralytics/

# Set environment variables
# ENV HOME=/usr/src/app


# Usage Examples -------------------------------------------------------------------------------------------------------

# Build and Push
# t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t

# Pull and Run
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t

# Pull and Run with local directory access
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/datasets:/usr/src/datasets $t

# Kill all
# sudo docker kill $(sudo docker ps -q)

# Kill all image-based
# sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/yolov5:latest)

# Bash into running container
# sudo docker exec -it 5a9b5863d93d bash

# Bash into stopped container
# id=$(sudo docker ps -qa) && sudo docker start $id && sudo docker exec -it $id bash

# Clean up
# docker system prune -a --volumes

# Update Ubuntu drivers
# https://www.maketecheasier.com/install-nvidia-drivers-ubuntu/

# DDP test
# python -m torch.distributed.run --nproc_per_node 2 --master_port 1 train.py --epochs 3

# GCP VM from Image
# docker.io/ultralytics/yolov5:latest


================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.


================================================
FILE: README.md
================================================
## Repository Description:

This repository documents the PC-side workflow for the EASY-EAI-Nano (RV1126): training a model, single-step model testing, and converting the PyTorch model to an ONNX model, using mask detection as the running example. For deploying the model onto the hardware board, refer to the complete online documentation tutorial at the link below:

## Environment:

python version >= 3.6

pytorch version >= 1.7

onnx version >= 1.11
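
To quickly confirm that an environment meets these requirements, a minimal check like the following can be run (it only assumes torch and onnx are already installed):
```python
# Minimal environment check (sketch): verify interpreter and package versions.
import sys

import onnx
import torch

print(f'python : {sys.version.split()[0]}')   # expect >= 3.6
print(f'pytorch: {torch.__version__}')        # expect >= 1.7
print(f'onnx   : {onnx.__version__}')         # expect >= 1.11
print(f'CUDA available: {torch.cuda.is_available()}')
```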

## Data Preparation
Baidu Netdisk link for the mask detection dataset: https://pan.baidu.com/s/1vtxWurn1Mqu-wJ017eaQrw (extraction code: 6666)

After extracting the dataset, run the following script (included in the dataset) to generate train.txt and valid.txt:
```python
python list_dataset_file.py
```
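
The list_dataset_file.py script ships inside the dataset archive, so its exact contents are not part of this repository. Purely as an illustration, a generator of this kind usually just writes the absolute image paths of each split into a text file; the directory layout below is an assumption, not the actual dataset structure:
```python
# Illustrative sketch only -- NOT the list_dataset_file.py shipped with the dataset.
# Assumes a hypothetical layout of <split>/images/*.jpg under the dataset root.
from pathlib import Path

root = Path('.')  # run from the extracted dataset directory
for split, list_file in (('train', 'train.txt'), ('valid', 'valid.txt')):
    images = sorted((root / split / 'images').glob('*.jpg'))
    with open(list_file, 'w') as f:
        f.writelines(f'{img.resolve()}\n' for img in images)
    print(f'{list_file}: {len(images)} images')
```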


## Model Training
To train a mask detection model, update the train.txt and valid.txt paths inside "data/mask.yaml". The training command is shown below (with recommended batch sizes for the other model variants):
```python
python train.py --data mask.yaml --cfg yolov5s.yaml --weights "" --batch-size 64
                                       yolov5m                                40
                                       yolov5l                                24
                                       yolov5x                                16
```
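Before launching training, it can help to confirm that the paths in "data/mask.yaml" really point at the train.txt and valid.txt generated above; a minimal sanity check (keys taken from the sample config in this repo):
```python
# Sanity-check data/mask.yaml before training (sketch).
from pathlib import Path

import yaml  # PyYAML, already required by this repo

cfg = yaml.safe_load(open('data/mask.yaml'))
for split in ('train', 'val'):
    p = Path(cfg[split])
    assert p.is_file(), f'{split} list not found: {p}'
    print(f'{split}: {sum(1 for _ in open(p))} images listed in {p}')
```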
After training completes, the resulting weights (e.g. best.pt) are saved under "runs/train/exp/weights".

## Model Prediction
Test the trained model:
```python
python detect.py --source data/images --weights ./runs/train/exp/weights/best.pt --conf 0.5
```
The detection results are saved under "runs/detect":
<img src="./photo/image.jpg">
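
Besides detect.py, the trained weights can also be loaded through the standard YOLOv5 torch.hub interface exposed by hubconf.py; a minimal sketch (the image path is a placeholder):
```python
# Minimal inference sketch via torch.hub, using this repo's hubconf.py.
import torch

# Run from the repository root; 'path/to/image.jpg' is a placeholder.
model = torch.hub.load('.', 'custom', path='runs/train/exp/weights/best.pt', source='local')
model.conf = 0.5                       # confidence threshold, matching the example above
results = model('path/to/image.jpg')   # accepts a path, URL, PIL image or numpy array
results.print()                        # print a summary of the detections
results.save()                         # save the annotated image under runs/detect/
```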


## Model Export
Run the following command to convert the .pt model to an ONNX model; best.anchors.txt is generated at the same time:
```python
python export.py --include onnx --rknpu RV1126 --weights ./runs/train/exp/weights/best.pt
```
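
Before moving on to the RKNN toolchain, the exported ONNX file can be sanity-checked on the PC with onnxruntime; a minimal sketch (note that with --rknpu RV1126 the exported graph may expose the raw detection heads, so the outputs still require post-processing):
```python
# Run the exported ONNX model once with dummy input (sketch).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('runs/train/exp/weights/best.onnx',
                            providers=['CPUExecutionProvider'])
inp = sess.get_inputs()[0]
print('input :', inp.name, inp.shape)                 # typically [1, 3, 640, 640]
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
for out in sess.run(None, {inp.name: dummy}):
    print('output:', out.shape)
```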


### Inference speed on the EASY-EAI-Nano NPU (unit: ms):

| Model (640x640 input)    | EASY-EAI-Nano (RV1126)  |
| :---------------------- | :-----------------------------------------:  |
| yolov5s, int8 quantized |   52    |
| yolov5m, int8 quantized |   93    |



## References:

https://github.com/ultralytics/yolov5

https://github.com/soloIife/yolov5_for_rknn


## Technical Discussion Group:

QQ group: 810456486





================================================
FILE: data/Argoverse.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/
# Example usage: python train.py --data Argoverse.yaml
# parent
# ├── yolov5
# └── datasets
#     └── Argoverse  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/Argoverse  # dataset root dir
train: Argoverse-1.1/images/train/  # train images (relative to 'path') 39384 images
val: Argoverse-1.1/images/val/  # val images (relative to 'path') 15062 images
test: Argoverse-1.1/images/test/  # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview

# Classes
nc: 8  # number of classes
names: ['person',  'bicycle',  'car',  'motorcycle',  'bus',  'truck',  'traffic_light',  'stop_sign']  # class names


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  import json

  from tqdm import tqdm
  from utils.general import download, Path


  def argoverse2yolo(set):
      labels = {}
      a = json.load(open(set, "rb"))
      for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv5 format..."):
          img_id = annot['image_id']
          img_name = a['images'][img_id]['name']
          img_label_name = img_name[:-3] + "txt"

          cls = annot['category_id']  # instance class id
          x_center, y_center, width, height = annot['bbox']
          x_center = (x_center + width / 2) / 1920.0  # offset and scale
          y_center = (y_center + height / 2) / 1200.0  # offset and scale
          width /= 1920.0  # scale
          height /= 1200.0  # scale

          img_dir = set.parents[2] / 'Argoverse-1.1' / 'labels' / a['seq_dirs'][a['images'][annot['image_id']]['sid']]
          if not img_dir.exists():
              img_dir.mkdir(parents=True, exist_ok=True)

          k = str(img_dir / img_label_name)
          if k not in labels:
              labels[k] = []
          labels[k].append(f"{cls} {x_center} {y_center} {width} {height}\n")

      for k in labels:
          with open(k, "w") as f:
              f.writelines(labels[k])


  # Download
  dir = Path('../datasets/Argoverse')  # dataset root dir
  urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip']
  download(urls, dir=dir, delete=False)

  # Convert
  annotations_dir = 'Argoverse-HD/annotations/'
  (dir / 'Argoverse-1.1' / 'tracking').rename(dir / 'Argoverse-1.1' / 'images')  # rename 'tracking' to 'images'
  for d in "train.json", "val.json":
      argoverse2yolo(dir / annotations_dir / d)  # convert Argoverse annotations to YOLO labels


================================================
FILE: data/GlobalWheat2020.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Global Wheat 2020 dataset http://www.global-wheat.com/
# Example usage: python train.py --data GlobalWheat2020.yaml
# parent
# ├── yolov5
# └── datasets
#     └── GlobalWheat2020  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/GlobalWheat2020  # dataset root dir
train: # train images (relative to 'path') 3422 images
  - images/arvalis_1
  - images/arvalis_2
  - images/arvalis_3
  - images/ethz_1
  - images/rres_1
  - images/inrae_1
  - images/usask_1
val: # val images (relative to 'path') 748 images (WARNING: train set contains ethz_1)
  - images/ethz_1
test: # test images (optional) 1276 images
  - images/utokyo_1
  - images/utokyo_2
  - images/nau_1
  - images/uq_1

# Classes
nc: 1  # number of classes
names: ['wheat_head']  # class names


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  from utils.general import download, Path

  # Download
  dir = Path(yaml['path'])  # dataset root dir
  urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',
          'https://github.com/ultralytics/yolov5/releases/download/v1.0/GlobalWheat2020_labels.zip']
  download(urls, dir=dir)

  # Make Directories
  for p in 'annotations', 'images', 'labels':
      (dir / p).mkdir(parents=True, exist_ok=True)

  # Move
  for p in 'arvalis_1', 'arvalis_2', 'arvalis_3', 'ethz_1', 'rres_1', 'inrae_1', 'usask_1', \
           'utokyo_1', 'utokyo_2', 'nau_1', 'uq_1':
      (dir / p).rename(dir / 'images' / p)  # move to /images
      f = (dir / p).with_suffix('.json')  # json file
      if f.exists():
          f.rename((dir / 'annotations' / p).with_suffix('.json'))  # move to /annotations


================================================
FILE: data/Objects365.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Objects365 dataset https://www.objects365.org/
# Example usage: python train.py --data Objects365.yaml
# parent
# ├── yolov5
# └── datasets
#     └── Objects365  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/Objects365  # dataset root dir
train: images/train  # train images (relative to 'path') 1742289 images
val: images/val # val images (relative to 'path') 80000 images
test:  # test images (optional)

# Classes
nc: 365  # number of classes
names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
        'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
        'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
        'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
        'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
        'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
        'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
        'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
        'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
        'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
        'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
        'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
        'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
        'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
        'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
        'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
        'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
        'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
        'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
        'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
        'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
        'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
        'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
        'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
        'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
        'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
        'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
        'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
        'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
        'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
        'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
        'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
        'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
        'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
        'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
        'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
        'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
        'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
        'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
        'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
        'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  from pycocotools.coco import COCO
  from tqdm import tqdm

  from utils.general import Path, download, np, xyxy2xywhn

  # Make Directories
  dir = Path(yaml['path'])  # dataset root dir
  for p in 'images', 'labels':
      (dir / p).mkdir(parents=True, exist_ok=True)
      for q in 'train', 'val':
          (dir / p / q).mkdir(parents=True, exist_ok=True)

  # Train, Val Splits
  for split, patches in [('train', 50 + 1), ('val', 43 + 1)]:
      print(f"Processing {split} in {patches} patches ...")
      images, labels = dir / 'images' / split, dir / 'labels' / split

      # Download
      url = f"https://dorc.ks3-cn-beijing.ksyun.com/data-set/2020Objects365%E6%95%B0%E6%8D%AE%E9%9B%86/{split}/"
      if split == 'train':
          download([f'{url}zhiyuan_objv2_{split}.tar.gz'], dir=dir, delete=False)  # annotations json
          download([f'{url}patch{i}.tar.gz' for i in range(patches)], dir=images, curl=True, delete=False, threads=8)
      elif split == 'val':
          download([f'{url}zhiyuan_objv2_{split}.json'], dir=dir, delete=False)  # annotations json
          download([f'{url}images/v1/patch{i}.tar.gz' for i in range(15 + 1)], dir=images, curl=True, delete=False, threads=8)
          download([f'{url}images/v2/patch{i}.tar.gz' for i in range(16, patches)], dir=images, curl=True, delete=False, threads=8)

      # Move
      for f in tqdm(images.rglob('*.jpg'), desc=f'Moving {split} images'):
          f.rename(images / f.name)  # move to /images/{split}

      # Labels
      coco = COCO(dir / f'zhiyuan_objv2_{split}.json')
      names = [x["name"] for x in coco.loadCats(coco.getCatIds())]
      for cid, cat in enumerate(names):
          catIds = coco.getCatIds(catNms=[cat])
          imgIds = coco.getImgIds(catIds=catIds)
          for im in tqdm(coco.loadImgs(imgIds), desc=f'Class {cid + 1}/{len(names)} {cat}'):
              width, height = im["width"], im["height"]
              path = Path(im["file_name"])  # image filename
              try:
                  with open(labels / path.with_suffix('.txt').name, 'a') as file:
                      annIds = coco.getAnnIds(imgIds=im["id"], catIds=catIds, iscrowd=None)
                      for a in coco.loadAnns(annIds):
                          x, y, w, h = a['bbox']  # bounding box in xywh (xy top-left corner)
                          xyxy = np.array([x, y, x + w, y + h])[None]  # pixels(1,4)
                          x, y, w, h = xyxy2xywhn(xyxy, w=width, h=height, clip=True)[0]  # normalized and clipped
                          file.write(f"{cid} {x:.5f} {y:.5f} {w:.5f} {h:.5f}\n")
              except Exception as e:
                  print(e)


================================================
FILE: data/SKU-110K.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19
# Example usage: python train.py --data SKU-110K.yaml
# parent
# ├── yolov5
# └── datasets
#     └── SKU-110K  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/SKU-110K  # dataset root dir
train: train.txt  # train images (relative to 'path')  8219 images
val: val.txt  # val images (relative to 'path')  588 images
test: test.txt  # test images (optional)  2936 images

# Classes
nc: 1  # number of classes
names: ['object']  # class names


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  import shutil
  from tqdm import tqdm
  from utils.general import np, pd, Path, download, xyxy2xywh

  # Download
  dir = Path(yaml['path'])  # dataset root dir
  parent = Path(dir.parent)  # download dir
  urls = ['http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz']
  download(urls, dir=parent, delete=False)

  # Rename directories
  if dir.exists():
      shutil.rmtree(dir)
  (parent / 'SKU110K_fixed').rename(dir)  # rename dir
  (dir / 'labels').mkdir(parents=True, exist_ok=True)  # create labels dir

  # Convert labels
  names = 'image', 'x1', 'y1', 'x2', 'y2', 'class', 'image_width', 'image_height'  # column names
  for d in 'annotations_train.csv', 'annotations_val.csv', 'annotations_test.csv':
      x = pd.read_csv(dir / 'annotations' / d, names=names).values  # annotations
      images, unique_images = x[:, 0], np.unique(x[:, 0])
      with open((dir / d).with_suffix('.txt').__str__().replace('annotations_', ''), 'w') as f:
          f.writelines(f'./images/{s}\n' for s in unique_images)
      for im in tqdm(unique_images, desc=f'Converting {dir / d}'):
          cls = 0  # single-class dataset
          with open((dir / 'labels' / im).with_suffix('.txt'), 'a') as f:
              for r in x[images == im]:
                  w, h = r[6], r[7]  # image width, height
                  xywh = xyxy2xywh(np.array([[r[1] / w, r[2] / h, r[3] / w, r[4] / h]]))[0]  # instance
                  f.write(f"{cls} {xywh[0]:.5f} {xywh[1]:.5f} {xywh[2]:.5f} {xywh[3]:.5f}\n")  # write label


================================================
FILE: data/VisDrone.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset
# Example usage: python train.py --data VisDrone.yaml
# parent
# ├── yolov5
# └── datasets
#     └── VisDrone  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/VisDrone  # dataset root dir
train: VisDrone2019-DET-train/images  # train images (relative to 'path')  6471 images
val: VisDrone2019-DET-val/images  # val images (relative to 'path')  548 images
test: VisDrone2019-DET-test-dev/images  # test images (optional)  1610 images

# Classes
nc: 10  # number of classes
names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  from utils.general import download, os, Path

  def visdrone2yolo(dir):
      from PIL import Image
      from tqdm import tqdm

      def convert_box(size, box):
          # Convert VisDrone box to YOLO xywh box
          dw = 1. / size[0]
          dh = 1. / size[1]
          return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh

      (dir / 'labels').mkdir(parents=True, exist_ok=True)  # make labels directory
      pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
      for f in pbar:
          img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
          lines = []
          with open(f, 'r') as file:  # read annotation.txt
              for row in [x.split(',') for x in file.read().strip().splitlines()]:
                  if row[4] == '0':  # VisDrone 'ignored regions' class 0
                      continue
                  cls = int(row[5]) - 1
                  box = convert_box(img_size, tuple(map(int, row[:4])))
                  lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
                  with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
                      fl.writelines(lines)  # write label.txt


  # Download
  dir = Path(yaml['path'])  # dataset root dir
  urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-train.zip',
          'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',
          'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',
          'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']
  download(urls, dir=dir)

  # Convert
  for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
      visdrone2yolo(dir / d)  # convert VisDrone annotations to YOLO labels


================================================
FILE: data/coco.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# COCO 2017 dataset http://cocodataset.org
# Example usage: python train.py --data coco.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
train: D:/ai_project/coco/coco2017labels/coco/train2017.txt  # 118287 images
val: D:/ai_project/coco/coco2017labels/coco/val2017.txt  # 5000 images
test: D:/ai_project/coco/coco2017labels/coco/test-dev2017.txt  # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794

# Classes
nc: 80  # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
        'hair drier', 'toothbrush']  # class names


# Download script/URL (optional)


================================================
FILE: data/coco128.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco128  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes
nc: 80  # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
        'hair drier', 'toothbrush']  # class names


# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128.zip


================================================
FILE: data/hyps/hyp.finetune.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for VOC finetuning
# python train.py --batch 64 --weights yolov5m.pt --data VOC.yaml --img 512 --epochs 50
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

# Hyperparameter Evolution Results
# Generations: 306
#                   P         R     mAP.5 mAP.5:.95       box       obj       cls
# Metrics:        0.6     0.936     0.896     0.684    0.0115   0.00805   0.00146

lr0: 0.0032
lrf: 0.12
momentum: 0.843
weight_decay: 0.00036
warmup_epochs: 2.0
warmup_momentum: 0.5
warmup_bias_lr: 0.05
box: 0.0296
cls: 0.243
cls_pw: 0.631
obj: 0.301
obj_pw: 0.911
iou_t: 0.2
anchor_t: 2.91
# anchors: 3.63
fl_gamma: 0.0
hsv_h: 0.0138
hsv_s: 0.664
hsv_v: 0.464
degrees: 0.373
translate: 0.245
scale: 0.898
shear: 0.602
perspective: 0.0
flipud: 0.00856
fliplr: 0.5
mosaic: 1.0
mixup: 0.243
copy_paste: 0.0


================================================
FILE: data/hyps/hyp.finetune_objects365.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

lr0: 0.00258
lrf: 0.17
momentum: 0.779
weight_decay: 0.00058
warmup_epochs: 1.33
warmup_momentum: 0.86
warmup_bias_lr: 0.0711
box: 0.0539
cls: 0.299
cls_pw: 0.825
obj: 0.632
obj_pw: 1.0
iou_t: 0.2
anchor_t: 3.44
anchors: 3.2
fl_gamma: 0.0
hsv_h: 0.0188
hsv_s: 0.704
hsv_v: 0.36
degrees: 0.0
translate: 0.0902
scale: 0.491
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0


================================================
FILE: data/hyps/hyp.scratch-high.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for high-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.2  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.3  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 0.7  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.9  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
copy_paste: 0.1  # segment copy-paste (probability)


================================================
FILE: data/hyps/hyp.scratch-low.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for low-augmentation COCO training from scratch
# python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)


================================================
FILE: data/hyps/hyp.scratch-med.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for medium-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.3  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 0.7  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.9  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)


================================================
FILE: data/hyps/hyp.scratch.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for COCO training from scratch
# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)


================================================
FILE: data/mask.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Mask detection dataset (see README for the download link)
# Example usage: python train.py --data mask.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
train: E:/dataset/mask/train.txt
val: E:/dataset/mask/valid.txt 
test: E:/dataset/mask/valid.txt  

# Classes
nc: 2  # number of classes
names: ['head', 'mask']  # class names


# Download script/URL (optional)


================================================
FILE: data/scripts/download_weights.sh
================================================
#!/bin/bash
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Download latest models from https://github.com/ultralytics/yolov5/releases
# Example usage: bash path/to/download_weights.sh
# parent
# └── yolov5
#     ├── yolov5s.pt  ← downloads here
#     ├── yolov5m.pt
#     └── ...

python - <<EOF
from utils.downloads import attempt_download

models = ['n', 's', 'm', 'l', 'x']
models.extend([x + '6' for x in models])  # add P6 models

for x in models:
    attempt_download(f'yolov5{x}.pt')

EOF


================================================
FILE: data/scripts/get_coco.sh
================================================
#!/bin/bash
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Download COCO 2017 dataset http://cocodataset.org
# Example usage: bash data/scripts/get_coco.sh
# parent
# ├── yolov5
# └── datasets
#     └── coco  ← downloads here

# Download/unzip labels
d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &

# Download/unzip images
d='../datasets/coco/images' # unzip directory
url=http://images.cocodataset.org/zips/
f1='train2017.zip' # 19G, 118k images
f2='val2017.zip'   # 1G, 5k images
f3='test2017.zip'  # 7G, 41k images (optional)
for f in $f1 $f2; do
  echo 'Downloading' $url$f '...'
  curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
done
wait # finish background tasks


================================================
FILE: data/scripts/get_coco128.sh
================================================
#!/bin/bash
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: bash data/scripts/get_coco128.sh
# parent
# ├── yolov5
# └── datasets
#     └── coco128  ← downloads here

# Download/unzip images and labels
d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco128.zip' # or 'coco128-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &

wait # finish background tasks


================================================
FILE: data/voc.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC
# Example usage: python train.py --data VOC.yaml
# parent
# ├── yolov5
# └── datasets
#     └── VOC  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/VOC
train: # train images (relative to 'path')  16551 images
  - images/train2012
  - images/train2007
  - images/val2012
  - images/val2007
val: # val images (relative to 'path')  4952 images
  - images/test2007
test: # test images (optional)
  - images/test2007

# Classes
nc: 20  # number of classes
names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
        'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']  # class names


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  import xml.etree.ElementTree as ET

  from tqdm import tqdm
  from utils.general import download, Path


  def convert_label(path, lb_path, year, image_id):
      def convert_box(size, box):
          dw, dh = 1. / size[0], 1. / size[1]
          x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
          return x * dw, y * dh, w * dw, h * dh

      in_file = open(path / f'VOC{year}/Annotations/{image_id}.xml')
      out_file = open(lb_path, 'w')
      tree = ET.parse(in_file)
      root = tree.getroot()
      size = root.find('size')
      w = int(size.find('width').text)
      h = int(size.find('height').text)

      for obj in root.iter('object'):
          cls = obj.find('name').text
          if cls in yaml['names'] and not int(obj.find('difficult').text) == 1:
              xmlbox = obj.find('bndbox')
              bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])
              cls_id = yaml['names'].index(cls)  # class id
              out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n')


  # Download
  dir = Path(yaml['path'])  # dataset root dir
  url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
  urls = [url + 'VOCtrainval_06-Nov-2007.zip',  # 446MB, 5012 images
          url + 'VOCtest_06-Nov-2007.zip',  # 438MB, 4953 images
          url + 'VOCtrainval_11-May-2012.zip']  # 1.95GB, 17126 images
  download(urls, dir=dir / 'images', delete=False)

  # Convert
  path = dir / f'images/VOCdevkit'
  for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):
      imgs_path = dir / 'images' / f'{image_set}{year}'
      lbs_path = dir / 'labels' / f'{image_set}{year}'
      imgs_path.mkdir(exist_ok=True, parents=True)
      lbs_path.mkdir(exist_ok=True, parents=True)

      image_ids = open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt').read().strip().split()
      for id in tqdm(image_ids, desc=f'{image_set}{year}'):
          f = path / f'VOC{year}/JPEGImages/{id}.jpg'  # old img path
          lb_path = (lbs_path / f.name).with_suffix('.txt')  # new label path
          f.rename(imgs_path / f.name)  # move image
          convert_label(path, lb_path, year, id)  # convert labels to YOLO format


================================================
FILE: data/xView.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# xView 2018 dataset https://challenge.xviewdataset.org
# --------  DOWNLOAD DATA MANUALLY from URL above and unzip to 'datasets/xView' before running train command!  --------
# Example usage: python train.py --data xView.yaml
# parent
# ├── yolov5
# └── datasets
#     └── xView  ← downloads here


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/xView  # dataset root dir
train: images/autosplit_train.txt  # train images (relative to 'path') 90% of 847 train images
val: images/autosplit_val.txt  # val images (relative to 'path') 10% of 847 train images

# Classes
nc: 60  # number of classes
names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
        'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
        'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
        'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
        'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
        'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
        'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
        'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
        'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower']  # class names


# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  import json
  import os
  from pathlib import Path

  import numpy as np
  from PIL import Image
  from tqdm import tqdm

  from utils.datasets import autosplit
  from utils.general import download, xyxy2xywhn


  def convert_labels(fname=Path('xView/xView_train.geojson')):
      # Convert xView geoJSON labels to YOLO format
      path = fname.parent
      with open(fname) as f:
          print(f'Loading {fname}...')
          data = json.load(f)

      # Make dirs
      labels = Path(path / 'labels' / 'train')
      os.system(f'rm -rf {labels}')
      labels.mkdir(parents=True, exist_ok=True)

      # xView classes 11-94 to 0-59
      xview_class2index = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11,
                           12, 13, 14, 15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1,
                           29, 30, 31, 32, 33, 34, 35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46,
                           47, 48, 49, -1, 50, 51, -1, 52, -1, -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]

      shapes = {}
      for feature in tqdm(data['features'], desc=f'Converting {fname}'):
          p = feature['properties']
          if p['bounds_imcoords']:
              id = p['image_id']
              file = path / 'train_images' / id
              if file.exists():  # 1395.tif missing
                  try:
                      box = np.array([int(num) for num in p['bounds_imcoords'].split(",")])
                      assert box.shape[0] == 4, f'incorrect box shape {box.shape[0]}'
                      cls = p['type_id']
                      cls = xview_class2index[int(cls)]  # xView class to 0-60
                      assert 59 >= cls >= 0, f'incorrect class index {cls}'

                      # Write YOLO label
                      if id not in shapes:
                          shapes[id] = Image.open(file).size
                      box = xyxy2xywhn(box[None].astype(np.float), w=shapes[id][0], h=shapes[id][1], clip=True)
                      with open((labels / id).with_suffix('.txt'), 'a') as f:
                          f.write(f"{cls} {' '.join(f'{x:.6f}' for x in box[0])}\n")  # write label.txt
                  except Exception as e:
                      print(f'WARNING: skipping one label for {file}: {e}')
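
  # Label format produced above (illustrative example, not actual dataset values): each line of
  # labels/train/<image_id>.txt is "<class> <x_center> <y_center> <width> <height>", with the box
  # normalized to [0, 1] by xyxy2xywhn, e.g. "5 0.483512 0.291047 0.012500 0.007813".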


  # Download manually from https://challenge.xviewdataset.org
  dir = Path(yaml['path'])  # dataset root dir
  # urls = ['https://d307kc0mrhucc3.cloudfront.net/train_labels.zip',  # train labels
  #         'https://d307kc0mrhucc3.cloudfront.net/train_images.zip',  # 15G, 847 train images
  #         'https://d307kc0mrhucc3.cloudfront.net/val_images.zip']  # 5G, 282 val images (no labels)
  # download(urls, dir=dir, delete=False)

  # Convert labels
  convert_labels(dir / 'xView_train.geojson')

  # Move images
  images = Path(dir / 'images')
  images.mkdir(parents=True, exist_ok=True)
  Path(dir / 'train_images').rename(dir / 'images' / 'train')
  Path(dir / 'val_images').rename(dir / 'images' / 'val')

  # Split
  autosplit(dir / 'images' / 'train')


================================================
FILE: detect.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Run inference on images, videos, directories, streams, etc.

Usage:
    $ python path/to/detect.py --weights yolov5s.pt --source 0  # webcam
                                                             img.jpg  # image
                                                             vid.mp4  # video
                                                             path/  # directory
                                                             path/*.jpg  # glob
                                                             'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                                             'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
"""

import argparse
import os
import sys
from pathlib import Path

import cv2
import torch
import torch.backends.cudnn as cudnn

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.common import DetectMultiBackend
from utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,
                           increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, time_sync


@torch.no_grad()
def run(weights=ROOT / 'yolov5s.pt',  # model.pt path(s)
        source=ROOT / 'data/images',  # file/dir/URL/glob, 0 for webcam
        imgsz=(640, 640),  # inference size (height, width)
        conf_thres=0.25,  # confidence threshold
        iou_thres=0.45,  # NMS IOU threshold
        max_det=1000,  # maximum detections per image
        device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        view_img=False,  # show results
        save_txt=False,  # save results to *.txt
        save_conf=False,  # save confidences in --save-txt labels
        save_crop=False,  # save cropped prediction boxes
        nosave=False,  # do not save images/videos
        classes=None,  # filter by class: --class 0, or --class 0 2 3
        agnostic_nms=False,  # class-agnostic NMS
        augment=False,  # augmented inference
        visualize=False,  # visualize features
        update=False,  # update all models
        project=ROOT / 'runs/detect',  # save results to project/name
        name='exp',  # save results to project/name
        exist_ok=False,  # existing project/name ok, do not increment
        line_thickness=3,  # bounding box thickness (pixels)
        hide_labels=False,  # hide labels
        hide_conf=False,  # hide confidences
        half=False,  # use FP16 half-precision inference
        dnn=False,  # use OpenCV DNN for ONNX inference
        ):
    source = str(source)
    save_img = not nosave and not source.endswith('.txt')  # save inference images
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
    is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
    webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
    if is_url and is_file:
        source = check_file(source)  # download

    # Directories
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Load model
    device = select_device(device)
    model = DetectMultiBackend(weights, device=device, dnn=dnn)
    stride, names, pt, jit, onnx, engine = model.stride, model.names, model.pt, model.jit, model.onnx, model.engine
    imgsz = check_img_size(imgsz, s=stride)  # check image size

    # Half
    half &= (pt or jit or engine) and device.type != 'cpu'  # half precision only supported by PyTorch on CUDA
    if pt or jit:
        model.model.half() if half else model.model.float()

    # Dataloader
    if webcam:
        view_img = check_imshow()
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)
        bs = len(dataset)  # batch_size
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)
        bs = 1  # batch_size
    vid_path, vid_writer = [None] * bs, [None] * bs

    # Run inference
    model.warmup(imgsz=(1, 3, *imgsz), half=half)  # warmup
    dt, seen = [0.0, 0.0, 0.0], 0
    for path, im, im0s, vid_cap, s in dataset:
        t1 = time_sync()
        im = torch.from_numpy(im).to(device)
        im = im.half() if half else im.float()  # uint8 to fp16/32
        im /= 255  # 0 - 255 to 0.0 - 1.0
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim
        t2 = time_sync()
        dt[0] += t2 - t1

        # Inference
        visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
        pred = model(im, augment=augment, visualize=visualize)
        t3 = time_sync()
        dt[1] += t3 - t2

        # NMS
        pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
        dt[2] += time_sync() - t3

        # Second-stage classifier (optional)
        # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)

        # Process predictions
        for i, det in enumerate(pred):  # per image
            seen += 1
            if webcam:  # batch_size >= 1
                p, im0, frame = path[i], im0s[i].copy(), dataset.count
                s += f'{i}: '
            else:
                p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # im.jpg
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # im.txt
            s += '%gx%g ' % im.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            imc = im0.copy() if save_crop else im0  # for save_crop
            annotator = Annotator(im0, line_width=line_thickness, example=str(names))
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    if save_txt:  # Write to file
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                        with open(txt_path + '.txt', 'a') as f:
                            f.write(('%g ' * len(line)).rstrip() % line + '\n')

                    if save_img or save_crop or view_img:  # Add bbox to image
                        c = int(cls)  # integer class
                        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                        annotator.box_label(xyxy, label, color=colors(c, True))
                        if save_crop:
                            save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

            # Print time (inference-only)
            LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')

            # Stream results
            im0 = annotator.result()
            if view_img:
                cv2.imshow(str(p), im0)
                cv2.waitKey(1)  # 1 millisecond

            # Save results (image with detections)
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
                else:  # 'video' or 'stream'
                    if vid_path[i] != save_path:  # new video
                        vid_path[i] = save_path
                        if isinstance(vid_writer[i], cv2.VideoWriter):
                            vid_writer[i].release()  # release previous video writer
                        if vid_cap:  # video
                            fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        else:  # stream
                            fps, w, h = 30, im0.shape[1], im0.shape[0]
                            save_path += '.mp4'
                        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
                    vid_writer[i].write(im0)

    # Print results
    t = tuple(x / seen * 1E3 for x in dt)  # speeds per image
    LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
    if update:
        strip_optimizer(weights)  # update model (to fix SourceChangeWarning)


def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='show results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--visualize', action='store_true', help='visualize features')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    opt = parser.parse_args()
    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
    print_args(FILE.stem, opt)
    return opt


def main(opt):
    check_requirements(exclude=('tensorboard', 'thop'))
    run(**vars(opt))


if __name__ == "__main__":
    opt = parse_opt()
    main(opt)


================================================
FILE: export.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit

Format                  | Example                   | `--include ...` argument
---                     | ---                       | ---
PyTorch                 | yolov5s.pt                | -
TorchScript             | yolov5s.torchscript       | `torchscript`
ONNX                    | yolov5s.onnx              | `onnx`
CoreML                  | yolov5s.mlmodel           | `coreml`
TensorFlow SavedModel   | yolov5s_saved_model/      | `saved_model`
TensorFlow GraphDef     | yolov5s.pb                | `pb`
TensorFlow Lite         | yolov5s.tflite            | `tflite`
TensorFlow.js           | yolov5s_web_model/        | `tfjs`
TensorRT                | yolov5s.engine            | `engine`

Usage:
    $ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml saved_model pb tflite tfjs

Inference:
    $ python path/to/detect.py --weights yolov5s.pt
                                         yolov5s.torchscript
                                         yolov5s.onnx
                                         yolov5s.mlmodel  (under development)
                                         yolov5s_saved_model
                                         yolov5s.pb
                                         yolov5s.tflite
                                         yolov5s.engine

TensorFlow.js:
    $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
    $ npm install
    $ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model
    $ npm start
"""

import argparse
import json
import os
import subprocess
import sys
import time
from pathlib import Path

# Activate the RKNN model hack before models.common is imported (it reads RKNN_model_hack at import time)
if len(sys.argv) >= 3 and '--rknpu' in sys.argv:
    _index = sys.argv.index('--rknpu')
    if sys.argv[_index + 1].upper() in ['RK1808', 'RV1109', 'RV1126', 'RK3399PRO']:
        os.environ['RKNN_model_hack'] = 'npu_1'
    elif sys.argv[_index + 1].upper() in ['RK3566', 'RK3568', 'RK3588', 'RK3588S']:
        os.environ['RKNN_model_hack'] = 'npu_2'
    else:
        assert False, '{} not recognized'.format(sys.argv[_index + 1])

import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.common import Conv
from models.experimental import attempt_load
from models.yolo import Detect
from utils.activations import SiLU
from utils.datasets import LoadImages
from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args,
                           url2file)
from utils.torch_utils import select_device


def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):
    # YOLOv5 TorchScript model export
    try:
        LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
        f = file.with_suffix('.torchscript')

        ts = torch.jit.trace(model, im, strict=False)
        d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names}
        extra_files = {'config.txt': json.dumps(d)}  # torch._C.ExtraFilesMap()
        (optimize_for_mobile(ts) if optimize else ts).save(str(f), _extra_files=extra_files)

        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
    except Exception as e:
        LOGGER.info(f'{prefix} export failure: {e}')


def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')):
    # YOLOv5 ONNX export
    try:
        check_requirements(('onnx',))
        import onnx

        LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
        f = file.with_suffix('.onnx')

        torch.onnx.export(model, im, f, verbose=False, opset_version=opset,
                          training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
                          do_constant_folding=not train,
                          input_names=['images'],
                          output_names=['output'],
                          dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
                                        'output': {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
                                        } if dynamic else None)

        # Checks
        model_onnx = onnx.load(f)  # load onnx model
        onnx.checker.check_model(model_onnx)  # check onnx model
        # LOGGER.info(onnx.helper.printable_graph(model_onnx.graph))  # print

        # Simplify
        if simplify:
            try:
                check_requirements(('onnx-simplifier',))
                import onnxsim

                LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
                model_onnx, check = onnxsim.simplify(
                    model_onnx,
                    dynamic_input_shape=dynamic,
                    input_shapes={'images': list(im.shape)} if dynamic else None)
                assert check, 'assert check failed'
                onnx.save(model_onnx, f)
            except Exception as e:
                LOGGER.info(f'{prefix} simplifier failure: {e}')
        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
        LOGGER.info(f"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'")
    except Exception as e:
        LOGGER.info(f'{prefix} export failure: {e}')


def export_coreml(model, im, file, prefix=colorstr('CoreML:')):
    # YOLOv5 CoreML export
    ct_model = None
    try:
        check_requirements(('coremltools',))
        import coremltools as ct

        LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
        f = file.with_suffix('.mlmodel')

        model.train()  # CoreML exports should be placed in model.train() mode
        ts = torch.jit.trace(model, im, strict=False)  # TorchScript model
        ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
        ct_model.save(f)

        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
    except Exception as e:
        LOGGER.info(f'\n{prefix} export failure: {e}')

    return ct_model


def export_saved_model(model, im, file, dynamic,
                       tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,
                       conf_thres=0.25, prefix=colorstr('TensorFlow saved_model:')):
    # YOLOv5 TensorFlow saved_model export
    keras_model = None
    try:
        import tensorflow as tf
        from tensorflow import keras

        from models.tf import TFDetect, TFModel

        LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
        f = str(file).replace('.pt', '_saved_model')
        batch_size, ch, *imgsz = list(im.shape)  # BCHW

        tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
        im = tf.zeros((batch_size, *imgsz, 3))  # BHWC order for TensorFlow
        y = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
        inputs = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
        outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
        keras_model = keras.Model(inputs=inputs, outputs=outputs)
        keras_model.trainable = False
        keras_model.summary()
        keras_model.save(f, save_format='tf')

        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
    except Exception as e:
        LOGGER.info(f'\n{prefix} export failure: {e}')

    return keras_model


def export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDef:')):
    # YOLOv5 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
    try:
        import tensorflow as tf
        from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

        LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
        f = file.with_suffix('.pb')

        m = tf.function(lambda x: keras_model(x))  # full model
        m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
        frozen_func = convert_variables_to_constants_v2(m)
        frozen_func.graph.as_graph_def()
        tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)

        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
    except Exception as e:
        LOGGER.info(f'\n{prefix} export failure: {e}')


def export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colorstr('TensorFlow Lite:')):
    # YOLOv5 TensorFlow Lite export
    try:
        import tensorflow as tf

        from models.tf import representative_dataset_gen

        LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
        batch_size, ch, *imgsz = list(im.shape)  # BCHW
        f = str(file).replace('.pt', '-fp16.tflite')

        converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
        converter.target_spec.supported_types = [tf.float16]
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        if int8:
            dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False)  # representative data
            converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib)
            converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
            converter.target_spec.supported_types = []
            converter.inference_input_type = tf.uint8  # or tf.int8
            converter.inference_output_type = tf.uint8  # or tf.int8
            converter.experimental_new_quantizer = False
            f = str(file).replace('.pt', '-int8.tflite')

        tflite_model = converter.convert()
        open(f, "wb").write(tflite_model)
        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')

    except Exception as e:
        LOGGER.info(f'\n{prefix} export failure: {e}')


def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):
    # YOLOv5 TensorFlow.js export
    try:
        check_requirements(('tensorflowjs',))
        import re

        import tensorflowjs as tfjs

        LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
        f = str(file).replace('.pt', '_web_model')  # js dir
        f_pb = file.with_suffix('.pb')  # *.pb path
        f_json = f + '/model.json'  # *.json path

        cmd = f"tensorflowjs_converter --input_format=tf_frozen_model " \
              f"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}"
        subprocess.run(cmd, shell=True)

        json = open(f_json).read()
        with open(f_json, 'w') as j:  # sort JSON Identity_* in ascending order
            subst = re.sub(
                r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
                r'"Identity.?.?": {"name": "Identity.?.?"}, '
                r'"Identity.?.?": {"name": "Identity.?.?"}, '
                r'"Identity.?.?": {"name": "Identity.?.?"}}}',
                r'{"outputs": {"Identity": {"name": "Identity"}, '
                r'"Identity_1": {"name": "Identity_1"}, '
                r'"Identity_2": {"name": "Identity_2"}, '
                r'"Identity_3": {"name": "Identity_3"}}}',
                json)
            j.write(subst)

        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
    except Exception as e:
        LOGGER.info(f'\n{prefix} export failure: {e}')


def export_engine(model, im, file, train, half, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):
    try:
        check_requirements(('tensorrt',))
        import tensorrt as trt

        opset = (12, 13)[trt.__version__[0] == '8']  # opset 12 for TensorRT 7.x, 13 for 8.x
        export_onnx(model, im, file, opset, train, False, simplify)
        onnx = file.with_suffix('.onnx')
        assert onnx.exists(), f'failed to export ONNX file: {onnx}'

        LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...')
        f = file.with_suffix('.engine')  # TensorRT engine file
        logger = trt.Logger(trt.Logger.INFO)
        if verbose:
            logger.min_severity = trt.Logger.Severity.VERBOSE

        builder = trt.Builder(logger)
        config = builder.create_builder_config()
        config.max_workspace_size = workspace * 1 << 30

        flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        network = builder.create_network(flag)
        parser = trt.OnnxParser(network, logger)
        if not parser.parse_from_file(str(onnx)):
            raise RuntimeError(f'failed to load ONNX file: {onnx}')

        inputs = [network.get_input(i) for i in range(network.num_inputs)]
        outputs = [network.get_output(i) for i in range(network.num_outputs)]
        LOGGER.info(f'{prefix} Network Description:')
        for inp in inputs:
            LOGGER.info(f'{prefix}\tinput "{inp.name}" with shape {inp.shape} and dtype {inp.dtype}')
        for out in outputs:
            LOGGER.info(f'{prefix}\toutput "{out.name}" with shape {out.shape} and dtype {out.dtype}')

        half &= builder.platform_has_fast_fp16
        LOGGER.info(f'{prefix} building FP{16 if half else 32} engine in {f}')
        if half:
            config.set_flag(trt.BuilderFlag.FP16)
        with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
            t.write(engine.serialize())
        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')

    except Exception as e:
        LOGGER.info(f'\n{prefix} export failure: {e}')


@torch.no_grad()
def run(data=ROOT / 'data/coco128.yaml',  # 'dataset.yaml path'
        weights=ROOT / 'yolov5s.pt',  # weights path
        imgsz=(640, 640),  # image (height, width)
        batch_size=1,  # batch size
        device='cpu',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        include=('torchscript', 'onnx', 'coreml'),  # include formats
        half=False,  # FP16 half-precision export
        inplace=False,  # set YOLOv5 Detect() inplace=True
        train=False,  # model.train() mode
        optimize=False,  # TorchScript: optimize for mobile
        int8=False,  # CoreML/TF INT8 quantization
        dynamic=False,  # ONNX/TF: dynamic axes
        simplify=False,  # ONNX: simplify model
        opset=14,  # ONNX: opset version
        verbose=False,  # TensorRT: verbose log
        workspace=4,  # TensorRT: workspace size (GB)
        nms=False,  # TF: add NMS to model
        agnostic_nms=False,  # TF: add agnostic NMS to model
        topk_per_class=100,  # TF.js NMS: topk per class to keep
        topk_all=100,  # TF.js NMS: topk for all classes to keep
        iou_thres=0.45,  # TF.js NMS: IoU threshold
        conf_thres=0.25,  # TF.js NMS: confidence threshold
        rknn_friendly=False,  # RKNN-friendly export flag (not referenced in run(); RKNN behavior is driven by --rknpu / RKNN_model_hack)
        ):
    t = time.time()
    include = [x.lower() for x in include]
    tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs'))  # TensorFlow exports
    imgsz *= 2 if len(imgsz) == 1 else 1  # expand
    file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights)

    # Load PyTorch model
    device = select_device(device)
    assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. use --device 0'
    model = attempt_load(weights, map_location=device, inplace=True, fuse=True)  # load FP32 model
    nc, names = model.nc, model.names  # number of classes, class names

    # Input
    gs = int(max(model.stride))  # grid size (max stride)
    imgsz = [check_img_size(x, gs) for x in imgsz]  # verify img_size are gs-multiples
    im = torch.zeros(batch_size, 3, *imgsz).to(device)  # image size(1,3,320,192) BCHW iDetection

    # Update model
    if half:
        im, model = im.half(), model.half()  # to FP16
    model.train() if train else model.eval()  # training mode = no Detect() layer grid construction
    for k, m in model.named_modules():
        if isinstance(m, Conv):  # assign export-friendly activations
            if isinstance(m.act, nn.SiLU):
                m.act = SiLU()
        elif isinstance(m, Detect):
            m.inplace = inplace
            m.onnx_dynamic = dynamic
            # m.forward = m.forward_export  # assign forward (optional)

        if os.getenv('RKNN_model_hack', '0') == 'npu_1':
            from models.common import Focus
            from models.common_rk_plug_in import surrogate_focus
            if isinstance(model.model[0], Focus):
                # Focus stem (YOLOv5 releases up to v5.x)
                surrogate_focous = surrogate_focus(int(model.model[0].conv.conv.weight.shape[1]/4),
                                                model.model[0].conv.conv.weight.shape[0],
                                                k=tuple(model.model[0].conv.conv.weight.shape[2:4]),
                                                s=model.model[0].conv.conv.stride,
                                                p=model.model[0].conv.conv.padding,
                                                g=model.model[0].conv.conv.groups,
                                                act=True)
                surrogate_focous.conv.conv.weight = model.model[0].conv.conv.weight
                surrogate_focous.conv.conv.bias = model.model[0].conv.conv.bias
                surrogate_focous.conv.act = model.model[0].conv.act
                temp_i = model.model[0].i
                temp_f = model.model[0].f

                model.model[0] = surrogate_focous
                model.model[0].i = temp_i
                model.model[0].f = temp_f
                model.model[0].eval()
            elif isinstance(model.model[0], Conv) and model.model[0].conv.kernel_size == (6, 6):
                # 6x6 Conv stem (YOLOv5 v6.x releases)
                surrogate_focous = surrogate_focus(model.model[0].conv.weight.shape[1],
                                                model.model[0].conv.weight.shape[0],
                                                k=(3,3), # 6/2, 6/2
                                                s=1,
                                                p=(1,1), # 2/2, 2/2
                                                g=model.model[0].conv.groups,
                                                act=hasattr(model.model[0], 'act'))
                surrogate_focous.conv.conv.weight[:,:3,:,:] = model.model[0].conv.weight[:,:,::2,::2]
                surrogate_focous.conv.conv.weight[:,3:6,:,:] = model.model[0].conv.weight[:,:,1::2,::2]
                surrogate_focous.conv.conv.weight[:,6:9,:,:] = model.model[0].conv.weight[:,:,::2,1::2]
                surrogate_focous.conv.conv.weight[:,9:,:,:] = model.model[0].conv.weight[:,:,1::2,1::2]
                surrogate_focous.conv.conv.bias = model.model[0].conv.bias
                surrogate_focous.conv.act = model.model[0].act
                temp_i = model.model[0].i
                temp_f = model.model[0].f

                model.model[0] = surrogate_focous
                model.model[0].i = temp_i
                model.model[0].f = temp_f
                model.model[0].eval()

    # save anchors
    if isinstance(model.model[-1], Detect):
        print('---> save anchors for RKNN')
        RK_anchors = model.model[-1].stride.reshape(3, 1).repeat(1, 3).reshape(-1, 1) * model.model[-1].anchors.reshape(9, 2)
        RK_anchors = RK_anchors.tolist()
        print(RK_anchors)
        with open(file.with_suffix('.anchors.txt'), 'w') as anf:
            anf.write(str(RK_anchors))
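        # Added note: the resulting <weights>.anchors.txt holds nine [w, h] anchor pairs scaled to
        # input-image pixels (per-level stride * anchor), intended for RKNN post-processing.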

    for _ in range(2):
        y = model(im)  # dry runs
    LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} ({file_size(file):.1f} MB)")

    # Exports
    if 'torchscript' in include:
        export_torchscript(model, im, file, optimize)
    if 'onnx' in include:
        export_onnx(model, im, file, opset, train, dynamic, simplify)
    if 'engine' in include:
        export_engine(model, im, file, train, half, simplify, workspace, verbose)
    if 'coreml' in include:
        export_coreml(model, im, file)

    # TensorFlow Exports
    if any(tf_exports):
        pb, tflite, tfjs = tf_exports[1:]
        assert not (tflite and tfjs), 'TFLite and TF.js models must be exported separately, please pass only one type.'
        model = export_saved_model(model, im, file, dynamic, tf_nms=nms or agnostic_nms or tfjs,
                                   agnostic_nms=agnostic_nms or tfjs, topk_per_class=topk_per_class, topk_all=topk_all,
                                   conf_thres=conf_thres, iou_thres=iou_thres)  # keras model
        if pb or tfjs:  # pb prerequisite to tfjs
            export_pb(model, im, file)
        if tflite:
            export_tflite(model, im, file, int8=int8, data=data, ncalib=100)
        if tfjs:
            export_tfjs(model, im, file)

    # Finish
    LOGGER.info(f'\nExport complete ({time.time() - t:.2f}s)'
                f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
                f'\nVisualize with https://netron.app')


def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
    parser.add_argument('--train', action='store_true', help='model.train() mode')
    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
    parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version')
    parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')
    parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
    parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')
    parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')
    parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
    parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
    parser.add_argument('--include', nargs='+',
                        default=['torchscript'],
                        help='available formats are (torchscript, onnx, engine, coreml, saved_model, pb, tflite, tfjs)')
    parser.add_argument('--rknpu', default=None, help='RKNN npu platform')
    opt = parser.parse_args()
    print_args(FILE.stem, opt)
    return opt


def main(opt):
    for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]):
        run(**vars(opt))


if __name__ == "__main__":
    opt = parse_opt()
    del opt.rknpu
    main(opt)


================================================
FILE: hubconf.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/

Usage:
    import torch
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
    model = torch.hub.load('ultralytics/yolov5:master', 'custom', 'path/to/yolov5s.onnx')  # file from branch
"""

import torch


def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    """Creates a specified YOLOv5 model

    Arguments:
        name (str): name of model, i.e. 'yolov5s'
        pretrained (bool): load pretrained weights into the model
        channels (int): number of input channels
        classes (int): number of model classes
        autoshape (bool): apply YOLOv5 .autoshape() wrapper to model
        verbose (bool): print all information to screen
        device (str, torch.device, None): device to use for model parameters

    Returns:
        YOLOv5 pytorch model
    """
    from pathlib import Path

    from models.common import AutoShape, DetectMultiBackend
    from models.yolo import Model
    from utils.downloads import attempt_download
    from utils.general import check_requirements, intersect_dicts, set_logging
    from utils.torch_utils import select_device

    check_requirements(exclude=('tensorboard', 'thop', 'opencv-python'))
    set_logging(verbose=verbose)

    name = Path(name)
    path = name.with_suffix('.pt') if name.suffix == '' else name  # checkpoint path
    try:
        device = select_device(('0' if torch.cuda.is_available() else 'cpu') if device is None else device)

        if pretrained and channels == 3 and classes == 80:
            model = DetectMultiBackend(path, device=device)  # download/load FP32 model
            # model = models.experimental.attempt_load(path, map_location=device)  # download/load FP32 model
        else:
            cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0]  # model.yaml path
            model = Model(cfg, channels, classes)  # create model
            if pretrained:
                ckpt = torch.load(attempt_download(path), map_location=device)  # load
                csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
                csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors'])  # intersect
                model.load_state_dict(csd, strict=False)  # load
                if len(ckpt['model'].names) == classes:
                    model.names = ckpt['model'].names  # set class names attribute
        if autoshape:
            model = AutoShape(model)  # for file/URI/PIL/cv2/np inputs and NMS
        return model.to(device)

    except Exception as e:
        help_url = 'https://github.com/ultralytics/yolov5/issues/36'
        s = 'Cache may be out of date, try `force_reload=True`. See %s for help.' % help_url
        raise Exception(s) from e


def custom(path='path/to/model.pt', autoshape=True, verbose=True, device=None):
    # YOLOv5 custom or local model
    return _create(path, autoshape=autoshape, verbose=verbose, device=device)


def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-nano model https://github.com/ultralytics/yolov5
    return _create('yolov5n', pretrained, channels, classes, autoshape, verbose, device)


def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-small model https://github.com/ultralytics/yolov5
    return _create('yolov5s', pretrained, channels, classes, autoshape, verbose, device)


def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-medium model https://github.com/ultralytics/yolov5
    return _create('yolov5m', pretrained, channels, classes, autoshape, verbose, device)


def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-large model https://github.com/ultralytics/yolov5
    return _create('yolov5l', pretrained, channels, classes, autoshape, verbose, device)


def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-xlarge model https://github.com/ultralytics/yolov5
    return _create('yolov5x', pretrained, channels, classes, autoshape, verbose, device)


def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-nano-P6 model https://github.com/ultralytics/yolov5
    return _create('yolov5n6', pretrained, channels, classes, autoshape, verbose, device)


def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5
    return _create('yolov5s6', pretrained, channels, classes, autoshape, verbose, device)


def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5
    return _create('yolov5m6', pretrained, channels, classes, autoshape, verbose, device)


def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5
    return _create('yolov5l6', pretrained, channels, classes, autoshape, verbose, device)


def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
    # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5
    return _create('yolov5x6', pretrained, channels, classes, autoshape, verbose, device)


if __name__ == '__main__':
    model = _create(name='yolov5s', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True)  # pretrained
    # model = custom(path='path/to/model.pt')  # custom

    # Verify inference
    from pathlib import Path

    import cv2
    import numpy as np
    from PIL import Image

    imgs = ['data/images/zidane.jpg',  # filename
            Path('data/images/zidane.jpg'),  # Path
            'https://ultralytics.com/images/zidane.jpg',  # URI
            cv2.imread('data/images/bus.jpg')[:, :, ::-1],  # OpenCV
            Image.open('data/images/bus.jpg'),  # PIL
            np.zeros((320, 640, 3))]  # numpy

    results = model(imgs, size=320)  # batched inference
    results.print()
    results.save()


================================================
FILE: models/__init__.py
================================================


================================================
FILE: models/common.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Common modules
"""
import os
import json
import math
import platform
import warnings
from collections import OrderedDict, namedtuple
from copy import copy
from pathlib import Path

import cv2
import numpy as np
import pandas as pd
import requests
import torch
import torch.nn as nn
from PIL import Image
from torch.cuda import amp

from utils.datasets import exif_transpose, letterbox
from utils.general import (LOGGER, check_requirements, check_suffix, colorstr, increment_path, make_divisible,
                           non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import copy_attr, time_sync


def autopad(k, p=None):  # kernel, padding
    # Pad to 'same'
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p
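
# Illustrative examples (added): autopad(3) -> 1 and autopad((3, 5)) -> [1, 2], i.e. padding that
# preserves spatial size for stride-1 convolutions.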


class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.ReLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        return self.act(self.conv(x))


class DWConv(Conv):
    # Depth-wise convolution class
    def __init__(self, c1, c2, k=1, s=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)


class TransformerLayer(nn.Module):
    # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
    def __init__(self, c, num_heads):
        super().__init__()
        self.q = nn.Linear(c, c, bias=False)
        self.k = nn.Linear(c, c, bias=False)
        self.v = nn.Linear(c, c, bias=False)
        self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
        self.fc1 = nn.Linear(c, c, bias=False)
        self.fc2 = nn.Linear(c, c, bias=False)

    def forward(self, x):
        x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
        x = self.fc2(self.fc1(x)) + x
        return x


class TransformerBlock(nn.Module):
    # Vision Transformer https://arxiv.org/abs/2010.11929
    def __init__(self, c1, c2, num_heads, num_layers):
        super().__init__()
        self.conv = None
        if c1 != c2:
            self.conv = Conv(c1, c2)
        self.linear = nn.Linear(c2, c2)  # learnable position embedding
        self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
        self.c2 = c2

    def forward(self, x):
        if self.conv is not None:
            x = self.conv(x)
        b, _, w, h = x.shape
        p = x.flatten(2).permute(2, 0, 1)
        return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)


class Bottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_, c2, 3, 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class BottleneckCSP(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)
        self.act = nn.ReLU()
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))


class C3(nn.Module):
    # CSP Bottleneck with 3 convolutions
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # act=FReLU(c2)
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
        # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))


class C3TR(C3):
    # C3 module with TransformerBlock()
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)
        self.m = TransformerBlock(c_, c_, 4, n)


class C3SPP(C3):
    # C3 module with SPP()
    def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)
        self.m = SPP(c_, c_, k)


class C3Ghost(C3):
    # C3 module with GhostBottleneck()
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))

if os.getenv('RKNN_model_hack', '0') == '0':
    class SPP(nn.Module):
        # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
        def __init__(self, c1, c2, k=(5, 9, 13)):
            super().__init__()
            c_ = c1 // 2  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
            self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])

        def forward(self, x):
            x = self.cv1(x)
            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
elif os.getenv('RKNN_model_hack', '0') in ['npu_1', 'npu_2']:
    # TODO remove this hack when rknn-toolkit1/2 add this optimize rules
    class SPP(nn.Module):
        def __init__(self, c1, c2, k=(5, 9, 13)):
            super().__init__()
            c_ = c1 // 2  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
            self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
            for value in k:
                assert (value % 2 == 1) and (value != 1), f"kernel sizes in {k} must be odd and greater than 1 for the RKNN model hack"

        def forward(self, x):
            x = self.cv1(x)
            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                y = [x]
                for maxpool in self.m:
                    kernel_size = maxpool.kernel_size
                    m = x
                    for i in range(math.floor(kernel_size/2)):
                        m = torch.nn.functional.max_pool2d(m, 3, 1, 1)
                    y = [*y, m]
            return self.cv2(torch.cat(y, 1))
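
    # Added note: n stacked 3x3 stride-1 max-pools have a (2n+1)x(2n+1) receptive field, so the
    # k//2-deep stacks above reproduce SPP's 5/9/13 stride-1 max-pools using only 3x3 kernels,
    # avoiding the large-kernel pooling that the TODO above says rknn-toolkit1/2 cannot yet optimize.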

if os.getenv('RKNN_model_hack', '0') in ['0','npu_2']:
    class SPPF(nn.Module):
        # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
        def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
            super().__init__()
            c_ = c1 // 2  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c_ * 4, c2, 1, 1)
            self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

        def forward(self, x):
            x = self.cv1(x)
            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                y1 = self.m(x)
                y2 = self.m(y1)
                return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
elif os.getenv('RKNN_model_hack', '0') == 'npu_1':
    class SPPF(nn.Module):
        # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
        def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
            super().__init__()
            c_ = c1 // 2  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c_ * 4, c2, 1, 1)
            self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

        def forward(self, x):
            x = self.cv1(x)
            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                y1 = self.m(x)
                y2 = self.m(y1)

            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                y = [x]
                kernel_size = self.m.kernel_size
                _3x3_stack = math.floor(kernel_size/2)
                for i in range(3):
                    m = y[-1]
                    for _ in range(_3x3_stack):
                        m = torch.nn.functional.max_pool2d(m, 3, 1, 1)
                    y = [*y, m]
            return self.cv2(torch.cat(y, 1))


class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        # self.contract = Contract(gain=2)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
        # return self.conv(self.contract(x))
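
    # Added note: the four slices select the even/odd rows and columns of x (a space-to-depth
    # rearrangement) before the convolution; newer YOLOv5 releases use a 6x6 stride-2 Conv stem
    # instead, and export.py maps either stem onto surrogate_focus when RKNN_model_hack is 'npu_1'.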


class GhostConv(nn.Module):
    # Ghost Convolution https://github.com/huawei-noah/ghostnet
    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out, kernel, stride, groups
        super().__init__()
        c_ = c2 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, k, s, None, g, act)
        self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)

    def forward(self, x):
        y = self.cv1(x)
        return torch.cat([y, self.cv2(y)], 1)


class GhostBottleneck(nn.Module):
    # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
    def __init__(self, c1, c2, k=3, s=1):  # ch_in, ch_out, kernel, stride
        super().__init__()
        c_ = c2 // 2
        self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1),  # pw
                                  DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(),  # dw
                                  GhostConv(c_, c2, 1, 1, act=False))  # pw-linear
        self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
                                      Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()

    def forward(self, x):
        return self.conv(x) + self.shortcut(x)


class Contract(nn.Module):
    # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
    def __init__(self, gain=2):
        super().__init__()
        self.gain = gain

    def forward(self, x):
        b, c, h, w = x.size()  # assert (h % s == 0) and (w % s == 0), 'Indivisible gain'
        s = self.gain
        x = x.view(b, c, h // s, s, w // s, s)  # x(1,64,40,2,40,2)
        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # x(1,2,2,64,40,40)
        return x.view(b, c * s * s, h // s, w // s)  # x(1,256,40,40)


class Expand(nn.Module):
    # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
    def __init__(self, gain=2):
        super().__init__()
        self.gain = gain

    def forward(self, x):
        b, c, h, w = x.size()  # assert c % s ** 2 == 0, 'Indivisible gain'
        s = self.gain
        x = x.view(b, s, s, c // s ** 2, h, w)  # x(1,2,2,16,80,80)
        x = x.permute(0, 3, 4, 1, 5, 2).contiguous()  # x(1,16,80,2,80,2)
        return x.view(b, c // s ** 2, h * s, w * s)  # x(1,16,160,160)


class Concat(nn.Module):
    # Concatenate a list of tensors along dimension
    def __init__(self, dimension=1):
        super().__init__()
        self.d = dimension

    def forward(self, x):
        return torch.cat(x, self.d)


class DetectMultiBackend(nn.Module):
    # YOLOv5 MultiBackend class for python inference on various backends
    def __init__(self, weights='yolov5s.pt', device=None, dnn=False):
        # Usage:
        #   PyTorch:      weights = *.pt
        #   TorchScript:            *.torchscript
        #   CoreML:                 *.mlmodel
        #   TensorFlow:             *_saved_model
        #   TensorFlow:             *.pb
        #   TensorFlow Lite:        *.tflite
        #   ONNX Runtime:           *.onnx
        #   OpenCV DNN:             *.onnx with dnn=True
        #   TensorRT:               *.engine
        #   RKNN:                   *.rknn
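        #
        #   Example (illustrative sketch; assumes a yolov5s.pt checkpoint is available locally or auto-downloadable):
        #       model = DetectMultiBackend('yolov5s.pt', device=torch.device('cpu'))
        #       model.warmup(imgsz=(1, 3, 640, 640))
        #       pred = model(torch.zeros(1, 3, 640, 640))  # raw predictions (boxes, objectness, class scores)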
        from models.experimental import attempt_download, attempt_load  # scoped to avoid circular import

        super().__init__()
        w = str(weights[0] if isinstance(weights, list) else weights)
        suffix = Path(w).suffix.lower()
        suffixes = ['.pt', '.torchscript', '.onnx', '.engine', '.tflite', '.pb', '', '.mlmodel', '.rknn']
        check_suffix(w, suffixes)  # check weights have acceptable suffix
        pt, jit, onnx, engine, tflite, pb, saved_model, coreml, rknn_model = (suffix == x for x in suffixes)  # backend booleans
        stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults
        attempt_download(w)  # download if not local

        if jit:  # TorchScript
            LOGGER.info(f'Loading {w} for TorchScript inference...')
            extra_files = {'config.txt': ''}  # model metadata
            model = torch.jit.load(w, _extra_files=extra_files)
            if extra_files['config.txt']:
                d = json.loads(extra_files['config.txt'])  # extra_files dict
                stride, names = int(d['stride']), d['names']
        elif pt:  # PyTorch
            model = attempt_load(weights, map_location=device)
            stride = int(model.stride.max())  # model stride
            names = model.module.names if hasattr(model, 'module') else model.names  # get class names
            self.model = model  # explicitly assign for to(), cpu(), cuda(), half()
        elif coreml:  # CoreML
            LOGGER.info(f'Loading {w} for CoreML inference...')
            import coremltools as ct
            model = ct.models.MLModel(w)
        elif dnn:  # ONNX OpenCV DNN
            LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
            check_requirements(('opencv-python>=4.5.4',))
            net = cv2.dnn.readNetFromONNX(w)
        elif onnx:  # ONNX Runtime
            LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
            cuda = torch.cuda.is_available()
            check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
            import onnxruntime
            providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
            session = onnxruntime.InferenceSession(w, providers=providers)
        elif engine:  # TensorRT
            LOGGER.info(f'Loading {w} for TensorRT inference...')
            import tensorrt as trt  # https://developer.nvidia.com/nvidia-tensorrt-download
            Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
            logger = trt.Logger(trt.Logger.INFO)
            with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
                model = runtime.deserialize_cuda_engine(f.read())
            bindings = OrderedDict()
            for index in range(model.num_bindings):
                name = model.get_binding_name(index)
                dtype = trt.nptype(model.get_binding_dtype(index))
                shape = tuple(model.get_binding_shape(index))
                data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(device)
                bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr()))
            binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
            context = model.create_execution_context()
            batch_size = bindings['images'].shape[0]
        elif rknn_model:
            # TODO if post-process in model, then we can add code here.
            pass
        else:  # TensorFlow model (TFLite, pb, saved_model)
            if pb:  # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
                LOGGER.info(f'Loading {w} for TensorFlow *.pb inference...')
                import tensorflow as tf

                def wrap_frozen_graph(gd, inputs, outputs):
                    x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), [])  # wrapped
                    return x.prune(tf.nest.map_structure(x.graph.as_graph_element, inputs),
                                   tf.nest.map_structure(x.graph.as_graph_element, outputs))

                graph_def = tf.Graph().as_graph_def()
                graph_def.ParseFromString(open(w, 'rb').read())
                frozen_func = wrap_frozen_graph(gd=graph_def, inputs="x:0", outputs="Identity:0")
            elif saved_model:
                LOGGER.info(f'Loading {w} for TensorFlow saved_model inference...')
                import tensorflow as tf
                model = tf.keras.models.load_model(w)
            elif tflite:  # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
                if 'edgetpu' in w.lower():
                    LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
                    import tflite_runtime.interpreter as tfli
                    delegate = {'Linux': 'libedgetpu.so.1',  # install https://coral.ai/software/#edgetpu-runtime
                                'Darwin': 'libedgetpu.1.dylib',
                                'Windows': 'edgetpu.dll'}[platform.system()]
                    interpreter = tfli.Interpreter(model_path=w, experimental_delegates=[tfli.load_delegate(delegate)])
                else:
                    LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
                    import tensorflow as tf
                    interpreter = tf.lite.Interpreter(model_path=w)  # load TFLite model
                interpreter.allocate_tensors()  # allocate
                input_details = interpreter.get_input_details()  # inputs
                output_details = interpreter.get_output_details()  # outputs
        self.__dict__.update(locals())  # assign all variables to self

    def forward(self, im, augment=False, visualize=False, val=False):
        # YOLOv5 MultiBackend inference
        b, ch, h, w = im.shape  # batch, channel, height, width
        if self.pt or self.jit:  # PyTorch (eager or TorchScript)
            y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
            return y if val else y[0]
        elif self.coreml:  # CoreML
            im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)
            im = Image.fromarray((im[0] * 255).astype('uint8'))
            # im = im.resize((192, 320), Image.ANTIALIAS)
            y = self.model.predict({'image': im})  # coordinates are xywh normalized
            box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]])  # xyxy pixels
            conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(float)  # np.float is deprecated
            y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
        elif self.onnx:  # ONNX
            im = im.cpu().numpy()  # torch to numpy
            if self.dnn:  # ONNX OpenCV DNN
                self.net.setInput(im)
                y = self.net.forward()
            else:  # ONNX Runtime
                y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
        elif self.engine:  # TensorRT
            assert im.shape == self.bindings['images'].shape, (im.shape, self.bindings['images'].shape)
            self.binding_addrs['images'] = int(im.data_ptr())
            self.context.execute_v2(list(self.binding_addrs.values()))
            y = self.bindings['output'].data
        elif self.rknn_model:
            # TODO if post-process in model, then we can add code here.
            pass
        else:  # TensorFlow model (TFLite, pb, saved_model)
            im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)
            if self.pb:
                y = self.frozen_func(x=self.tf.constant(im)).numpy()
            elif self.saved_model:
                y = self.model(im, training=False).numpy()
            elif self.tflite:
                input, output = self.input_details[0], self.output_details[0]
                int8 = input['dtype'] == np.uint8  # is TFLite quantized uint8 model
                if int8:
                    scale, zero_point = input['quantization']
                    im = (im / scale + zero_point).astype(np.uint8)  # quantize (float -> uint8)
                self.interpreter.set_tensor(input['index'], im)
                self.interpreter.invoke()
                y = self.interpreter.get_tensor(output['index'])
                if int8:
                    scale, zero_point = output['quantization']
                    y = (y.astype(np.float32) - zero_point) * scale  # dequantize (uint8 -> float)
            y[..., 0] *= w  # x
            y[..., 1] *= h  # y
            y[..., 2] *= w  # w
            y[..., 3] *= h  # h
        y = torch.tensor(y) if isinstance(y, np.ndarray) else y
        return (y, []) if val else y

    def warmup(self, imgsz=(1, 3, 640, 640), half=False):
        # Warmup model by running inference once
        if self.pt or self.engine or self.onnx:  # warmup types
            if isinstance(self.device, torch.device) and self.device.type != 'cpu':  # only warmup GPU models
                im = torch.zeros(*imgsz).to(self.device).type(torch.half if half else torch.float)  # input image
                self.forward(im)  # warmup


class AutoShape(nn.Module):
    # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
    agnostic = False  # NMS class-agnostic
    multi_label = False  # NMS multiple labels per box
    classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
    max_det = 1000  # maximum number of detections per image
    amp = False  # Automatic Mixed Precision (AMP) inference

    def __init__(self, model):
        super().__init__()
        LOGGER.info('Adding AutoShape... ')
        copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=())  # copy attributes
        self.dmb = isinstance(model, DetectMultiBackend)  # DetectMultiBackend() instance
        self.pt = not self.dmb or model.pt  # PyTorch model
        self.model = model.eval()

    def _apply(self, fn):
        # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
        self = super()._apply(fn)
        if self.pt:
            m = self.model.model.model[-1] if self.dmb else self.model.model[-1]  # Detect()
            m.stride = fn(m.stride)
            m.grid = list(map(fn, m.grid))
            if isinstance(m.anchor_grid, list):
                m.anchor_grid = list(map(fn, m.anchor_grid))
        return self

    @torch.no_grad()
    def forward(self, imgs, size=640, augment=False, profile=False):
        # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
        #   file:       imgs = 'data/images/zidane.jpg'  # str or PosixPath
        #   URI:             = 'https://ultralytics.com/images/zidane.jpg'
        #   OpenCV:          = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
        #   PIL:             = Image.open('image.jpg') or ImageGrab.grab()  # HWC x(640,1280,3)
        #   numpy:           = np.zeros((640,1280,3))  # HWC
        #   torch:           = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
        #   multiple:        = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images
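        #   Example (illustrative sketch; assumes an AutoShape-wrapped model, e.g. from torch.hub):
        #       model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
        #       results = model('https://ultralytics.com/images/zidane.jpg')
        #       results.print()  # or results.show(), results.save(), results.pandas().xyxy[0]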

        t = [time_sync()]
        p = next(self.model.parameters()) if self.pt else torch.zeros(1)  # for device and type
        autocast = self.amp and (p.device.type != 'cpu')  # Automatic Mixed Precision (AMP) inference
        if isinstance(imgs, torch.Tensor):  # torch
            with amp.autocast(enabled=autocast):
                return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference

        # Pre-process
        n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs])  # number of images, list of images
        shape0, shape1, files = [], [], []  # image and inference shapes, filenames
        for i, im in enumerate(imgs):
            f = f'image{i}'  # filename
            if isinstance(im, (str, Path)):  # filename or uri
                im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
                im = np.asarray(exif_transpose(im))
            elif isinstance(im, Image.Image):  # PIL Image
                im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
            files.append(Path(f).with_suffix('.jpg').name)
            if im.shape[0] < 5:  # image in CHW
                im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)
            im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3)  # enforce 3ch input
            s = im.shape[:2]  # HWC
            shape0.append(s)  # image shape
            g = (size / max(s))  # gain
            shape1.append([y * g for y in s])
            imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im)  # update
        shape1 = [make_divisible(x, self.stride) for x in np.stack(shape1, 0).max(0)]  # inference shape
        x = [letterbox(im, new_shape=shape1 if self.pt else size, auto=False)[0] for im in imgs]  # pad
        x = np.stack(x, 0) if n > 1 else x[0][None]  # stack
        x = np.ascontiguousarray(x.transpose((0, 3, 1, 2)))  # BHWC to BCHW
        x = torch.from_numpy(x).to(p.device).type_as(p) / 255  # uint8 to fp16/32
        t.append(time_sync())

        with amp.autocast(enabled=autocast):
            # Inference
            y = self.model(x, augment, profile)  # forward
            t.append(time_sync())

            # Post-process
            y = non_max_suppression(y if self.dmb else y[0], self.conf, iou_thres=self.iou, classes=self.classes,
                                    agnostic=self.agnostic, multi_label=self.multi_label, max_det=self.max_det)  # NMS
            for i in range(n):
                scale_coords(shape1, y[i][:, :4], shape0[i])

            t.append(time_sync())
            return Detections(imgs, y, files, t, self.names, x.shape)


class Detections:
    # YOLOv5 detections class for inference results
    def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, shape=None):
        super().__init__()
        d = pred[0].device  # device
        gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs]  # normalizations
        self.imgs = imgs  # list of images as numpy arrays
        self.pred = pred  # list of tensors pred[0] = (xyxy, conf, cls)
        self.names = names  # class names
        self.files = files  # image filenames
        self.times = times  # profiling times
        self.xyxy = pred  # xyxy pixels
        self.xywh = [xyxy2xywh(x) for x in pred]  # xywh pixels
        self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)]  # xyxy normalized
        self.xywhn = [x / g for x, g in zip(self.xywh, gn)]  # xywh normalized
        self.n = len(self.pred)  # number of images (batch size)
        self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3))  # per-image times (ms)
        self.s = shape  # inference BCHW shape

    def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):
        crops = []
        for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
            s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} '  # string
            if pred.shape[0]:
                for c in pred[:, -1].unique():
                    n = (pred[:, -1] == c).sum()  # detections per class
                    s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string
                if show or save or render or crop:
                    annotator = Annotator(im, example=str(self.names))
                    for *box, conf, cls in reversed(pred):  # xyxy, confidence, class
                        label = f'{self.names[int(cls)]} {conf:.2f}'
                        if crop:
                            file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
                            crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label,
                                          'im': save_one_box(box, im, file=file, save=save)})
                        else:  # all others
                            annotator.box_label(box, label, color=colors(cls))
                    im = annotator.im
            else:
                s += '(no detections)'

            im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im  # from np
            if pprint:
                LOGGER.info(s.rstrip(', '))
            if show:
                im.show(self.files[i])  # show
            if save:
                f = self.files[i]
                im.save(save_dir / f)  # save
                if i == self.n - 1:
                    LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
            if render:
                self.imgs[i] = np.asarray(im)
        if crop:
            if save:
                LOGGER.info(f'Saved results to {save_dir}\n')
            return crops

    def print(self):
        self.display(pprint=True)  # print results
        LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' %
                    self.t)

    def show(self):
        self.display(show=True)  # show results

    def save(self, save_dir='runs/detect/exp'):
        save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True)  # increment save_dir
        self.display(save=True, save_dir=save_dir)  # save results

    def crop(self, save=True, save_dir='runs/detect/exp'):
        save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
        return self.display(crop=True, save=save, save_dir=save_dir)  # crop results

    def render(self):
        self.display(render=True)  # render results
        return self.imgs

    def pandas(self):
        # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
        new = copy(self)  # return copy
        ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name'  # xyxy columns
        cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name'  # xywh columns
        for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
            a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)]  # update
            setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
        return new

    def tolist(self):
        # return a list of Detections objects, i.e. 'for result in results.tolist():'
        r = range(self.n)  # iterable
        x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
        # for d in x:
        #    for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
        #        setattr(d, k, getattr(d, k)[0])  # pop out of list
        return x

    def __len__(self):
        return self.n


class Classify(nn.Module):
    # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.aap = nn.AdaptiveAvgPool2d(1)  # to x(b,c1,1,1)
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g)  # to x(b,c2,1,1)
        self.flat = nn.Flatten()

    def forward(self, x):
        z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1)  # cat if list
        return self.flat(self.conv(z))  # flatten to x(b,c2)


================================================
FILE: models/common_rk_plug_in.py
================================================
# This file contains modules common to various models

import torch
import torch.nn as nn
from models.common import Conv


class surrogate_focus(nn.Module):
    # surrogate_focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(surrogate_focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)

        with torch.no_grad():
            self.conv1 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))
            self.conv1.weight[:, :, 0, 0] = 1
            self.conv1.weight[:, :, 0, 1] = 0
            self.conv1.weight[:, :, 1, 0] = 0
            self.conv1.weight[:, :, 1, 1] = 0

            self.conv2 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))
            self.conv2.weight[:, :, 0, 0] = 0
            self.conv2.weight[:, :, 0, 1] = 0
            self.conv2.weight[:, :, 1, 0] = 1
            self.conv2.weight[:, :, 1, 1] = 0

            self.conv3 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))
            self.conv3.weight[:, :, 0, 0] = 0
            self.conv3.weight[:, :, 0, 1] = 1
            self.conv3.weight[:, :, 1, 0] = 0
            self.conv3.weight[:, :, 1, 1] = 0

            self.conv4 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))
            self.conv4.weight[:, :, 0, 0] = 0
            self.conv4.weight[:, :, 0, 1] = 0
            self.conv4.weight[:, :, 1, 0] = 0
            self.conv4.weight[:, :, 1, 1] = 1

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        return self.conv(torch.cat([self.conv1(x), self.conv2(x), self.conv3(x), self.conv4(x)], 1))
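    # Note (illustrative, not part of the original module): each fixed 2x2 / stride-2 depthwise conv above
    # selects exactly one pixel of every 2x2 block (top-left, bottom-left, top-right, bottom-right), so the
    # concatenation reproduces the Focus slicing (x[..., ::2, ::2], x[..., 1::2, ::2], ...) using only conv
    # ops, which RKNN toolchains typically convert more reliably than strided tensor slicing.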


class preprocess_conv_layer(nn.Module):
    """docstring for preprocess_conv_layer"""
    #   input_module 为输入模型,即为想要导出模型
    #   mean_value 的值可以是 [m1, m2, m3] 或 常数m
    #   std_value 的值可以是 [s1, s2, s3] 或 常数s
    #   BGR2RGB的操作默认为首先执行,既替代的原有操作顺序为
    #       BGR2RGB -> minus mean -> minus std (与rknn config 设置保持一致) -> nhwc2nchw
    #
    #   使用示例-伪代码:
    #       from add_preprocess_conv_layer import preprocess_conv_layer
    #       model_A = create_model()
    #       model_output = preprocess_co_nv_layer(model_A, mean_value, std_value, BGR2RGB)
    #       onnx_export(model_output)
    #
    #   量化时:
    #       rknn.config的中 channel_mean_value 、reorder_channel 均不赋值。
    #
    #   部署代码:
    #       rknn_input 的属性
    #           pass_through = 1
    #
    #   另外:
    #       由于加入permute操作,c端输入为opencv mat(hwc格式)即可,无需在外部将hwc改成chw格式。
    #

    def __init__(self, input_module, mean_value, std_value, BGR2RGB=False):
        super(preprocess_conv_layer, self).__init__()
        if isinstance(mean_value, int):
            mean_value = [mean_value for i in range(3)]
        if isinstance(std_value, int):
            std_value = [std_value for i in range(3)]

        assert len(mean_value) == 3, 'mean_value should be an int or a list with 3 elements'
        assert len(std_value) == 3, 'std_value should be an int or a list with 3 elements'

        self.input_module = input_module

        with torch.no_grad():
            self.conv1 = nn.Conv2d(3, 3, (1, 1), groups=1, bias=True, stride=(1, 1))

            if BGR2RGB is False:
                self.conv1.weight[:, :, :, :] = 0
                self.conv1.weight[0, 0, :, :] = 1/std_value[0]
                self.conv1.weight[1, 1, :, :] = 1/std_value[1]
                self.conv1.weight[2, 2, :, :] = 1/std_value[2]
            elif BGR2RGB is True:
                self.conv1.weight[:, :, :, :] = 0
                self.conv1.weight[0, 2, :, :] = 1/std_value[0]
                self.conv1.weight[1, 1, :, :] = 1/std_value[1]
                self.conv1.weight[2, 0, :, :] = 1/std_value[2]

            self.conv1.bias[0] = -mean_value[0]/std_value[0]
            self.conv1.bias[1] = -mean_value[1]/std_value[1]
            self.conv1.bias[2] = -mean_value[2]/std_value[2]

        self.conv1.eval()

    def forward(self, x):
        x = x.permute(0, 3, 1, 2)  # NHWC -> NCHW, apply for rknn_pass_through
        x = self.conv1(x)
        return self.input_module(x)
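
# Usage sketch (illustrative; mirrors the pseudo-code in the docstring above, file names are examples only):
#     from models.experimental import attempt_load
#     from models.common_rk_plug_in import preprocess_conv_layer
#     base = attempt_load('yolov5s.pt', map_location='cpu')
#     wrapped = preprocess_conv_layer(base, mean_value=0, std_value=255, BGR2RGB=True)  # folds /255 and BGR->RGB
#     torch.onnx.export(wrapped, torch.zeros(1, 640, 640, 3), 'yolov5s_preproc.onnx', opset_version=12)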

================================================
FILE: models/experimental.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Experimental modules
"""
import math

import numpy as np
import torch
import torch.nn as nn

from models.common import Conv
from utils.downloads import attempt_download


class CrossConv(nn.Module):
    # Cross Convolution Downsample
    def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
        # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, (1, k), (1, s))
        self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class Sum(nn.Module):
    # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
    def __init__(self, n, weight=False):  # n: number of inputs
        super().__init__()
        self.weight = weight  # apply weights boolean
        self.iter = range(n - 1)  # iter object
        if weight:
            self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True)  # layer weights

    def forward(self, x):
        y = x[0]  # no weight
        if self.weight:
            w = torch.sigmoid(self.w) * 2
            for i in self.iter:
                y = y + x[i + 1] * w[i]
        else:
            for i in self.iter:
                y = y + x[i + 1]
        return y


class MixConv2d(nn.Module):
    # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595
    def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):  # ch_in, ch_out, kernel, stride, ch_strategy
        super().__init__()
        n = len(k)  # number of convolutions
        if equal_ch:  # equal c_ per group
            i = torch.linspace(0, n - 1E-6, c2).floor()  # c2 indices
            c_ = [(i == g).sum() for g in range(n)]  # intermediate channels
        else:  # equal weight.numel() per group
            b = [c2] + [0] * n
            a = np.eye(n + 1, n, k=-1)
            a -= np.roll(a, 1, axis=1)
            a *= np.array(k) ** 2
            a[0] = 1
            c_ = np.linalg.lstsq(a, b, rcond=None)[0].round()  # solve for equal weight indices, ax = b

        self.m = nn.ModuleList(
            [nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))


class Ensemble(nn.ModuleList):
    # Ensemble of models
    def __init__(self):
        super().__init__()

    def forward(self, x, augment=False, profile=False, visualize=False):
        y = []
        for module in self:
            y.append(module(x, augment, profile, visualize)[0])
        # y = torch.stack(y).max(0)[0]  # max ensemble
        # y = torch.stack(y).mean(0)  # mean ensemble
        y = torch.cat(y, 1)  # nms ensemble
        return y, None  # inference, train output


def attempt_load(weights, map_location=None, inplace=True, fuse=True):
    from models.yolo import Detect, Model

    # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
    model = Ensemble()
    for w in weights if isinstance(weights, list) else [weights]:
        ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
        if fuse:
            model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model
        else:
            model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().eval())  # without layer fuse

    # Compatibility updates
    for m in model.modules():
        if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:
            m.inplace = inplace  # pytorch 1.7.0 compatibility
            if type(m) is Detect:
                if not isinstance(m.anchor_grid, list):  # new Detect Layer compatibility
                    delattr(m, 'anchor_grid')
                    setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
        elif type(m) is Conv:
            m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility

    if len(model) == 1:
        return model[-1]  # return model
    else:
        print(f'Ensemble created with {weights}\n')
        for k in ['names']:
            setattr(model, k, getattr(model[-1], k))
        model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride  # max stride
        return model  # return ensemble
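
# Usage sketch (illustrative; assumes the listed weight files exist locally or can be auto-downloaded):
#     model = attempt_load('yolov5s.pt', map_location='cpu')                      # single fused FP32 model
#     ensemble = attempt_load(['yolov5s.pt', 'yolov5m.pt'], map_location='cpu')   # NMS ensemble of models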


================================================
FILE: models/hub/anchors.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Default anchors for COCO data


# P5 -------------------------------------------------------------------------------------------------------------------
# P5-640:
anchors_p5_640:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32


# P6 -------------------------------------------------------------------------------------------------------------------
# P6-640:  thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11,  21,19,  17,41,  43,32,  39,70,  86,64,  65,131,  134,130,  120,265,  282,180,  247,354,  512,387
anchors_p6_640:
  - [9,11,  21,19,  17,41]  # P3/8
  - [43,32,  39,70,  86,64]  # P4/16
  - [65,131,  134,130,  120,265]  # P5/32
  - [282,180,  247,354,  512,387]  # P6/64

# P6-1280:  thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27,  44,40,  38,94,  96,68,  86,152,  180,137,  140,301,  303,264,  238,542,  436,615,  739,380,  925,792
anchors_p6_1280:
  - [19,27,  44,40,  38,94]  # P3/8
  - [96,68,  86,152,  180,137]  # P4/16
  - [140,301,  303,264,  238,542]  # P5/32
  - [436,615,  739,380,  925,792]  # P6/64

# P6-1920:  thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41,  67,59,  57,141,  144,103,  129,227,  270,205,  209,452,  455,396,  358,812,  653,922,  1109,570,  1387,1187
anchors_p6_1920:
  - [28,41,  67,59,  57,141]  # P3/8
  - [144,103,  129,227,  270,205]  # P4/16
  - [209,452,  455,396,  358,812]  # P5/32
  - [653,922,  1109,570,  1387,1187]  # P6/64


# P7 -------------------------------------------------------------------------------------------------------------------
# P7-640:  thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11,  13,30,  29,20,  30,46,  61,38,  39,92,  78,80,  146,66,  79,163,  149,150,  321,143,  157,303,  257,402,  359,290,  524,372
anchors_p7_640:
  - [11,11,  13,30,  2
SYMBOL INDEX (474 symbols across 28 files)

FILE: detect.py
  function run (line 39) | def run(weights=ROOT / 'yolov5s.pt',  # model.pt path(s)
  function parse_opt (line 203) | def parse_opt():
  function main (line 236) | def main(opt):

FILE: export.py
  function export_torchscript (line 75) | def export_torchscript(model, im, file, optimize, prefix=colorstr('Torch...
  function export_onnx (line 91) | def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix...
  function export_coreml (line 135) | def export_coreml(model, im, file, prefix=colorstr('CoreML:')):
  function export_saved_model (line 157) | def export_saved_model(model, im, file, dynamic,
  function export_pb (line 189) | def export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDe...
  function export_tflite (line 209) | def export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colo...
  function export_tfjs (line 242) | def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):
  function export_engine (line 278) | def export_engine(model, im, file, train, half, simplify, workspace=4, v...
  function run (line 325) | def run(data=ROOT / 'data/coco128.yaml',  # 'dataset.yaml path'
  function parse_opt (line 467) | def parse_opt():
  function main (line 499) | def main(opt):

FILE: hubconf.py
  function _create (line 14) | def _create(name, pretrained=True, channels=3, classes=80, autoshape=Tru...
  function custom (line 68) | def custom(path='path/to/model.pt', autoshape=True, verbose=True, device...
  function yolov5n (line 73) | def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, ver...
  function yolov5s (line 78) | def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, ver...
  function yolov5m (line 83) | def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, ver...
  function yolov5l (line 88) | def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, ver...
  function yolov5x (line 93) | def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, ver...
  function yolov5n6 (line 98) | def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, ve...
  function yolov5s6 (line 103) | def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, ve...
  function yolov5m6 (line 108) | def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, ve...
  function yolov5l6 (line 113) | def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, ve...
  function yolov5x6 (line 118) | def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, ve...

FILE: models/common.py
  function autopad (line 30) | def autopad(k, p=None):  # kernel, padding
  class Conv (line 37) | class Conv(nn.Module):
    method __init__ (line 39) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in,...
    method forward (line 45) | def forward(self, x):
    method forward_fuse (line 48) | def forward_fuse(self, x):
  class DWConv (line 52) | class DWConv(Conv):
    method __init__ (line 54) | def __init__(self, c1, c2, k=1, s=1, act=True):  # ch_in, ch_out, kern...
  class TransformerLayer (line 58) | class TransformerLayer(nn.Module):
    method __init__ (line 60) | def __init__(self, c, num_heads):
    method forward (line 69) | def forward(self, x):
  class TransformerBlock (line 75) | class TransformerBlock(nn.Module):
    method __init__ (line 77) | def __init__(self, c1, c2, num_heads, num_layers):
    method forward (line 86) | def forward(self, x):
  class Bottleneck (line 94) | class Bottleneck(nn.Module):
    method __init__ (line 96) | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_ou...
    method forward (line 103) | def forward(self, x):
  class BottleneckCSP (line 107) | class BottleneckCSP(nn.Module):
    method __init__ (line 109) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ...
    method forward (line 120) | def forward(self, x):
  class C3 (line 126) | class C3(nn.Module):
    method __init__ (line 128) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ...
    method forward (line 137) | def forward(self, x):
  class C3TR (line 141) | class C3TR(C3):
    method __init__ (line 143) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
  class C3SPP (line 149) | class C3SPP(C3):
    method __init__ (line 151) | def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
  class C3Ghost (line 157) | class C3Ghost(C3):
    method __init__ (line 159) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
  class SPP (line 165) | class SPP(nn.Module):
    method __init__ (line 167) | def __init__(self, c1, c2, k=(5, 9, 13)):
    method forward (line 174) | def forward(self, x):
  class SPP (line 181) | class SPP(nn.Module):
    method __init__ (line 182) | def __init__(self, c1, c2, k=(5, 9, 13)):
    method forward (line 191) | def forward(self, x):
  class SPPF (line 205) | class SPPF(nn.Module):
    method __init__ (line 207) | def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
    method forward (line 214) | def forward(self, x):
  class SPPF (line 222) | class SPPF(nn.Module):
    method __init__ (line 224) | def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
    method forward (line 231) | def forward(self, x):
  class Focus (line 251) | class Focus(nn.Module):
    method __init__ (line 253) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in,...
    method forward (line 258) | def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
  class GhostConv (line 263) | class GhostConv(nn.Module):
    method __init__ (line 265) | def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out,...
    method forward (line 271) | def forward(self, x):
  class GhostBottleneck (line 276) | class GhostBottleneck(nn.Module):
    method __init__ (line 278) | def __init__(self, c1, c2, k=3, s=1):  # ch_in, ch_out, kernel, stride
    method forward (line 287) | def forward(self, x):
  class Contract (line 291) | class Contract(nn.Module):
    method __init__ (line 293) | def __init__(self, gain=2):
    method forward (line 297) | def forward(self, x):
  class Expand (line 305) | class Expand(nn.Module):
    method __init__ (line 307) | def __init__(self, gain=2):
    method forward (line 311) | def forward(self, x):
  class Concat (line 319) | class Concat(nn.Module):
    method __init__ (line 321) | def __init__(self, dimension=1):
    method forward (line 325) | def forward(self, x):
  class DetectMultiBackend (line 329) | class DetectMultiBackend(nn.Module):
    method __init__ (line 331) | def __init__(self, weights='yolov5s.pt', device=None, dnn=False):
    method forward (line 435) | def forward(self, im, augment=False, visualize=False, val=False):
    method warmup (line 489) | def warmup(self, imgsz=(1, 3, 640, 640), half=False):
  class AutoShape (line 497) | class AutoShape(nn.Module):
    method __init__ (line 507) | def __init__(self, model):
    method _apply (line 515) | def _apply(self, fn):
    method forward (line 527) | def forward(self, imgs, size=640, augment=False, profile=False):
  class Detections (line 585) | class Detections:
    method __init__ (line 587) | def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, ...
    method display (line 604) | def display(self, pprint=False, show=False, save=False, crop=False, re...
    method print (line 643) | def print(self):
    method show (line 648) | def show(self):
    method save (line 651) | def save(self, save_dir='runs/detect/exp'):
    method crop (line 655) | def crop(self, save=True, save_dir='runs/detect/exp'):
    method render (line 659) | def render(self):
    method pandas (line 663) | def pandas(self):
    method tolist (line 673) | def tolist(self):
    method __len__ (line 682) | def __len__(self):
  class Classify (line 686) | class Classify(nn.Module):
    method __init__ (line 688) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, k...
    method forward (line 694) | def forward(self, x):

FILE: models/common_rk_plug_in.py
  class surrogate_focus (line 8) | class surrogate_focus(nn.Module):
    method __init__ (line 10) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in,...
    method forward (line 39) | def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
  class preprocess_conv_layer (line 43) | class preprocess_conv_layer(nn.Module):
    method __init__ (line 68) | def __init__(self, input_module, mean_value, std_value, BGR2RGB=False):
    method forward (line 100) | def forward(self, x):

FILE: models/experimental.py
  class CrossConv (line 15) | class CrossConv(nn.Module):
    method __init__ (line 17) | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
    method forward (line 25) | def forward(self, x):
  class Sum (line 29) | class Sum(nn.Module):
    method __init__ (line 31) | def __init__(self, n, weight=False):  # n: number of inputs
    method forward (line 38) | def forward(self, x):
  class MixConv2d (line 50) | class MixConv2d(nn.Module):
    method __init__ (line 52) | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):  # ch_in, ch...
    method forward (line 71) | def forward(self, x):
  class Ensemble (line 75) | class Ensemble(nn.ModuleList):
    method __init__ (line 77) | def __init__(self):
    method forward (line 80) | def forward(self, x, augment=False, profile=False, visualize=False):
  function attempt_load (line 90) | def attempt_load(weights, map_location=None, inplace=True, fuse=True):

FILE: models/tf.py
  class TFBN (line 37) | class TFBN(keras.layers.Layer):
    method __init__ (line 39) | def __init__(self, w=None):
    method call (line 48) | def call(self, inputs):
  class TFPad (line 52) | class TFPad(keras.layers.Layer):
    method __init__ (line 53) | def __init__(self, pad):
    method call (line 57) | def call(self, inputs):
  class TFConv (line 61) | class TFConv(keras.layers.Layer):
    method __init__ (line 63) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
    method call (line 88) | def call(self, inputs):
  class TFFocus (line 92) | class TFFocus(keras.layers.Layer):
    method __init__ (line 94) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
    method call (line 99) | def call(self, inputs):  # x(b,w,h,c) -> y(b,w/2,h/2,4c)
  class TFBottleneck (line 107) | class TFBottleneck(keras.layers.Layer):
    method __init__ (line 109) | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None):  # ch_i...
    method call (line 116) | def call(self, inputs):
  class TFConv2d (line 120) | class TFConv2d(keras.layers.Layer):
    method __init__ (line 122) | def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):
    method call (line 130) | def call(self, inputs):
  class TFBottleneckCSP (line 134) | class TFBottleneckCSP(keras.layers.Layer):
    method __init__ (line 136) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
    method call (line 148) | def call(self, inputs):
  class TFC3 (line 154) | class TFC3(keras.layers.Layer):
    method __init__ (line 156) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
    method call (line 165) | def call(self, inputs):
  class TFSPP (line 169) | class TFSPP(keras.layers.Layer):
    method __init__ (line 171) | def __init__(self, c1, c2, k=(5, 9, 13), w=None):
    method call (line 178) | def call(self, inputs):
  class TFSPPF (line 183) | class TFSPPF(keras.layers.Layer):
    method __init__ (line 185) | def __init__(self, c1, c2, k=5, w=None):
    method call (line 192) | def call(self, inputs):
  class TFDetect (line 199) | class TFDetect(keras.layers.Layer):
    method __init__ (line 200) | def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None)...
    method call (line 218) | def call(self, inputs):
    method _make_grid (line 240) | def _make_grid(nx=20, ny=20):
  class TFUpsample (line 247) | class TFUpsample(keras.layers.Layer):
    method __init__ (line 248) | def __init__(self, size, scale_factor, mode, w=None):  # warning: all ...
    method call (line 257) | def call(self, inputs):
  class TFConcat (line 261) | class TFConcat(keras.layers.Layer):
    method __init__ (line 262) | def __init__(self, dimension=1, w=None):
    method call (line 267) | def call(self, inputs):
  function parse_model (line 271) | def parse_model(d, ch, model, imgsz):  # model_dict, input_channels(3)
  class TFModel (line 323) | class TFModel:
    method __init__ (line 324) | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgs...
    method predict (line 340) | def predict(self, inputs, tf_nms=False, agnostic_nms=False, topk_per_c...
    method _xywh2xyxy (line 374) | def _xywh2xyxy(xywh):
  class AgnosticNMS (line 380) | class AgnosticNMS(keras.layers.Layer):
    method call (line 382) | def call(self, input, topk_all, iou_thres, conf_thres):
    method _nms (line 389) | def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25):  # agnosti...
  function representative_dataset_gen (line 411) | def representative_dataset_gen(dataset, ncalib=100):
  function run (line 422) | def run(weights=ROOT / 'yolov5s.pt',  # weights path
  function parse_opt (line 446) | def parse_opt():
  function main (line 458) | def main(opt):

FILE: models/yolo.py
  class Detect (line 33) | class Detect(nn.Module):
    method __init__ (line 37) | def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detecti...
    method forward (line 49) | def forward(self, x):
    method _make_grid (line 79) | def _make_grid(self, nx=20, ny=20, i=0):
  class Model (line 91) | class Model(nn.Module):
    method __init__ (line 92) | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None):  ...
    method forward (line 130) | def forward(self, x, augment=False, profile=False, visualize=False):
    method _forward_augment (line 135) | def _forward_augment(self, x):
    method _forward_once (line 149) | def _forward_once(self, x, profile=False, visualize=False):
    method _descale_pred (line 162) | def _descale_pred(self, p, flips, scale, img_size):
    method _clip_augmented (line 179) | def _clip_augmented(self, y):
    method _profile_one_layer (line 190) | def _profile_one_layer(self, m, x, dt):
    method _initialize_biases (line 203) | def _initialize_biases(self, cf=None):  # initialize biases into Detec...
    method _print_biases (line 213) | def _print_biases(self):
    method fuse (line 225) | def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
    method info (line 235) | def info(self, verbose=False, img_size=640):  # print model information
    method _apply (line 238) | def _apply(self, fn):
  function parse_model (line 250) | def parse_model(d, ch):  # model_dict, input_channels(3)

FILE: train.py
  function train (line 58) | def train(hyp,  # path/to/hyp.yaml or hyp dictionary
  function parse_opt (line 441) | def parse_opt(known=False):
  function main (line 486) | def main(opt, callbacks=Callbacks()):
  function run (line 616) | def run(**kwargs):

FILE: utils/__init__.py
  function notebook_init (line 7) | def notebook_init(verbose=True):

FILE: utils/activations.py
  class SiLU (line 12) | class SiLU(nn.Module):  # export-friendly version of nn.SiLU()
    method forward (line 14) | def forward(x):
  class Hardswish (line 18) | class Hardswish(nn.Module):  # export-friendly version of nn.Hardswish()
    method forward (line 20) | def forward(x):
  class Mish (line 26) | class Mish(nn.Module):
    method forward (line 28) | def forward(x):
  class MemoryEfficientMish (line 32) | class MemoryEfficientMish(nn.Module):
    class F (line 33) | class F(torch.autograd.Function):
      method forward (line 35) | def forward(ctx, x):
      method backward (line 40) | def backward(ctx, grad_output):
    method forward (line 46) | def forward(self, x):
  class FReLU (line 51) | class FReLU(nn.Module):
    method __init__ (line 52) | def __init__(self, c1, k=3):  # ch_in, kernel
    method forward (line 57) | def forward(self, x):
  class AconC (line 62) | class AconC(nn.Module):
    method __init__ (line 68) | def __init__(self, c1):
    method forward (line 74) | def forward(self, x):
  class MetaAconC (line 79) | class MetaAconC(nn.Module):
    method __init__ (line 85) | def __init__(self, c1, k=1, s=1, r=16):  # ch_in, kernel, stride, r
    method forward (line 95) | def forward(self, x):

FILE: utils/augmentations.py
  class Albumentations (line 16) | class Albumentations:
    method __init__ (line 18) | def __init__(self):
    method __call__ (line 40) | def __call__(self, im, labels, p=1.0):
  function augment_hsv (line 47) | def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
  function hist_equalize (line 63) | def hist_equalize(im, clahe=True, bgr=False):
  function replicate (line 74) | def replicate(im, labels):
  function letterbox (line 91) | def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True...
  function random_perspective (line 124) | def random_perspective(im, targets=(), segments=(), degrees=10, translat...
  function copy_paste (line 213) | def copy_paste(im, labels, segments, p=0.5):
  function cutout (line 237) | def cutout(im, labels, p=0.5):
  function mixup (line 264) | def mixup(im, labels, im2, labels2):
  function box_candidates (line 272) | def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e...

FILE: utils/autoanchor.py
  function check_anchor_order (line 18) | def check_anchor_order(m):
  function check_anchors (line 28) | def check_anchors(dataset, model, thr=4.0, imgsz=640):
  function kmean_anchors (line 65) | def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=...

FILE: utils/autobatch.py
  function check_train_batch_size (line 16) | def check_train_batch_size(model, imgsz=640):
  function autobatch (line 22) | def autobatch(model, imgsz=640, fraction=0.9, batch_size=16):

FILE: utils/callbacks.py
  class Callbacks (line 7) | class Callbacks:
    method __init__ (line 12) | def __init__(self):
    method register_action (line 39) | def register_action(self, hook, name='', callback=None):
    method get_registered_actions (line 52) | def get_registered_actions(self, hook=None):
    method run (line 64) | def run(self, hook, *args, **kwargs):

FILE: utils/datasets.py
  function get_hash (line 45) | def get_hash(paths):
  function exif_size (line 53) | def exif_size(img):
  function exif_transpose (line 68) | def exif_transpose(image):
  function create_dataloader (line 94) | def create_dataloader(path, imgsz, batch_size, stride, single_cls=False,...
  class InfiniteDataLoader (line 124) | class InfiniteDataLoader(dataloader.DataLoader):
    method __init__ (line 130) | def __init__(self, *args, **kwargs):
    method __len__ (line 135) | def __len__(self):
    method __iter__ (line 138) | def __iter__(self):
  class _RepeatSampler (line 143) | class _RepeatSampler:
    method __init__ (line 150) | def __init__(self, sampler):
    method __iter__ (line 153) | def __iter__(self):
  class LoadImages (line 158) | class LoadImages:
    method __init__ (line 160) | def __init__(self, path, img_size=640, stride=32, auto=True):
    method __iter__ (line 189) | def __iter__(self):
    method __next__ (line 193) | def __next__(self):
    method new_video (line 231) | def new_video(self, path):
    method __len__ (line 236) | def __len__(self):
  class LoadWebcam (line 240) | class LoadWebcam:  # for inference
    method __init__ (line 242) | def __init__(self, pipe='0', img_size=640, stride=32):
    method __iter__ (line 249) | def __iter__(self):
    method __next__ (line 253) | def __next__(self):
    method __len__ (line 278) | def __len__(self):
  class LoadStreams (line 282) | class LoadStreams:
    method __init__ (line 284) | def __init__(self, sources='streams.txt', img_size=640, stride=32, aut...
    method update (line 326) | def update(self, i, cap, stream):
    method __iter__ (line 343) | def __iter__(self):
    method __next__ (line 347) | def __next__(self):
    method __len__ (line 366) | def __len__(self):
  function img2label_paths (line 370) | def img2label_paths(img_paths):
  class LoadImagesAndLabels (line 376) | class LoadImagesAndLabels(Dataset):
    method __init__ (line 380) | def __init__(self, path, img_size=640, batch_size=16, augment=False, h...
    method cache_labels (line 507) | def cache_labels(self, path=Path('./labels.cache'), prefix=''):
    method __len__ (line 543) | def __len__(self):
    method __getitem__ (line 552) | def __getitem__(self, index):
    method collate_fn (line 626) | def collate_fn(batch):
    method collate_fn4 (line 633) | def collate_fn4(batch):
  function load_image (line 660) | def load_image(self, i):
  function load_mosaic (line 681) | def load_mosaic(self, index):
  function load_mosaic9 (line 738) | def load_mosaic9(self, index):
  function create_folder (line 812) | def create_folder(path='./new'):
  function flatten_recursive (line 819) | def flatten_recursive(path='../datasets/coco128'):
  function extract_boxes (line 827) | def extract_boxes(path='../datasets/coco128'):  # from utils.datasets im...
  function autosplit (line 861) | def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0)...
  function verify_image_label (line 885) | def verify_image_label(args):
  function dataset_stats (line 937) | def dataset_stats(path='coco128.yaml', autodownload=False, verbose=False...
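
  Usage sketch (hedged): iterating a folder of images with LoadImages for inference-style preprocessing. 'data/images' is a placeholder path, and because the exact tuple yielded per item differs slightly across YOLOv5 revisions, the sketch indexes the first three fields (path, letterboxed CHW image, original BGR image) instead of unpacking a fixed count.

    import numpy as np
    import torch
    from utils.datasets import LoadImages

    dataset = LoadImages('data/images', img_size=640, stride=32, auto=True)
    for item in dataset:
        path, img, im0 = item[0], item[1], item[2]
        im = torch.from_numpy(np.ascontiguousarray(img)).float() / 255.0  # CHW, RGB, letterboxed
        im = im.unsqueeze(0)                                              # add batch dimension
        print(path, tuple(im.shape), im0.shape)                           # im0: original BGR image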

FILE: utils/downloads.py
  function gsutil_getsize (line 18) | def gsutil_getsize(url=''):
  function safe_download (line 24) | def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
  function attempt_download (line 43) | def attempt_download(file, repo='ultralytics/yolov5'):  # from utils.dow...
  function gdrive_download (line 80) | def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zi...
  function get_token (line 115) | def get_token(cookie="./cookie"):
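
  Usage sketch (hedged): attempt_download returns the local path if the file already exists and otherwise tries to fetch the asset from the given repo's GitHub releases; 'yolov5s.pt' is just an example asset name.

    from utils.downloads import attempt_download

    weights = attempt_download('yolov5s.pt')  # no-op if the file is already present locally
    print(weights)                            # path to the (possibly downloaded) checkpoint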

FILE: utils/flask_rest_api/restapi.py
  function predict (line 17) | def predict():
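
  Usage sketch (hedged): a client for the Flask endpoint above, modelled on the repo's example_request.py. The route, port, and form field name are assumptions based on the default restapi.py configuration, and 'zidane.jpg' is a placeholder test image.

    import pprint
    import requests

    DETECTION_URL = 'http://localhost:5000/v1/object-detection/yolov5s'  # assumed default route
    with open('zidane.jpg', 'rb') as f:
        response = requests.post(DETECTION_URL, files={'image': f}).json()
    pprint.pprint(response)  # per-detection class, confidence and box coordinates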

FILE: utils/general.py
  function set_logging (line 47) | def set_logging(name=None, verbose=True):
  class Profile (line 57) | class Profile(contextlib.ContextDecorator):
    method __enter__ (line 59) | def __enter__(self):
    method __exit__ (line 62) | def __exit__(self, type, value, traceback):
  class Timeout (line 66) | class Timeout(contextlib.ContextDecorator):
    method __init__ (line 68) | def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors...
    method _timeout_handler (line 73) | def _timeout_handler(self, signum, frame):
    method __enter__ (line 76) | def __enter__(self):
    method __exit__ (line 80) | def __exit__(self, exc_type, exc_val, exc_tb):
  class WorkingDirectory (line 86) | class WorkingDirectory(contextlib.ContextDecorator):
    method __init__ (line 88) | def __init__(self, new_dir):
    method __enter__ (line 92) | def __enter__(self):
    method __exit__ (line 95) | def __exit__(self, exc_type, exc_val, exc_tb):
  function try_except (line 99) | def try_except(func):
  function methods (line 110) | def methods(instance):
  function print_args (line 115) | def print_args(name, opt):
  function init_seeds (line 120) | def init_seeds(seed=0):
  function intersect_dicts (line 130) | def intersect_dicts(da, db, exclude=()):
  function get_latest_run (line 135) | def get_latest_run(search_dir='.'):
  function user_config_dir (line 141) | def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'):
  function is_writeable (line 154) | def is_writeable(dir, test=False):
  function is_docker (line 169) | def is_docker():
  function is_colab (line 174) | def is_colab():
  function is_pip (line 183) | def is_pip():
  function is_ascii (line 188) | def is_ascii(s=''):
  function is_chinese (line 194) | def is_chinese(s='人工智能'):
  function emojis (line 199) | def emojis(str=''):
  function file_size (line 204) | def file_size(path):
  function check_online (line 215) | def check_online():
  function check_git_status (line 227) | def check_git_status():
  function check_python (line 246) | def check_python(minimum='3.6.2'):
  function check_version (line 251) | def check_version(current='0.0.0', minimum='0.0.0', name='version ', pin...
  function check_requirements (line 262) | def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(...
  function check_img_size (line 298) | def check_img_size(imgsz, s=32, floor=0):
  function check_imshow (line 309) | def check_imshow():
  function check_suffix (line 324) | def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''):
  function check_yaml (line 335) | def check_yaml(file, suffix=('.yaml', '.yml')):
  function check_file (line 340) | def check_file(file, suffix=''):
  function check_dataset (line 365) | def check_dataset(data, autodownload=True):
  function url2file (line 417) | def url2file(url):
  function download (line 424) | def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1):
  function make_divisible (line 458) | def make_divisible(x, divisor):
  function clean_str (line 465) | def clean_str(s):
  function one_cycle (line 470) | def one_cycle(y1=0.0, y2=1.0, steps=100):
  function colorstr (line 475) | def colorstr(*input):
  function labels_to_class_weights (line 500) | def labels_to_class_weights(labels, nc=80):
  function labels_to_image_weights (line 519) | def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
  function coco80_to_coco91_class (line 527) | def coco80_to_coco91_class():  # converts 80-index (val2014) to 91-index...
  function xyxy2xywh (line 539) | def xyxy2xywh(x):
  function xywh2xyxy (line 549) | def xywh2xyxy(x):
  function xywhn2xyxy (line 559) | def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
  function xyxy2xywhn (line 569) | def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0):
  function xyn2xy (line 581) | def xyn2xy(x, w=640, h=640, padw=0, padh=0):
  function segment2box (line 589) | def segment2box(segment, width=640, height=640):
  function segments2boxes (line 597) | def segments2boxes(segments):
  function resample_segments (line 606) | def resample_segments(segments, n=1000):
  function scale_coords (line 615) | def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
  function clip_coords (line 631) | def clip_coords(boxes, shape):
  function non_max_suppression (line 643) | def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, cla...
  function strip_optimizer (line 737) | def strip_optimizer(f='best.pt', s=''):  # from utils.general import *; ...
  function print_mutation (line 753) | def print_mutation(results, hyp, save_dir, bucket):
  function apply_classifier (line 792) | def apply_classifier(x, model, img, im0):
  function increment_path (line 828) | def increment_path(path, exist_ok=False, sep='', mkdir=False):
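
  Usage sketch (hedged): the post-processing helpers on a tiny synthetic prediction. Each candidate row is (x, y, w, h, objectness, class scores...); non_max_suppression keeps boxes whose objectness times class score clears conf_thres and suppresses near-duplicates, and scale_coords maps the surviving xyxy boxes from the 640x640 network input back to an assumed 480x640 original image.

    import torch
    from utils.general import non_max_suppression, scale_coords

    # 1 image, 3 candidate boxes, 2 classes -> shape (1, 3, 5 + 2)
    pred = torch.tensor([[[320, 320, 100, 80, 0.90, 0.85, 0.10],
                          [322, 318, 102, 78, 0.88, 0.80, 0.15],   # near-duplicate of the first box
                          [100, 100,  40, 40, 0.20, 0.50, 0.50]]], # low objectness, filtered out
                        dtype=torch.float32)

    det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]  # (n, 6): x1, y1, x2, y2, conf, cls
    det[:, :4] = scale_coords((640, 640), det[:, :4], (480, 640))        # network input -> original shape
    print(det)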

FILE: utils/loggers/__init__.py
  class Loggers (line 37) | class Loggers():
    method __init__ (line 39) | def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, lo...
    method on_pretrain_routine_end (line 76) | def on_pretrain_routine_end(self):
    method on_train_batch_end (line 82) | def on_train_batch_end(self, ni, model, imgs, targets, paths, plots, s...
    method on_train_epoch_end (line 97) | def on_train_epoch_end(self, epoch):
    method on_val_image_end (line 102) | def on_val_image_end(self, pred, predn, path, names, im):
    method on_val_end (line 107) | def on_val_end(self):
    method on_fit_epoch_end (line 113) | def on_fit_epoch_end(self, vals, epoch, best_fitness, fi):
    method on_model_save (line 131) | def on_model_save(self, last, epoch, final_epoch, best_fitness, fi):
    method on_train_end (line 137) | def on_train_end(self, last, best, plots, epoch, results):

FILE: utils/loggers/wandb/log_dataset.py
  function create_dataset_artifact (line 10) | def create_dataset_artifact(opt):

FILE: utils/loggers/wandb/sweep.py
  function sweep (line 17) | def sweep():

FILE: utils/loggers/wandb/wandb_utils.py
  function remove_prefix (line 32) | def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
  function check_wandb_config_file (line 36) | def check_wandb_config_file(data_config_file):
  function check_wandb_dataset (line 43) | def check_wandb_dataset(data_file):
  function get_run_info (line 59) | def get_run_info(run_path):
  function check_wandb_resume (line 68) | def check_wandb_resume(opt):
  function process_wandb_config_ddp_mode (line 82) | def process_wandb_config_ddp_mode(opt):
  class WandbLogger (line 106) | class WandbLogger():
    method __init__ (line 120) | def __init__(self, opt, run_id=None, job_type='Training'):
    method check_and_upload_dataset (line 192) | def check_and_upload_dataset(self, opt):
    method setup_training (line 210) | def setup_training(self, opt):
    method download_dataset_artifact (line 260) | def download_dataset_artifact(self, path, alias):
    method download_model_artifact (line 280) | def download_model_artifact(self, opt):
    method log_model (line 298) | def log_model(self, path, opt, epoch, fitness_score, best_model=False):
    method log_dataset_artifact (line 322) | def log_dataset_artifact(self, data_file, single_cls, project, overwri...
    method map_val_table_path (line 379) | def map_val_table_path(self):
    method create_dataset_table (line 389) | def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_...
    method log_training_progress (line 431) | def log_training_progress(self, predn, path, names):
    method val_one_image (line 472) | def val_one_image(self, pred, predn, path, names, im):
    method log (line 494) | def log(self, log_dict):
    method end_epoch (line 505) | def end_epoch(self, best_result=False):
    method finish_run (line 537) | def finish_run(self):
  function all_logging_disabled (line 549) | def all_logging_disabled(highest_level=logging.CRITICAL):

FILE: utils/loss.py
  function smooth_BCE (line 13) | def smooth_BCE(eps=0.1):  # https://github.com/ultralytics/yolov3/issues...
  class BCEBlurWithLogitsLoss (line 18) | class BCEBlurWithLogitsLoss(nn.Module):
    method __init__ (line 20) | def __init__(self, alpha=0.05):
    method forward (line 25) | def forward(self, pred, true):
  class FocalLoss (line 35) | class FocalLoss(nn.Module):
    method __init__ (line 37) | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
    method forward (line 45) | def forward(self, pred, true):
  class QFocalLoss (line 65) | class QFocalLoss(nn.Module):
    method __init__ (line 67) | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
    method forward (line 75) | def forward(self, pred, true):
  class ComputeLoss (line 91) | class ComputeLoss:
    method __init__ (line 93) | def __init__(self, model, autobalance=False):
    method __call__ (line 117) | def __call__(self, p, targets):  # predictions, targets, model
    method build_targets (line 169) | def build_targets(self, p, targets):
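
  Usage sketch (hedged): wiring ComputeLoss the way train.py does, but on a dummy batch so it stays self-contained. The model is built from models/yolov5s.yaml, model.hyp is filled from data/hyps/hyp.scratch.yaml (ComputeLoss reads the loss gains from it), and an empty target tensor stands in for real labels.

    import torch
    import yaml
    from models.yolo import Model
    from utils.loss import ComputeLoss

    with open('data/hyps/hyp.scratch.yaml') as f:
        hyp = yaml.safe_load(f)

    model = Model('models/yolov5s.yaml')
    model.hyp = hyp                                    # ComputeLoss reads box/obj/cls gains etc. from here
    model.train()                                      # Detect head returns raw per-layer outputs

    pred = model(torch.zeros(1, 3, 640, 640))          # list of 3 feature-map predictions
    targets = torch.zeros((0, 6))                      # rows of (image idx, class, x, y, w, h); empty here
    compute_loss = ComputeLoss(model)
    loss, loss_items = compute_loss(pred, targets)     # total loss and detached (box, obj, cls) components
    print(loss.item(), loss_items)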

FILE: utils/metrics.py
  function fitness (line 15) | def fitness(x):
  function ap_per_class (line 21) | def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='....
  function compute_ap (line 89) | def compute_ap(recall, precision):
  class ConfusionMatrix (line 117) | class ConfusionMatrix:
    method __init__ (line 119) | def __init__(self, nc, conf=0.25, iou_thres=0.45):
    method process_batch (line 125) | def process_batch(self, detections, labels):
    method matrix (line 165) | def matrix(self):
    method tp_fp (line 168) | def tp_fp(self):
    method plot (line 174) | def plot(self, normalize=True, save_dir='', names=()):
    method print (line 196) | def print(self):
  function bbox_iou (line 201) | def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=Fal...
  function box_iou (line 246) | def box_iou(box1, box2):
  function bbox_ioa (line 271) | def bbox_ioa(box1, box2, eps=1E-7):
  function wh_iou (line 295) | def wh_iou(wh1, wh2):
  function plot_pr_curve (line 305) | def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
  function plot_mc_curve (line 326) | def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Con...
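
  Usage sketch (hedged): the two IoU helpers on hand-made xyxy boxes. bbox_iou compares one box against each row of a second tensor (optionally as GIoU/DIoU/CIoU), while box_iou returns the full pairwise IoU matrix.

    import torch
    from utils.metrics import bbox_iou, box_iou

    a = torch.tensor([0., 0., 10., 10.])               # single box, x1 y1 x2 y2
    b = torch.tensor([[5., 5., 15., 15.],
                      [20., 20., 30., 30.]])           # two candidate boxes

    print(bbox_iou(a, b, x1y1x2y2=True, CIoU=True))    # CIoU of a against each row of b
    print(box_iou(a.unsqueeze(0), b))                  # (1, 2) pairwise IoU matrix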

FILE: utils/plots.py
  class Colors (line 31) | class Colors:
    method __init__ (line 33) | def __init__(self):
    method __call__ (line 40) | def __call__(self, i, bgr=False):
    method hex2rgb (line 45) | def hex2rgb(h):  # rgb order (PIL)
  function check_font (line 52) | def check_font(font='Arial.ttf', size=10):
  class Annotator (line 68) | class Annotator:
    method __init__ (line 73) | def __init__(self, im, line_width=None, font_size=None, font='Arial.tt...
    method box_label (line 85) | def box_label(self, box, label='', color=(128, 128, 128), txt_color=(2...
    method rectangle (line 110) | def rectangle(self, xy, fill=None, outline=None, width=1):
    method text (line 114) | def text(self, xy, text, txt_color=(255, 255, 255)):
    method result (line 119) | def result(self):
  function feature_visualization (line 124) | def feature_visualization(x, module_type, stage, n=32, save_dir=Path('ru...
  function hist2d (line 152) | def hist2d(x, y, n=100):
  function butter_lowpass_filtfilt (line 161) | def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
  function output_to_target (line 174) | def output_to_target(output):
  function plot_images (line 183) | def plot_images(images, targets, paths=None, fname='images.jpg', names=N...
  function plot_lr_scheduler (line 244) | def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
  function plot_val_txt (line 261) | def plot_val_txt():  # from utils.plots import *; plot_val()
  function plot_targets_txt (line 278) | def plot_targets_txt():  # from utils.plots import *; plot_targets_txt()
  function plot_val_study (line 291) | def plot_val_study(file='', dir='', x=None):  # from utils.plots import ...
  function plot_labels (line 330) | def plot_labels(labels, names=(), save_dir=Path('')):
  function plot_evolve (line 374) | def plot_evolve(evolve_csv='path/to/evolve.csv'):  # from utils.plots im...
  function plot_results (line 400) | def plot_results(file='path/to/results.csv', dir=''):
  function profile_idetection (line 426) | def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
  function save_one_box (line 457) | def save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=F...
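
  Usage sketch (hedged): drawing one labelled box with Annotator on a blank canvas and cycling the built-in palette via colors(); saving through OpenCV and the 'mask 0.92' label text are illustrative only.

    import cv2
    import numpy as np
    from utils.plots import Annotator, colors

    im = np.full((480, 640, 3), 114, dtype=np.uint8)                   # gray BGR canvas
    annotator = Annotator(im, line_width=2)
    annotator.box_label([100, 100, 300, 260], 'mask 0.92', color=colors(0, True))
    cv2.imwrite('annotated.jpg', annotator.result())                   # result() returns the drawn array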

FILE: utils/torch_utils.py
  function torch_distributed_zero_first (line 30) | def torch_distributed_zero_first(local_rank: int):
  function date_modified (line 41) | def date_modified(path=__file__):
  function git_describe (line 47) | def git_describe(path=Path(__file__).parent):  # path must be a directory
  function select_device (line 56) | def select_device(device='', batch_size=0, newline=True):
  function time_sync (line 86) | def time_sync():
  function profile (line 93) | def profile(input, ops, n=10, device=None):
  function is_parallel (line 145) | def is_parallel(model):
  function de_parallel (line 150) | def de_parallel(model):
  function initialize_weights (line 155) | def initialize_weights(model):
  function find_modules (line 167) | def find_modules(model, mclass=nn.Conv2d):
  function sparsity (line 172) | def sparsity(model):
  function prune (line 181) | def prune(model, amount=0.3):
  function fuse_conv_and_bn (line 192) | def fuse_conv_and_bn(conv, bn):
  function model_info (line 215) | def model_info(model, verbose=False, img_size=640):
  function scale_img (line 239) | def scale_img(img, ratio=1.0, same_shape=False, gs=32):  # img(16,3,256,...
  function copy_attr (line 252) | def copy_attr(a, b, include=(), exclude=()):
  class EarlyStopping (line 261) | class EarlyStopping:
    method __init__ (line 263) | def __init__(self, patience=30):
    method __call__ (line 269) | def __call__(self, epoch, fitness):
  class ModelEMA (line 284) | class ModelEMA:
    method __init__ (line 294) | def __init__(self, model, decay=0.9999, updates=0):
    method update (line 304) | def update(self, model):
    method update_attr (line 316) | def update_attr(self, model, include=(), exclude=('process_group', 're...
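
  Usage sketch (hedged): a few of the helpers above on a stand-in module. select_device('') falls back to CPU when CUDA is unavailable, time_sync() is a CUDA-aware timestamp, and ModelEMA keeps an exponential moving average of the weights of whatever nn.Module it wraps.

    import torch.nn as nn
    from utils.torch_utils import ModelEMA, select_device, time_sync

    device = select_device('')               # '' -> first CUDA device if available, else CPU
    model = nn.Linear(10, 2).to(device)      # stand-in for a YOLOv5 model
    ema = ModelEMA(model)

    t0 = time_sync()
    for _ in range(3):                       # pretend optimizer steps
        ema.update(model)
    print(f'{ema.updates} EMA updates in {time_sync() - t0:.4f}s')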

FILE: val.py
  function save_one_txt (line 37) | def save_one_txt(predn, save_conf, shape, file):
  function save_one_json (line 47) | def save_one_json(predn, jdict, path, class_map):
  function process_batch (line 59) | def process_batch(detections, labels, iouv):
  function run (line 84) | def run(data,
  function parse_opt (line 306) | def parse_opt():
  function main (line 337) | def main(opt):
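
  Usage sketch (hedged): calling validation from Python rather than the CLI. Only the first parameter (data) is visible in the signature above; the remaining keyword names below mirror the usual parse_opt() defaults, and 'best.pt' is a placeholder checkpoint.

    import val

    results = val.run(data='data/mask.yaml', weights='best.pt', imgsz=640, batch_size=16)
    print(results)  # bundles summary metrics (P, R, mAPs, val losses), per-class mAPs and timings
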
Condensed preview — 102 files, each showing path, character count, and a content snippet (full structured content: 628K chars).
[
  {
    "path": ".dockerignore",
    "chars": 3684,
    "preview": "# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------"
  },
  {
    "path": ".gitattributes",
    "chars": 75,
    "preview": "# this drop notebooks from GitHub language stats\n*.ipynb linguist-vendored\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.md",
    "chars": 1532,
    "preview": "---\nname: \"🐛 Bug report\"\nabout: Create a report to help us improve\ntitle: ''\nlabels: bug\nassignees: ''\n\n---\n\nBefore subm"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature-request.md",
    "chars": 731,
    "preview": "---\nname: \"🚀 Feature request\"\nabout: Suggest an idea for this project\ntitle: ''\nlabels: enhancement\nassignees: ''\n\n---\n\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/question.md",
    "chars": 136,
    "preview": "---\nname: \"❓Question\"\nabout: Ask a general question\ntitle: ''\nlabels: question\nassignees: ''\n\n---\n\n## ❔Question\n\n\n## Add"
  },
  {
    "path": ".github/dependabot.yml",
    "chars": 201,
    "preview": "version: 2\nupdates:\n- package-ecosystem: pip\n  directory: \"/\"\n  schedule:\n    interval: weekly\n    time: \"04:00\"\n  open-"
  },
  {
    "path": ".github/workflows/ci-testing.yml",
    "chars": 3020,
    "preview": "name: CI CPU testing\n\non:  # https://help.github.com/en/actions/reference/events-that-trigger-workflows\n  push:\n    bran"
  },
  {
    "path": ".github/workflows/codeql-analysis.yml",
    "chars": 1993,
    "preview": "# This action runs GitHub's industry-leading static analysis engine, CodeQL, against a repository's source code to find "
  },
  {
    "path": ".github/workflows/greetings.yml",
    "chars": 4884,
    "preview": "name: Greetings\n\non: [pull_request_target, issues]\n\njobs:\n  greeting:\n    runs-on: ubuntu-latest\n    steps:\n      - uses"
  },
  {
    "path": ".github/workflows/rebase.yml",
    "chars": 542,
    "preview": "name: Automatic Rebase\n# https://github.com/marketplace/actions/automatic-rebase\n\non:\n  issue_comment:\n    types: [creat"
  },
  {
    "path": ".github/workflows/stale.yml",
    "chars": 852,
    "preview": "name: Close stale issues\non:\n  schedule:\n    - cron: \"0 0 * * *\"\n\njobs:\n  stale:\n    runs-on: ubuntu-latest\n    steps:\n "
  },
  {
    "path": ".gitignore",
    "chars": 3913,
    "preview": "# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 1553,
    "preview": "# Define hooks for code formations\n# Will be applied on any updated commit files if a user has installed and linked comm"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 4939,
    "preview": "## Contributing to YOLOv5 🚀\n\nWe love your input! We want to make contributing to YOLOv5 as easy and transparent as possi"
  },
  {
    "path": "Dockerfile",
    "chars": 2160,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/"
  },
  {
    "path": "LICENSE",
    "chars": 35127,
    "preview": "GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation,"
  },
  {
    "path": "README.md",
    "chars": 1483,
    "preview": "## 仓库说明:\n\n本仓库是针对基于EASY-EAI-Nano(RV1126)从PC端模型训练、模型单步测试、pytorch模型转换为onnx模型的流程说明,并以口罩检测为例子说明。而模型如何部署到硬件主板上,完整的在线文档教程可以查看以下"
  },
  {
    "path": "data/Argoverse.yaml",
    "chars": 2734,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~meng"
  },
  {
    "path": "data/GlobalWheat2020.yaml",
    "chars": 1863,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Global Wheat 2020 dataset http://www.global-wheat.com/\n# Example usage: pyt"
  },
  {
    "path": "data/Objects365.yaml",
    "chars": 8074,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Objects365 dataset https://www.objects365.org/\n# Example usage: python trai"
  },
  {
    "path": "data/SKU-110K.yaml",
    "chars": 2333,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19\n# Ex"
  },
  {
    "path": "data/VisDrone.yaml",
    "chars": 2904,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset\n# Exa"
  },
  {
    "path": "data/coco.yaml",
    "chars": 1649,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# COCO 2017 dataset http://cocodataset.org\n# Example usage: python train.py -"
  },
  {
    "path": "data/coco128.yaml",
    "chars": 1685,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 image"
  },
  {
    "path": "data/hyps/hyp.finetune.yaml",
    "chars": 904,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for VOC finetuning\n# python train.py --batch 64 --weights y"
  },
  {
    "path": "data/hyps/hyp.finetune_objects365.yaml",
    "chars": 457,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\nlr0: 0.00258\nlrf: 0.17\nmomentum: 0.779\nweight_decay: 0.00058\nwarmup_epochs: "
  },
  {
    "path": "data/hyps/hyp.scratch-high.yaml",
    "chars": 1680,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for high-augmentation COCO training from scratch\n# python t"
  },
  {
    "path": "data/hyps/hyp.scratch-low.yaml",
    "chars": 1688,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for low-augmentation COCO training from scratch\n# python tr"
  },
  {
    "path": "data/hyps/hyp.scratch-med.yaml",
    "chars": 1682,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for medium-augmentation COCO training from scratch\n# python"
  },
  {
    "path": "data/hyps/hyp.scratch.yaml",
    "chars": 1660,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for COCO training from scratch\n# python train.py --batch 40"
  },
  {
    "path": "data/mask.yaml",
    "chars": 540,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# COCO 2017 dataset http://cocodataset.org\n# Example usage: python train.py -"
  },
  {
    "path": "data/scripts/download_weights.sh",
    "chars": 494,
    "preview": "#!/bin/bash\n# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Download latest models from https://github.com/ultralytics/yolo"
  },
  {
    "path": "data/scripts/get_coco.sh",
    "chars": 877,
    "preview": "#!/bin/bash\n# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Download COCO 2017 dataset http://cocodataset.org\n# Example usa"
  },
  {
    "path": "data/scripts/get_coco128.sh",
    "chars": 592,
    "preview": "#!/bin/bash\n# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Download COCO128 dataset https://www.kaggle.com/ultralytics/coc"
  },
  {
    "path": "data/voc.yaml",
    "chars": 3358,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC\n# Example usage: "
  },
  {
    "path": "data/xView.yaml",
    "chars": 5010,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# xView 2018 dataset https://challenge.xviewdataset.org\n# --------  DOWNLOAD "
  },
  {
    "path": "detect.py",
    "chars": 12357,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nRun inference on images, videos, directories, streams, etc.\n\nUsage:\n    $"
  },
  {
    "path": "export.py",
    "chars": 24402,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nExport a YOLOv5 PyTorch model to other formats. TensorFlow exports author"
  },
  {
    "path": "hubconf.py",
    "chars": 6378,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/\n\nUsage:\n  "
  },
  {
    "path": "models/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "models/common.py",
    "chars": 33330,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nCommon modules\n\"\"\"\nimport os\nimport json\nimport math\nimport platform\nimpo"
  },
  {
    "path": "models/common_rk_plug_in.py",
    "chars": 4129,
    "preview": "# This file contains modules common to various models\n\nimport torch\nimport torch.nn as nn\nfrom models.common import Conv"
  },
  {
    "path": "models/experimental.py",
    "chars": 4581,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nExperimental modules\n\"\"\"\nimport math\n\nimport numpy as np\nimport torch\nimp"
  },
  {
    "path": "models/hub/anchors.yaml",
    "chars": 3332,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Default anchors for COCO data\n\n\n# P5 --------------------------------------"
  },
  {
    "path": "models/hub/yolov3-spp.yaml",
    "chars": 1564,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov3-tiny.yaml",
    "chars": 1229,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov3.yaml",
    "chars": 1557,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5-bifpn.yaml",
    "chars": 1420,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5-fpn.yaml",
    "chars": 1211,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5-p2.yaml",
    "chars": 1655,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5-p6.yaml",
    "chars": 1701,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5-p7.yaml",
    "chars": 2073,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5-panet.yaml",
    "chars": 1404,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5l6.yaml",
    "chars": 1817,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/hub/yolov5m6.yaml",
    "chars": 1819,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.67  # model depth"
  },
  {
    "path": "models/hub/yolov5n6.yaml",
    "chars": 1819,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth"
  },
  {
    "path": "models/hub/yolov5s-ghost.yaml",
    "chars": 1480,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth"
  },
  {
    "path": "models/hub/yolov5s-transformer.yaml",
    "chars": 1438,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth"
  },
  {
    "path": "models/hub/yolov5s6.yaml",
    "chars": 1819,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth"
  },
  {
    "path": "models/hub/yolov5x6.yaml",
    "chars": 1819,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.33  # model depth"
  },
  {
    "path": "models/tf.py",
    "chars": 20646,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nTensorFlow, Keras and TFLite versions of YOLOv5\nAuthored by https://githu"
  },
  {
    "path": "models/yolo.py",
    "chars": 15110,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nYOLO-specific modules\n\nUsage:\n    $ python path/to/models/yolo.py --cfg y"
  },
  {
    "path": "models/yolov5l.yaml",
    "chars": 1398,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth "
  },
  {
    "path": "models/yolov5m.yaml",
    "chars": 1400,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.67  # model depth"
  },
  {
    "path": "models/yolov5n.yaml",
    "chars": 1400,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth"
  },
  {
    "path": "models/yolov5s.yaml",
    "chars": 1400,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth"
  },
  {
    "path": "models/yolov5x.yaml",
    "chars": 1400,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.33  # model depth"
  },
  {
    "path": "requirements.txt",
    "chars": 892,
    "preview": "# pip install -r requirements.txt\n\n# Base ----------------------------------------\nmatplotlib>=3.2.2\nnumpy>=1.18.5\nopenc"
  },
  {
    "path": "setup.cfg",
    "chars": 923,
    "preview": "# Project-wide configuration file, can be used for package metadata and other toll configurations\n# Example usage: globa"
  },
  {
    "path": "train.py",
    "chars": 32665,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nTrain a YOLOv5 model on a custom dataset\n\nUsage:\n    $ python path/to/tra"
  },
  {
    "path": "tutorial.ipynb",
    "chars": 56970,
    "preview": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"YOLOv5 Tutorial\",\n      \"provena"
  },
  {
    "path": "utils/__init__.py",
    "chars": 1098,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nutils/initialization\n\"\"\"\n\n\ndef notebook_init(verbose=True):\n    # Check s"
  },
  {
    "path": "utils/activations.py",
    "chars": 3774,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nActivation functions\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch"
  },
  {
    "path": "utils/augmentations.py",
    "chars": 11729,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nImage augmentation functions\n\"\"\"\n\nimport math\nimport random\n\nimport cv2\ni"
  },
  {
    "path": "utils/autoanchor.py",
    "chars": 7157,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nAuto-anchor utils\n\"\"\"\n\nimport random\n\nimport numpy as np\nimport torch\nimp"
  },
  {
    "path": "utils/autobatch.py",
    "chars": 2175,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nAuto-batch utils\n\"\"\"\n\nfrom copy import deepcopy\n\nimport numpy as np\nimpor"
  },
  {
    "path": "utils/aws/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "utils/aws/mime.sh",
    "chars": 780,
    "preview": "# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/\n#"
  },
  {
    "path": "utils/aws/resume.py",
    "chars": 1198,
    "preview": "# Resume all interrupted trainings in yolov5/ dir including DDP trainings\n# Usage: $ python utils/aws/resume.py\n\nimport "
  },
  {
    "path": "utils/aws/userdata.sh",
    "chars": 1247,
    "preview": "#!/bin/bash\n# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html\n# This "
  },
  {
    "path": "utils/callbacks.py",
    "chars": 2358,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nCallback utils\n\"\"\"\n\n\nclass Callbacks:\n    \"\"\"\"\n    Handles all registered"
  },
  {
    "path": "utils/datasets.py",
    "chars": 45140,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nDataloaders and dataset utils\n\"\"\"\n\nimport glob\nimport hashlib\nimport json"
  },
  {
    "path": "utils/downloads.py",
    "chars": 6131,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nDownload utils\n\"\"\"\n\nimport os\nimport platform\nimport subprocess\nimport ti"
  },
  {
    "path": "utils/flask_rest_api/README.md",
    "chars": 1710,
    "preview": "# Flask REST API\n\n[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/w"
  },
  {
    "path": "utils/flask_rest_api/example_request.py",
    "chars": 299,
    "preview": "\"\"\"Perform test request\"\"\"\nimport pprint\n\nimport requests\n\nDETECTION_URL = \"http://localhost:5000/v1/object-detection/yo"
  },
  {
    "path": "utils/flask_rest_api/restapi.py",
    "chars": 1078,
    "preview": "\"\"\"\nRun a rest API exposing the yolov5s object detection model\n\"\"\"\nimport argparse\nimport io\n\nimport torch\nfrom flask im"
  },
  {
    "path": "utils/general.py",
    "chars": 34952,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nGeneral utils\n\"\"\"\n\nimport contextlib\nimport glob\nimport logging\nimport ma"
  },
  {
    "path": "utils/google_app_engine/Dockerfile",
    "chars": 821,
    "preview": "FROM gcr.io/google-appengine/python\n\n# Create a virtualenv for dependencies. This isolates these packages from\n# system-"
  },
  {
    "path": "utils/google_app_engine/additional_requirements.txt",
    "chars": 105,
    "preview": "# add these requirements in your app on top of the existing ones\npip==21.1\nFlask==1.0.2\ngunicorn==19.9.0\n"
  },
  {
    "path": "utils/google_app_engine/app.yaml",
    "chars": 174,
    "preview": "runtime: custom\nenv: flex\n\nservice: yolov5app\n\nliveness_check:\n  initial_delay_sec: 600\n\nmanual_scaling:\n  instances: 1\n"
  },
  {
    "path": "utils/loggers/__init__.py",
    "chars": 7007,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nLogging utils\n\"\"\"\n\nimport os\nimport warnings\nfrom threading import Thread"
  },
  {
    "path": "utils/loggers/wandb/README.md",
    "chars": 10815,
    "preview": "📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.\n* [About Weights &"
  },
  {
    "path": "utils/loggers/wandb/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "utils/loggers/wandb/log_dataset.py",
    "chars": 1032,
    "preview": "import argparse\n\nfrom wandb_utils import WandbLogger\n\nfrom utils.general import LOGGER\n\nWANDB_ARTIFACT_PREFIX = 'wandb-a"
  },
  {
    "path": "utils/loggers/wandb/sweep.py",
    "chars": 1142,
    "preview": "import sys\nfrom pathlib import Path\n\nimport wandb\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[3]  # YOLOv5 root"
  },
  {
    "path": "utils/loggers/wandb/sweep.yaml",
    "chars": 2463,
    "preview": "# Hyperparameters for training\n# To set range-\n# Provide min and max values as:\n#      parameter:\n#\n#         min: scala"
  },
  {
    "path": "utils/loggers/wandb/wandb_utils.py",
    "chars": 27099,
    "preview": "\"\"\"Utilities and tools for tracking runs with Weights & Biases.\"\"\"\n\nimport logging\nimport os\nimport sys\nfrom contextlib "
  },
  {
    "path": "utils/loss.py",
    "chars": 9653,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nLoss functions\n\"\"\"\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.metric"
  },
  {
    "path": "utils/metrics.py",
    "chars": 14078,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nModel validation metrics\n\"\"\"\n\nimport math\nimport warnings\nfrom pathlib im"
  },
  {
    "path": "utils/plots.py",
    "chars": 20563,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPlotting utils\n\"\"\"\n\nimport math\nimport os\nfrom copy import copy\nfrom path"
  },
  {
    "path": "utils/torch_utils.py",
    "chars": 13450,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPyTorch utils\n\"\"\"\n\nimport datetime\nimport math\nimport os\nimport platform\n"
  },
  {
    "path": "val.py",
    "chars": 17992,
    "preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nValidate a trained YOLOv5 model accuracy on a custom dataset\n\nUsage:\n    "
  }
]

About this extraction

This page contains the full source code of the EASY-EAI/yolov5 GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 102 files (588.6 KB), approximately 179.3k tokens, and a symbol index with 474 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract, a free GitHub repo-to-text converter for AI. Built by Nikandr Surkov.
