[
  {
    "path": ".dockerignore",
    "content": "# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------\n#.git\n.cache\n.idea\nruns\noutput\ncoco\nstorage.googleapis.com\n\ndata/samples/*\n**/results*.csv\n*.jpg\n\n# Neural Network weights -----------------------------------------------------------------------------------------------\n**/*.pt\n**/*.pth\n**/*.onnx\n**/*.engine\n**/*.mlmodel\n**/*.torchscript\n**/*.torchscript.pt\n**/*.tflite\n**/*.h5\n**/*.pb\n*_saved_model/\n*_web_model/\n\n# Below Copied From .gitignore -----------------------------------------------------------------------------------------\n# Below Copied From .gitignore -----------------------------------------------------------------------------------------\n\n\n# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\nwandb/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# dotenv\n.env\n\n# virtualenv\n.venv*\nvenv*/\nENV*/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n\n# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------\n\n# General\n.DS_Store\n.AppleDouble\n.LSOverride\n\n# Icon must end with two \\r\nIcon\nIcon?\n\n# Thumbnails\n._*\n\n# Files that might appear in the root of a volume\n.DocumentRevisions-V100\n.fseventsd\n.Spotlight-V100\n.TemporaryItems\n.Trashes\n.VolumeIcon.icns\n.com.apple.timemachine.donotpresent\n\n# Directories potentially created on remote AFP share\n.AppleDB\n.AppleDesktop\nNetwork Trash Folder\nTemporary Items\n.apdisk\n\n\n# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore\n# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm\n# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839\n\n# User-specific stuff:\n.idea/*\n.idea/**/workspace.xml\n.idea/**/tasks.xml\n.idea/dictionaries\n.html  # Bokeh Plots\n.pg  # TensorFlow Frozen Graphs\n.avi # videos\n\n# Sensitive or high-churn files:\n.idea/**/dataSources/\n.idea/**/dataSources.ids\n.idea/**/dataSources.local.xml\n.idea/**/sqlDataSources.xml\n.idea/**/dynamic.xml\n.idea/**/uiDesigner.xml\n\n# Gradle:\n.idea/**/gradle.xml\n.idea/**/libraries\n\n# CMake\ncmake-build-debug/\ncmake-build-release/\n\n# Mongo Explorer plugin:\n.idea/**/mongoSettings.xml\n\n## File-based project 
format:\n*.iws\n\n## Plugin-specific files:\n\n# IntelliJ\nout/\n\n# mpeltonen/sbt-idea plugin\n.idea_modules/\n\n# JIRA plugin\natlassian-ide-plugin.xml\n\n# Cursive Clojure plugin\n.idea/replstate.xml\n\n# Crashlytics plugin (for Android Studio and IntelliJ)\ncom_crashlytics_export_strings.xml\ncrashlytics.properties\ncrashlytics-build.properties\nfabric.properties\n"
  },
  {
    "path": ".gitattributes",
    "content": "# this drop notebooks from GitHub language stats\n*.ipynb linguist-vendored\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.md",
    "content": "---\nname: \"🐛 Bug report\"\nabout: Create a report to help us improve\ntitle: ''\nlabels: bug\nassignees: ''\n\n---\n\nBefore submitting a bug report, please be aware that your issue **must be reproducible** with all of the following, otherwise it is non-actionable, and we can not help you:\n - **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo\n - **Common dataset**: coco.yaml or coco128.yaml\n - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments\n \nIf this is a custom dataset/training question you **must include** your `train*.jpg`, `test*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.\n\n\n## 🐛 Bug\nA clear and concise description of what the bug is.\n\n\n## To Reproduce (REQUIRED)\n\nInput:\n```\nimport torch\n\na = torch.tensor([5])\nc = a / 0\n```\n\nOutput:\n```\nTraceback (most recent call last):\n  File \"/Users/glennjocher/opt/anaconda3/envs/env1/lib/python3.7/site-packages/IPython/core/interactiveshell.py\", line 3331, in run_code\n    exec(code_obj, self.user_global_ns, self.user_ns)\n  File \"<ipython-input-5-be04c762b799>\", line 5, in <module>\n    c = a / 0\nRuntimeError: ZeroDivisionError\n```\n\n\n## Expected behavior\nA clear and concise description of what you expected to happen.\n\n\n## Environment\nIf applicable, add screenshots to help explain your problem.\n\n - OS: [e.g. Ubuntu]\n - GPU [e.g. 2080 Ti]\n\n\n## Additional context\nAdd any other context about the problem here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature-request.md",
    "content": "---\nname: \"🚀 Feature request\"\nabout: Suggest an idea for this project\ntitle: ''\nlabels: enhancement\nassignees: ''\n\n---\n\n## 🚀 Feature\n<!-- A clear and concise description of the feature proposal -->\n\n## Motivation\n\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\n\n## Pitch\n\n<!-- A clear and concise description of what you want to happen. -->\n\n## Alternatives\n\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\n\n## Additional context\n\n<!-- Add any other context or screenshots about the feature request here. -->\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/question.md",
    "content": "---\nname: \"❓Question\"\nabout: Ask a general question\ntitle: ''\nlabels: question\nassignees: ''\n\n---\n\n## ❔Question\n\n\n## Additional context\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\nupdates:\n- package-ecosystem: pip\n  directory: \"/\"\n  schedule:\n    interval: weekly\n    time: \"04:00\"\n  open-pull-requests-limit: 10\n  reviewers:\n  - glenn-jocher\n  labels:\n  - dependencies\n"
  },
  {
    "path": ".github/workflows/ci-testing.yml",
    "content": "name: CI CPU testing\n\non:  # https://help.github.com/en/actions/reference/events-that-trigger-workflows\n  push:\n    branches: [ master ]\n  pull_request:\n    # The branches below must be a subset of the branches above\n    branches: [ master ]\n  schedule:\n    - cron: '0 0 * * *'  # Runs at 00:00 UTC every day\n\njobs:\n  cpu-tests:\n\n    runs-on: ${{ matrix.os }}\n    strategy:\n      fail-fast: false\n      matrix:\n        os: [ubuntu-latest, macos-latest, windows-latest]\n        python-version: [3.8]\n        model: ['yolov5s']  # models to test\n\n    # Timeout: https://stackoverflow.com/a/59076067/4521646\n    timeout-minutes: 50\n    steps:\n      - uses: actions/checkout@v2\n      - name: Set up Python ${{ matrix.python-version }}\n        uses: actions/setup-python@v2\n        with:\n          python-version: ${{ matrix.python-version }}\n\n      # Note: This uses an internal pip API and may not always work\n      # https://github.com/actions/cache/blob/master/examples.md#multiple-oss-in-a-workflow\n      - name: Get pip cache\n        id: pip-cache\n        run: |\n          python -c \"from pip._internal.locations import USER_CACHE_DIR; print('::set-output name=dir::' + USER_CACHE_DIR)\"\n\n      - name: Cache pip\n        uses: actions/cache@v1\n        with:\n          path: ${{ steps.pip-cache.outputs.dir }}\n          key: ${{ runner.os }}-${{ matrix.python-version }}-pip-${{ hashFiles('requirements.txt') }}\n          restore-keys: |\n            ${{ runner.os }}-${{ matrix.python-version }}-pip-\n\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          pip install -qr requirements.txt -f https://download.pytorch.org/whl/cpu/torch_stable.html\n          pip install -q onnx\n          python --version\n          pip --version\n          pip list\n        shell: bash\n\n      - name: Download data\n        run: |\n          # curl -L -o tmp.zip https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip\n          # unzip -q tmp.zip -d ../\n          # rm tmp.zip\n\n      - name: Tests workflow\n        run: |\n          # export PYTHONPATH=\"$PWD\"  # to run '$ python *.py' files in subdirectories\n          di=cpu # inference devices  # define device\n\n          # train\n          python train.py --img 128 --batch 16 --weights weights/${{ matrix.model }}.pt --cfg models/${{ matrix.model }}.yaml --epochs 1 --device $di\n          # detect\n          python detect.py --weights weights/${{ matrix.model }}.pt --device $di\n          python detect.py --weights runs/train/exp/weights/last.pt --device $di\n          # test\n          python test.py --img 128 --batch 16 --weights weights/${{ matrix.model }}.pt --device $di\n          python test.py --img 128 --batch 16 --weights runs/train/exp/weights/last.pt --device $di\n\n          python hubconf.py  # hub\n          python models/yolo.py --cfg models/${{ matrix.model }}.yaml  # inspect\n          python models/export.py --img 128 --batch 1 --weights weights/${{ matrix.model }}.pt  # export\n        shell: bash\n"
  },
  {
    "path": ".github/workflows/codeql-analysis.yml",
    "content": "# This action runs GitHub's industry-leading static analysis engine, CodeQL, against a repository's source code to find security vulnerabilities. \n# https://github.com/github/codeql-action\n\nname: \"CodeQL\"\n\non:\n  schedule:\n    - cron: '0 0 1 * *'  # Runs at 00:00 UTC on the 1st of every month\n\njobs:\n  analyze:\n    name: Analyze\n    runs-on: ubuntu-latest\n\n    strategy:\n      fail-fast: false\n      matrix:\n        language: [ 'python' ]\n        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]\n        # Learn more:\n        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed\n\n    steps:\n    - name: Checkout repository\n      uses: actions/checkout@v2\n\n    # Initializes the CodeQL tools for scanning.\n    - name: Initialize CodeQL\n      uses: github/codeql-action/init@v1\n      with:\n        languages: ${{ matrix.language }}\n        # If you wish to specify custom queries, you can do so here or in a config file.\n        # By default, queries listed here will override any specified in a config file.\n        # Prefix the list here with \"+\" to use these queries and those in the config file.\n        # queries: ./path/to/local/query, your-org/your-repo/queries@main\n\n    # Autobuild attempts to build any compiled languages  (C/C++, C#, or Java).\n    # If this step fails, then you should remove it and run the build manually (see below)\n    - name: Autobuild\n      uses: github/codeql-action/autobuild@v1\n\n    # ℹ️ Command-line programs to run using the OS shell.\n    # 📚 https://git.io/JvXDl\n\n    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines\n    #    and modify them (or add more) to build your code if your project\n    #    uses a compiled language\n\n    #- run: |\n    #   make bootstrap\n    #   make release\n\n    - name: Perform CodeQL Analysis\n      uses: github/codeql-action/analyze@v1\n"
  },
  {
    "path": ".github/workflows/greetings.yml",
    "content": "name: Greetings\n\non: [pull_request_target, issues]\n\njobs:\n  greeting:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/first-interaction@v1\n        with:\n          repo-token: ${{ secrets.GITHUB_TOKEN }}\n          pr-message: |\n            👋 Hello @${{ github.actor }}, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:\n            - ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:\n            ```bash\n            git remote add upstream https://github.com/ultralytics/yolov5.git\n            git fetch upstream\n            git checkout feature  # <----- replace 'feature' with local branch name\n            git rebase upstream/master\n            git push -u origin -f\n            ```\n            - ✅ Verify all Continuous Integration (CI) **checks are passing**.\n            - ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _\"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is.\"_  -Bruce Lee\n\n          issue-message: |\n            👋 Hello @${{ github.actor }}, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ [Tutorials](https://github.com/ultralytics/yolov5/wiki#tutorials) to get started, where you can find quickstart guides for simple tasks like [Custom Data Training](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) all the way to advanced concepts like [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607).\n\n            If this is a 🐛 Bug Report, please provide screenshots and **minimum viable code to reproduce your issue**, otherwise we can not help you.\n\n            If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online [W&B logging](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data#visualize) if available.\n\n            For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.\n\n            ## Requirements\n\n            Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`. 
To install run:\n            ```bash\n            $ pip install -r requirements.txt\n            ```\n\n            ## Environments\n            \n            YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n            \n            - **Google Colab and Kaggle** notebooks with free GPU: <a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a> <a href=\"https://www.kaggle.com/ultralytics/yolov5\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n            - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)\n            - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)\n            - **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href=\"https://hub.docker.com/r/ultralytics/yolov5\"><img src=\"https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker\" alt=\"Docker Pulls\"></a>\n            \n            \n            ## Status\n            \n            ![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)\n            \n            If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/models/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\n            \n"
  },
  {
    "path": ".github/workflows/rebase.yml",
    "content": "name: Automatic Rebase\n# https://github.com/marketplace/actions/automatic-rebase\n\non:\n  issue_comment:\n    types: [created]\n\njobs:\n  rebase:\n    name: Rebase\n    if: github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout the latest code\n        uses: actions/checkout@v2\n        with:\n          fetch-depth: 0\n      - name: Automatic Rebase\n        uses: cirrus-actions/rebase@1.3.1\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/stale.yml",
    "content": "name: Close stale issues\non:\n  schedule:\n    - cron: \"0 0 * * *\"\n\njobs:\n  stale:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/stale@v3\n        with:\n          repo-token: ${{ secrets.GITHUB_TOKEN }}\n          stale-issue-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'\n          stale-pr-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'\n          days-before-stale: 30\n          days-before-close: 5\n          exempt-issue-labels: 'documentation,tutorial'\n          operations-per-run: 100  # The maximum number of operations per run, used to control rate limiting.\n"
  },
  {
    "path": ".gitignore",
    "content": "# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------\n*.tif\n*.tiff\n*.heic\n*.TIF\n*.TIFF\n*.HEIC\n*.mp4\n*.mov\n*.MOV\n*.avi\n*.data\n*.json\n*.cfg\n!setup.cfg\n!cfg/yolov3*.cfg\n\nstorage.googleapis.com\nruns/*\ndata/*\n!data/images\n!data/*.yaml\n!data/hyps\n!data/scripts\n!data/images\n!data/images/zidane.jpg\n!data/images/bus.jpg\n!data/*.sh\n\nresults*.csv\n\n# Datasets -------------------------------------------------------------------------------------------------------------\ncoco/\ncoco128/\nVOC/\n\n# MATLAB GitIgnore -----------------------------------------------------------------------------------------------------\n*.m~\n*.mat\n!targets*.mat\n\n# Neural Network weights -----------------------------------------------------------------------------------------------\n*.weights\n*.pt\n*.pb\n*.onnx\n*.engine\n*.mlmodel\n*.torchscript\n*.tflite\n*.h5\n*_saved_model/\n*_web_model/\ndarknet53.conv.74\nyolov3-tiny.conv.15\n\n# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n/wandb/\n.installed.cfg\n*.egg\n\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# dotenv\n.env\n\n# virtualenv\n.venv*\nvenv*/\nENV*/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n\n# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------\n\n# General\n.DS_Store\n.AppleDouble\n.LSOverride\n\n# Icon must end with two \\r\nIcon\nIcon?\n\n# Thumbnails\n._*\n\n# Files that might appear in the root of a volume\n.DocumentRevisions-V100\n.fseventsd\n.Spotlight-V100\n.TemporaryItems\n.Trashes\n.VolumeIcon.icns\n.com.apple.timemachine.donotpresent\n\n# Directories potentially created on remote AFP share\n.AppleDB\n.AppleDesktop\nNetwork Trash Folder\nTemporary Items\n.apdisk\n\n\n# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore\n# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm\n# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839\n\n# User-specific stuff:\n.idea/*\n.idea/**/workspace.xml\n.idea/**/tasks.xml\n.idea/dictionaries\n.html  # Bokeh Plots\n.pg  # TensorFlow Frozen Graphs\n.avi # videos\n\n# Sensitive or high-churn 
files:\n.idea/**/dataSources/\n.idea/**/dataSources.ids\n.idea/**/dataSources.local.xml\n.idea/**/sqlDataSources.xml\n.idea/**/dynamic.xml\n.idea/**/uiDesigner.xml\n\n# Gradle:\n.idea/**/gradle.xml\n.idea/**/libraries\n\n# CMake\ncmake-build-debug/\ncmake-build-release/\n\n# Mongo Explorer plugin:\n.idea/**/mongoSettings.xml\n\n## File-based project format:\n*.iws\n\n## Plugin-specific files:\n\n# IntelliJ\nout/\n\n# mpeltonen/sbt-idea plugin\n.idea_modules/\n\n# JIRA plugin\natlassian-ide-plugin.xml\n\n# Cursive Clojure plugin\n.idea/replstate.xml\n\n# Crashlytics plugin (for Android Studio and IntelliJ)\ncom_crashlytics_export_strings.xml\ncrashlytics.properties\ncrashlytics-build.properties\nfabric.properties\n"
  },
  {
    "path": ".pre-commit-config.yaml",
    "content": "# Define hooks for code formations\n# Will be applied on any updated commit files if a user has installed and linked commit hook\n\ndefault_language_version:\n  python: python3.8\n\n# Define bot property if installed via https://github.com/marketplace/pre-commit-ci\nci:\n  autofix_prs: true\n  autoupdate_commit_msg: '[pre-commit.ci] pre-commit suggestions'\n  autoupdate_schedule: quarterly\n  # submodules: true\n\nrepos:\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    rev: v4.0.1\n    hooks:\n      - id: end-of-file-fixer\n      - id: trailing-whitespace\n      - id: check-case-conflict\n      - id: check-yaml\n      - id: check-toml\n      - id: pretty-format-json\n      - id: check-docstring-first\n\n  - repo: https://github.com/asottile/pyupgrade\n    rev: v2.23.1\n    hooks:\n      - id: pyupgrade\n        args: [--py36-plus]\n        name: Upgrade code\n\n  - repo: https://github.com/PyCQA/isort\n    rev: 5.9.3\n    hooks:\n      - id: isort\n        name: Sort imports\n\n  # TODO\n  #- repo: https://github.com/pre-commit/mirrors-yapf\n  #  rev: v0.31.0\n  #  hooks:\n  #    - id: yapf\n  #      name: formatting\n\n  # TODO\n  #- repo: https://github.com/executablebooks/mdformat\n  #  rev: 0.7.7\n  #  hooks:\n  #    - id: mdformat\n  #      additional_dependencies:\n  #        - mdformat-gfm\n  #        - mdformat-black\n  #        - mdformat_frontmatter\n\n  # TODO\n  #- repo: https://github.com/asottile/yesqa\n  #  rev: v1.2.3\n  #  hooks:\n  #    - id: yesqa\n\n  - repo: https://github.com/PyCQA/flake8\n    rev: 3.9.2\n    hooks:\n      - id: flake8\n        name: PEP8\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "## Contributing to YOLOv5 🚀\n\nWe love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's:\n\n- Reporting a bug\n- Discussing the current state of the code\n- Submitting a fix\n- Proposing a new feature\n- Becoming a maintainer\n\nYOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be\nhelping push the frontiers of what's possible in AI 😃!\n\n## Submitting a Pull Request (PR) 🛠️\n\nSubmitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:\n\n### 1. Select File to Update\n\nSelect `requirements.txt` to update by clicking on it in GitHub.\n<p align=\"center\"><img width=\"800\" alt=\"PR_step1\" src=\"https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png\"></p>\n\n### 2. Click 'Edit this file'\n\nButton is in top-right corner.\n<p align=\"center\"><img width=\"800\" alt=\"PR_step2\" src=\"https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png\"></p>\n\n### 3. Make Changes\n\nChange `matplotlib` version from `3.2.2` to `3.3`.\n<p align=\"center\"><img width=\"800\" alt=\"PR_step3\" src=\"https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png\"></p>\n\n### 4. Preview Changes and Submit PR\n\nClick on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**\nfor this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose\nchanges** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!\n<p align=\"center\"><img width=\"800\" alt=\"PR_step4\" src=\"https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png\"></p>\n\n### PR recommendations\n\nTo allow your work to be integrated as seamlessly as possible, we advise you to:\n\n- ✅ Verify your PR is **up-to-date with upstream/master.** If your PR is behind upstream/master an\n  automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may\n  be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature'\n  with the name of your local branch:\n\n  ```bash\n  git remote add upstream https://github.com/ultralytics/yolov5.git\n  git fetch upstream\n  git checkout feature  # <----- replace 'feature' with local branch name\n  git merge upstream/master\n  git push -u origin -f\n  ```\n\n- ✅ Verify all Continuous Integration (CI) **checks are passing**.\n- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _\"It is not daily increase\n  but daily decrease, hack away the unessential. The closer to the source, the less wastage there is.\"_  — Bruce Lee\n\n## Submitting a Bug Report 🐛\n\nIf you spot a problem with YOLOv5 please submit a Bug Report!\n\nFor us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few\nshort guidelines below to help users provide what we need in order to get started.\n\nWhen asking a question, people will be better able to provide help if you provide **code** that they can easily\nunderstand and use to **reproduce** the problem. 
This is referred to by community members as creating\na [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces\nthe problem should be:\n\n* ✅ **Minimal** – Use as little code as possible that still produces the same problem\n* ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself\n* ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem\n\nIn addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code\nshould be:\n\n* ✅ **Current** – Verify that your code is up-to-date with current\n  GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new\n  copy to ensure your problem has not already been resolved by previous commits.\n* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this\n  repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.\n\nIf you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **\nBug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing\na [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better\nunderstand and diagnose your problem.\n\n## License\n\nBy contributing, you agree that your contributions will be licensed under\nthe [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/)\n"
  },
  {
    "path": "Dockerfile",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch\nFROM nvcr.io/nvidia/pytorch:21.10-py3\n\n# Install linux packages\nRUN apt update && apt install -y zip htop screen libgl1-mesa-glx\n\n# Install python dependencies\nCOPY requirements.txt .\nRUN python -m pip install --upgrade pip\nRUN pip uninstall -y nvidia-tensorboard nvidia-tensorboard-plugin-dlprof\nRUN pip install --no-cache -r requirements.txt coremltools onnx gsutil notebook wandb>=0.12.2\nRUN pip install --no-cache -U torch torchvision numpy Pillow\n# RUN pip install --no-cache torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html\n\n# Create working directory\nRUN mkdir -p /usr/src/app\nWORKDIR /usr/src/app\n\n# Copy contents\nCOPY . /usr/src/app\n\n# Downloads to user config dir\nADD https://ultralytics.com/assets/Arial.ttf /root/.config/Ultralytics/\n\n# Set environment variables\n# ENV HOME=/usr/src/app\n\n\n# Usage Examples -------------------------------------------------------------------------------------------------------\n\n# Build and Push\n# t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t\n\n# Pull and Run\n# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t\n\n# Pull and Run with local directory access\n# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v \"$(pwd)\"/datasets:/usr/src/datasets $t\n\n# Kill all\n# sudo docker kill $(sudo docker ps -q)\n\n# Kill all image-based\n# sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/yolov5:latest)\n\n# Bash into running container\n# sudo docker exec -it 5a9b5863d93d bash\n\n# Bash into stopped container\n# id=$(sudo docker ps -qa) && sudo docker start $id && sudo docker exec -it $id bash\n\n# Clean up\n# docker system prune -a --volumes\n\n# Update Ubuntu drivers\n# https://www.maketecheasier.com/install-nvidia-drivers-ubuntu/\n\n# DDP test\n# python -m torch.distributed.run --nproc_per_node 2 --master_port 1 train.py --epochs 3\n\n# GCP VM from Image\n# docker.io/ultralytics/yolov5:latest\n"
  },
  {
    "path": "LICENSE",
    "content": "GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  
To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  
A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. 
Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  
Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  
The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. 
Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  
If not, see <http://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<http://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<http://www.gnu.org/philosophy/why-not-lgpl.html>.\n"
  },
  {
    "path": "README.md",
    "content": "## 仓库说明：\n\n本仓库是针对基于EASY-EAI-Nano(RV1126)从PC端模型训练、模型单步测试、pytorch模型转换为onnx模型的流程说明，并以口罩检测为例子说明。而模型如何部署到硬件主板上，完整的在线文档教程可以查看以下在线文档的链接：\n\n## 环境说明：\n\npython version >= 3.6\n\npytorch version >= 1.7\n\nonnx verison >= 1.11\n\n## 准备数据\n口罩检测数据百度链接：https://pan.baidu.com/s/1vtxWurn1Mqu-wJ017eaQrw 提取码：6666 \n\n数据集解压后(脚本在数据集里面)，执行以下脚本生成train.txt和valid.txt：\n```python\npython list_dataset_file.py\n```\n\n\n## 训练模型\n训练一个口罩检测模型，需要修改\"data/mask.yaml\"里面的train.txt和valid.txt的路径。训练脚本如下所示：\n```python\npython train.py --data mask.yaml --cfg yolov5s.yaml --weights \"\" --batch-size 64\n                                       yolov5m                                40\n                                       yolov5l                                24\n                                       yolov5x                                16\n```\n训练完成后会在\n\n## 模型预测\n测试训练好的模型：\n```python\npython detect.py --source data/images --weights ./runs/train/exp/weights/best.pt --conf 0.5\n```\n测试结果会在\"runs/detect\"生成：\n<img src=\"./photo/image.jpg\">\n\n\n## 模型导出\n执行以下指令把pt模型转换为onnx模型，同时会生成best.anchors.txt：\n```python\npython export.py --include onnx --rknpu RV1126 --weights ./runs/train/exp/weights/best.pt\n```\n\n\n### EASY-EAI-Nano基于NPU运行速度测试(单位:ms)：\n\n| 模型(640x640输入)         | EASY-EAI-Nano(RV1126)  |\n| :---------------------- | :-----------------------------------------:  |\n| yolov5s int8量化 |   52    |\n| yolov5m int8量化 |   93    |\n\n\n\n## 参考库：\n\nhttps://github.com/ultralytics/yolov5\n\nhttps://github.com/soloIife/yolov5_for_rknn\n\n\n## 技术交流群：\n\nQQ群：810456486\n\n\n\n"
  },
  {
    "path": "data/Argoverse.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/\n# Example usage: python train.py --data Argoverse.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── Argoverse  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/Argoverse  # dataset root dir\ntrain: Argoverse-1.1/images/train/  # train images (relative to 'path') 39384 images\nval: Argoverse-1.1/images/val/  # val images (relative to 'path') 15062 images\ntest: Argoverse-1.1/images/test/  # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview\n\n# Classes\nnc: 8  # number of classes\nnames: ['person',  'bicycle',  'car',  'motorcycle',  'bus',  'truck',  'traffic_light',  'stop_sign']  # class names\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  import json\n\n  from tqdm import tqdm\n  from utils.general import download, Path\n\n\n  def argoverse2yolo(set):\n      labels = {}\n      a = json.load(open(set, \"rb\"))\n      for annot in tqdm(a['annotations'], desc=f\"Converting {set} to YOLOv5 format...\"):\n          img_id = annot['image_id']\n          img_name = a['images'][img_id]['name']\n          img_label_name = img_name[:-3] + \"txt\"\n\n          cls = annot['category_id']  # instance class id\n          x_center, y_center, width, height = annot['bbox']\n          x_center = (x_center + width / 2) / 1920.0  # offset and scale\n          y_center = (y_center + height / 2) / 1200.0  # offset and scale\n          width /= 1920.0  # scale\n          height /= 1200.0  # scale\n\n          img_dir = set.parents[2] / 'Argoverse-1.1' / 'labels' / a['seq_dirs'][a['images'][annot['image_id']]['sid']]\n          if not img_dir.exists():\n              img_dir.mkdir(parents=True, exist_ok=True)\n\n          k = str(img_dir / img_label_name)\n          if k not in labels:\n              labels[k] = []\n          labels[k].append(f\"{cls} {x_center} {y_center} {width} {height}\\n\")\n\n      for k in labels:\n          with open(k, \"w\") as f:\n              f.writelines(labels[k])\n\n\n  # Download\n  dir = Path('../datasets/Argoverse')  # dataset root dir\n  urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip']\n  download(urls, dir=dir, delete=False)\n\n  # Convert\n  annotations_dir = 'Argoverse-HD/annotations/'\n  (dir / 'Argoverse-1.1' / 'tracking').rename(dir / 'Argoverse-1.1' / 'images')  # rename 'tracking' to 'images'\n  for d in \"train.json\", \"val.json\":\n      argoverse2yolo(dir / annotations_dir / d)  # convert VisDrone annotations to YOLO labels\n"
  },
  {
    "path": "data/GlobalWheat2020.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Global Wheat 2020 dataset http://www.global-wheat.com/\n# Example usage: python train.py --data GlobalWheat2020.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── GlobalWheat2020  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/GlobalWheat2020  # dataset root dir\ntrain: # train images (relative to 'path') 3422 images\n  - images/arvalis_1\n  - images/arvalis_2\n  - images/arvalis_3\n  - images/ethz_1\n  - images/rres_1\n  - images/inrae_1\n  - images/usask_1\nval: # val images (relative to 'path') 748 images (WARNING: train set contains ethz_1)\n  - images/ethz_1\ntest: # test images (optional) 1276 images\n  - images/utokyo_1\n  - images/utokyo_2\n  - images/nau_1\n  - images/uq_1\n\n# Classes\nnc: 1  # number of classes\nnames: ['wheat_head']  # class names\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  from utils.general import download, Path\n\n  # Download\n  dir = Path(yaml['path'])  # dataset root dir\n  urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',\n          'https://github.com/ultralytics/yolov5/releases/download/v1.0/GlobalWheat2020_labels.zip']\n  download(urls, dir=dir)\n\n  # Make Directories\n  for p in 'annotations', 'images', 'labels':\n      (dir / p).mkdir(parents=True, exist_ok=True)\n\n  # Move\n  for p in 'arvalis_1', 'arvalis_2', 'arvalis_3', 'ethz_1', 'rres_1', 'inrae_1', 'usask_1', \\\n           'utokyo_1', 'utokyo_2', 'nau_1', 'uq_1':\n      (dir / p).rename(dir / 'images' / p)  # move to /images\n      f = (dir / p).with_suffix('.json')  # json file\n      if f.exists():\n          f.rename((dir / 'annotations' / p).with_suffix('.json'))  # move to /annotations\n"
  },
  {
    "path": "data/Objects365.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Objects365 dataset https://www.objects365.org/\n# Example usage: python train.py --data Objects365.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── Objects365  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/Objects365  # dataset root dir\ntrain: images/train  # train images (relative to 'path') 1742289 images\nval: images/val # val images (relative to 'path') 80000 images\ntest:  # test images (optional)\n\n# Classes\nnc: 365  # number of classes\nnames: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',\n        'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',\n        'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',\n        'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',\n        'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',\n        'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',\n        'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',\n        'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',\n        'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',\n        'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',\n        'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',\n        'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',\n        'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',\n        'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',\n        'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',\n        'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',\n        'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',\n        'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',\n        'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',\n        'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',\n        'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',\n        'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',\n        'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',\n        'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',\n        'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',\n        'Chicken', 'Printer', 'Watermelon', 
'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',\n        'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',\n        'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',\n        'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',\n        'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',\n        'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',\n        'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',\n        'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',\n        'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',\n        'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',\n        'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',\n        'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',\n        'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',\n        'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',\n        'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',\n        'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  from pycocotools.coco import COCO\n  from tqdm import tqdm\n\n  from utils.general import Path, download, np, xyxy2xywhn\n\n  # Make Directories\n  dir = Path(yaml['path'])  # dataset root dir\n  for p in 'images', 'labels':\n      (dir / p).mkdir(parents=True, exist_ok=True)\n      for q in 'train', 'val':\n          (dir / p / q).mkdir(parents=True, exist_ok=True)\n\n  # Train, Val Splits\n  for split, patches in [('train', 50 + 1), ('val', 43 + 1)]:\n      print(f\"Processing {split} in {patches} patches ...\")\n      images, labels = dir / 'images' / split, dir / 'labels' / split\n\n      # Download\n      url = f\"https://dorc.ks3-cn-beijing.ksyun.com/data-set/2020Objects365%E6%95%B0%E6%8D%AE%E9%9B%86/{split}/\"\n      if split == 'train':\n          download([f'{url}zhiyuan_objv2_{split}.tar.gz'], dir=dir, delete=False)  # annotations json\n          download([f'{url}patch{i}.tar.gz' for i in range(patches)], dir=images, curl=True, delete=False, threads=8)\n      elif split == 'val':\n          download([f'{url}zhiyuan_objv2_{split}.json'], dir=dir, delete=False)  # annotations json\n          download([f'{url}images/v1/patch{i}.tar.gz' for i in range(15 + 1)], dir=images, curl=True, delete=False, threads=8)\n          download([f'{url}images/v2/patch{i}.tar.gz' for i in range(16, patches)], dir=images, curl=True, delete=False, threads=8)\n\n      # Move\n      for f in tqdm(images.rglob('*.jpg'), desc=f'Moving {split} images'):\n          f.rename(images / f.name)  # move to /images/{split}\n\n      # Labels\n      coco = COCO(dir / f'zhiyuan_objv2_{split}.json')\n      names 
= [x[\"name\"] for x in coco.loadCats(coco.getCatIds())]\n      for cid, cat in enumerate(names):\n          catIds = coco.getCatIds(catNms=[cat])\n          imgIds = coco.getImgIds(catIds=catIds)\n          for im in tqdm(coco.loadImgs(imgIds), desc=f'Class {cid + 1}/{len(names)} {cat}'):\n              width, height = im[\"width\"], im[\"height\"]\n              path = Path(im[\"file_name\"])  # image filename\n              try:\n                  with open(labels / path.with_suffix('.txt').name, 'a') as file:\n                      annIds = coco.getAnnIds(imgIds=im[\"id\"], catIds=catIds, iscrowd=None)\n                      for a in coco.loadAnns(annIds):\n                          x, y, w, h = a['bbox']  # bounding box in xywh (xy top-left corner)\n                          xyxy = np.array([x, y, x + w, y + h])[None]  # pixels(1,4)\n                          x, y, w, h = xyxy2xywhn(xyxy, w=width, h=height, clip=True)[0]  # normalized and clipped\n                          file.write(f\"{cid} {x:.5f} {y:.5f} {w:.5f} {h:.5f}\\n\")\n              except Exception as e:\n                  print(e)\n"
  },
  {
    "path": "data/SKU-110K.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19\n# Example usage: python train.py --data SKU-110K.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── SKU-110K  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/SKU-110K  # dataset root dir\ntrain: train.txt  # train images (relative to 'path')  8219 images\nval: val.txt  # val images (relative to 'path')  588 images\ntest: test.txt  # test images (optional)  2936 images\n\n# Classes\nnc: 1  # number of classes\nnames: ['object']  # class names\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  import shutil\n  from tqdm import tqdm\n  from utils.general import np, pd, Path, download, xyxy2xywh\n\n  # Download\n  dir = Path(yaml['path'])  # dataset root dir\n  parent = Path(dir.parent)  # download dir\n  urls = ['http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz']\n  download(urls, dir=parent, delete=False)\n\n  # Rename directories\n  if dir.exists():\n      shutil.rmtree(dir)\n  (parent / 'SKU110K_fixed').rename(dir)  # rename dir\n  (dir / 'labels').mkdir(parents=True, exist_ok=True)  # create labels dir\n\n  # Convert labels\n  names = 'image', 'x1', 'y1', 'x2', 'y2', 'class', 'image_width', 'image_height'  # column names\n  for d in 'annotations_train.csv', 'annotations_val.csv', 'annotations_test.csv':\n      x = pd.read_csv(dir / 'annotations' / d, names=names).values  # annotations\n      images, unique_images = x[:, 0], np.unique(x[:, 0])\n      with open((dir / d).with_suffix('.txt').__str__().replace('annotations_', ''), 'w') as f:\n          f.writelines(f'./images/{s}\\n' for s in unique_images)\n      for im in tqdm(unique_images, desc=f'Converting {dir / d}'):\n          cls = 0  # single-class dataset\n          with open((dir / 'labels' / im).with_suffix('.txt'), 'a') as f:\n              for r in x[images == im]:\n                  w, h = r[6], r[7]  # image width, height\n                  xywh = xyxy2xywh(np.array([[r[1] / w, r[2] / h, r[3] / w, r[4] / h]]))[0]  # instance\n                  f.write(f\"{cls} {xywh[0]:.5f} {xywh[1]:.5f} {xywh[2]:.5f} {xywh[3]:.5f}\\n\")  # write label\n"
  },
  {
    "path": "data/VisDrone.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset\n# Example usage: python train.py --data VisDrone.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── VisDrone  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/VisDrone  # dataset root dir\ntrain: VisDrone2019-DET-train/images  # train images (relative to 'path')  6471 images\nval: VisDrone2019-DET-val/images  # val images (relative to 'path')  548 images\ntest: VisDrone2019-DET-test-dev/images  # test images (optional)  1610 images\n\n# Classes\nnc: 10  # number of classes\nnames: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  from utils.general import download, os, Path\n\n  def visdrone2yolo(dir):\n      from PIL import Image\n      from tqdm import tqdm\n\n      def convert_box(size, box):\n          # Convert VisDrone box to YOLO xywh box\n          dw = 1. / size[0]\n          dh = 1. / size[1]\n          return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh\n\n      (dir / 'labels').mkdir(parents=True, exist_ok=True)  # make labels directory\n      pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')\n      for f in pbar:\n          img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size\n          lines = []\n          with open(f, 'r') as file:  # read annotation.txt\n              for row in [x.split(',') for x in file.read().strip().splitlines()]:\n                  if row[4] == '0':  # VisDrone 'ignored regions' class 0\n                      continue\n                  cls = int(row[5]) - 1\n                  box = convert_box(img_size, tuple(map(int, row[:4])))\n                  lines.append(f\"{cls} {' '.join(f'{x:.6f}' for x in box)}\\n\")\n                  with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:\n                      fl.writelines(lines)  # write label.txt\n\n\n  # Download\n  dir = Path(yaml['path'])  # dataset root dir\n  urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-train.zip',\n          'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',\n          'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',\n          'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']\n  download(urls, dir=dir)\n\n  # Convert\n  for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':\n      visdrone2yolo(dir / d)  # convert VisDrone annotations to YOLO labels\n"
  },
  {
    "path": "data/coco.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# COCO 2017 dataset http://cocodataset.org\n# Example usage: python train.py --data coco.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── coco  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\ntrain: D:/ai_project/coco/coco2017labels/coco/train2017.txt  # 118287 images\nval: D:/ai_project/coco/coco2017labels/coco/val2017.txt  # 5000 images\ntest: D:/ai_project/coco/coco2017labels/coco/test-dev2017.txt  # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794\n\n# Classes\nnc: 80  # number of classes\nnames: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',\n        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',\n        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',\n        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',\n        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\n        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',\n        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',\n        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',\n        'hair drier', 'toothbrush']  # class names\n\n\n# Download script/URL (optional)\n"
  },
  {
    "path": "data/coco128.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)\n# Example usage: python train.py --data coco128.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── coco128  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/coco128  # dataset root dir\ntrain: images/train2017  # train images (relative to 'path') 128 images\nval: images/train2017  # val images (relative to 'path') 128 images\ntest:  # test images (optional)\n\n# Classes\nnc: 80  # number of classes\nnames: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',\n        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',\n        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',\n        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',\n        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\n        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',\n        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',\n        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',\n        'hair drier', 'toothbrush']  # class names\n\n\n# Download script/URL (optional)\ndownload: https://ultralytics.com/assets/coco128.zip\n"
  },
  {
    "path": "data/hyps/hyp.finetune.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for VOC finetuning\n# python train.py --batch 64 --weights yolov5m.pt --data VOC.yaml --img 512 --epochs 50\n# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials\n\n# Hyperparameter Evolution Results\n# Generations: 306\n#                   P         R     mAP.5 mAP.5:.95       box       obj       cls\n# Metrics:        0.6     0.936     0.896     0.684    0.0115   0.00805   0.00146\n\nlr0: 0.0032\nlrf: 0.12\nmomentum: 0.843\nweight_decay: 0.00036\nwarmup_epochs: 2.0\nwarmup_momentum: 0.5\nwarmup_bias_lr: 0.05\nbox: 0.0296\ncls: 0.243\ncls_pw: 0.631\nobj: 0.301\nobj_pw: 0.911\niou_t: 0.2\nanchor_t: 2.91\n# anchors: 3.63\nfl_gamma: 0.0\nhsv_h: 0.0138\nhsv_s: 0.664\nhsv_v: 0.464\ndegrees: 0.373\ntranslate: 0.245\nscale: 0.898\nshear: 0.602\nperspective: 0.0\nflipud: 0.00856\nfliplr: 0.5\nmosaic: 1.0\nmixup: 0.243\ncopy_paste: 0.0\n"
  },
  {
    "path": "data/hyps/hyp.finetune_objects365.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\nlr0: 0.00258\nlrf: 0.17\nmomentum: 0.779\nweight_decay: 0.00058\nwarmup_epochs: 1.33\nwarmup_momentum: 0.86\nwarmup_bias_lr: 0.0711\nbox: 0.0539\ncls: 0.299\ncls_pw: 0.825\nobj: 0.632\nobj_pw: 1.0\niou_t: 0.2\nanchor_t: 3.44\nanchors: 3.2\nfl_gamma: 0.0\nhsv_h: 0.0188\nhsv_s: 0.704\nhsv_v: 0.36\ndegrees: 0.0\ntranslate: 0.0902\nscale: 0.491\nshear: 0.0\nperspective: 0.0\nflipud: 0.0\nfliplr: 0.5\nmosaic: 1.0\nmixup: 0.0\ncopy_paste: 0.0\n"
  },
  {
    "path": "data/hyps/hyp.scratch-high.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for high-augmentation COCO training from scratch\n# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300\n# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials\n\nlr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)\nlrf: 0.2  # final OneCycleLR learning rate (lr0 * lrf)\nmomentum: 0.937  # SGD momentum/Adam beta1\nweight_decay: 0.0005  # optimizer weight decay 5e-4\nwarmup_epochs: 3.0  # warmup epochs (fractions ok)\nwarmup_momentum: 0.8  # warmup initial momentum\nwarmup_bias_lr: 0.1  # warmup initial bias lr\nbox: 0.05  # box loss gain\ncls: 0.3  # cls loss gain\ncls_pw: 1.0  # cls BCELoss positive_weight\nobj: 0.7  # obj loss gain (scale with pixels)\nobj_pw: 1.0  # obj BCELoss positive_weight\niou_t: 0.20  # IoU training threshold\nanchor_t: 4.0  # anchor-multiple threshold\n# anchors: 3  # anchors per output layer (0 to ignore)\nfl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)\nhsv_h: 0.015  # image HSV-Hue augmentation (fraction)\nhsv_s: 0.7  # image HSV-Saturation augmentation (fraction)\nhsv_v: 0.4  # image HSV-Value augmentation (fraction)\ndegrees: 0.0  # image rotation (+/- deg)\ntranslate: 0.1  # image translation (+/- fraction)\nscale: 0.9  # image scale (+/- gain)\nshear: 0.0  # image shear (+/- deg)\nperspective: 0.0  # image perspective (+/- fraction), range 0-0.001\nflipud: 0.0  # image flip up-down (probability)\nfliplr: 0.5  # image flip left-right (probability)\nmosaic: 1.0  # image mosaic (probability)\nmixup: 0.1  # image mixup (probability)\ncopy_paste: 0.1  # segment copy-paste (probability)\n"
  },
  {
    "path": "data/hyps/hyp.scratch-low.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for low-augmentation COCO training from scratch\n# python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear\n# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials\n\nlr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)\nlrf: 0.01  # final OneCycleLR learning rate (lr0 * lrf)\nmomentum: 0.937  # SGD momentum/Adam beta1\nweight_decay: 0.0005  # optimizer weight decay 5e-4\nwarmup_epochs: 3.0  # warmup epochs (fractions ok)\nwarmup_momentum: 0.8  # warmup initial momentum\nwarmup_bias_lr: 0.1  # warmup initial bias lr\nbox: 0.05  # box loss gain\ncls: 0.5  # cls loss gain\ncls_pw: 1.0  # cls BCELoss positive_weight\nobj: 1.0  # obj loss gain (scale with pixels)\nobj_pw: 1.0  # obj BCELoss positive_weight\niou_t: 0.20  # IoU training threshold\nanchor_t: 4.0  # anchor-multiple threshold\n# anchors: 3  # anchors per output layer (0 to ignore)\nfl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)\nhsv_h: 0.015  # image HSV-Hue augmentation (fraction)\nhsv_s: 0.7  # image HSV-Saturation augmentation (fraction)\nhsv_v: 0.4  # image HSV-Value augmentation (fraction)\ndegrees: 0.0  # image rotation (+/- deg)\ntranslate: 0.1  # image translation (+/- fraction)\nscale: 0.5  # image scale (+/- gain)\nshear: 0.0  # image shear (+/- deg)\nperspective: 0.0  # image perspective (+/- fraction), range 0-0.001\nflipud: 0.0  # image flip up-down (probability)\nfliplr: 0.5  # image flip left-right (probability)\nmosaic: 1.0  # image mosaic (probability)\nmixup: 0.0  # image mixup (probability)\ncopy_paste: 0.0  # segment copy-paste (probability)\n"
  },
  {
    "path": "data/hyps/hyp.scratch-med.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for medium-augmentation COCO training from scratch\n# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300\n# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials\n\nlr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)\nlrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)\nmomentum: 0.937  # SGD momentum/Adam beta1\nweight_decay: 0.0005  # optimizer weight decay 5e-4\nwarmup_epochs: 3.0  # warmup epochs (fractions ok)\nwarmup_momentum: 0.8  # warmup initial momentum\nwarmup_bias_lr: 0.1  # warmup initial bias lr\nbox: 0.05  # box loss gain\ncls: 0.3  # cls loss gain\ncls_pw: 1.0  # cls BCELoss positive_weight\nobj: 0.7  # obj loss gain (scale with pixels)\nobj_pw: 1.0  # obj BCELoss positive_weight\niou_t: 0.20  # IoU training threshold\nanchor_t: 4.0  # anchor-multiple threshold\n# anchors: 3  # anchors per output layer (0 to ignore)\nfl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)\nhsv_h: 0.015  # image HSV-Hue augmentation (fraction)\nhsv_s: 0.7  # image HSV-Saturation augmentation (fraction)\nhsv_v: 0.4  # image HSV-Value augmentation (fraction)\ndegrees: 0.0  # image rotation (+/- deg)\ntranslate: 0.1  # image translation (+/- fraction)\nscale: 0.9  # image scale (+/- gain)\nshear: 0.0  # image shear (+/- deg)\nperspective: 0.0  # image perspective (+/- fraction), range 0-0.001\nflipud: 0.0  # image flip up-down (probability)\nfliplr: 0.5  # image flip left-right (probability)\nmosaic: 1.0  # image mosaic (probability)\nmixup: 0.1  # image mixup (probability)\ncopy_paste: 0.0  # segment copy-paste (probability)\n"
  },
  {
    "path": "data/hyps/hyp.scratch.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Hyperparameters for COCO training from scratch\n# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300\n# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials\n\nlr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)\nlrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)\nmomentum: 0.937  # SGD momentum/Adam beta1\nweight_decay: 0.0005  # optimizer weight decay 5e-4\nwarmup_epochs: 3.0  # warmup epochs (fractions ok)\nwarmup_momentum: 0.8  # warmup initial momentum\nwarmup_bias_lr: 0.1  # warmup initial bias lr\nbox: 0.05  # box loss gain\ncls: 0.5  # cls loss gain\ncls_pw: 1.0  # cls BCELoss positive_weight\nobj: 1.0  # obj loss gain (scale with pixels)\nobj_pw: 1.0  # obj BCELoss positive_weight\niou_t: 0.20  # IoU training threshold\nanchor_t: 4.0  # anchor-multiple threshold\n# anchors: 3  # anchors per output layer (0 to ignore)\nfl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)\nhsv_h: 0.015  # image HSV-Hue augmentation (fraction)\nhsv_s: 0.7  # image HSV-Saturation augmentation (fraction)\nhsv_v: 0.4  # image HSV-Value augmentation (fraction)\ndegrees: 0.0  # image rotation (+/- deg)\ntranslate: 0.1  # image translation (+/- fraction)\nscale: 0.5  # image scale (+/- gain)\nshear: 0.0  # image shear (+/- deg)\nperspective: 0.0  # image perspective (+/- fraction), range 0-0.001\nflipud: 0.0  # image flip up-down (probability)\nfliplr: 0.5  # image flip left-right (probability)\nmosaic: 1.0  # image mosaic (probability)\nmixup: 0.0  # image mixup (probability)\ncopy_paste: 0.0  # segment copy-paste (probability)\n"
  },
  {
    "path": "data/mask.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# COCO 2017 dataset http://cocodataset.org\n# Example usage: python train.py --data mask.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── coco  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\ntrain: E:/dataset/mask/train.txt\nval: E:/dataset/mask/valid.txt \ntest: E:/dataset/mask/valid.txt  \n\n# Classes\nnc: 2  # number of classes\nnames: ['head', 'mask']  # class names\n\n\n# Download script/URL (optional)\n"
  },
  {
    "path": "data/scripts/download_weights.sh",
    "content": "#!/bin/bash\n# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Download latest models from https://github.com/ultralytics/yolov5/releases\n# Example usage: bash path/to/download_weights.sh\n# parent\n# └── yolov5\n#     ├── yolov5s.pt  ← downloads here\n#     ├── yolov5m.pt\n#     └── ...\n\npython - <<EOF\nfrom utils.downloads import attempt_download\n\nmodels = ['n', 's', 'm', 'l', 'x']\nmodels.extend([x + '6' for x in models])  # add P6 models\n\nfor x in models:\n    attempt_download(f'yolov5{x}.pt')\n\nEOF\n"
  },
  {
    "path": "data/scripts/get_coco.sh",
    "content": "#!/bin/bash\n# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Download COCO 2017 dataset http://cocodataset.org\n# Example usage: bash data/scripts/get_coco.sh\n# parent\n# ├── yolov5\n# └── datasets\n#     └── coco  ← downloads here\n\n# Download/unzip labels\nd='../datasets' # unzip directory\nurl=https://github.com/ultralytics/yolov5/releases/download/v1.0/\nf='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB\necho 'Downloading' $url$f ' ...'\ncurl -L $url$f -o $f && unzip -q $f -d $d && rm $f &\n\n# Download/unzip images\nd='../datasets/coco/images' # unzip directory\nurl=http://images.cocodataset.org/zips/\nf1='train2017.zip' # 19G, 118k images\nf2='val2017.zip'   # 1G, 5k images\nf3='test2017.zip'  # 7G, 41k images (optional)\nfor f in $f1 $f2; do\n  echo 'Downloading' $url$f '...'\n  curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &\ndone\nwait # finish background tasks\n"
  },
  {
    "path": "data/scripts/get_coco128.sh",
    "content": "#!/bin/bash\n# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)\n# Example usage: bash data/scripts/get_coco128.sh\n# parent\n# ├── yolov5\n# └── datasets\n#     └── coco128  ← downloads here\n\n# Download/unzip images and labels\nd='../datasets' # unzip directory\nurl=https://github.com/ultralytics/yolov5/releases/download/v1.0/\nf='coco128.zip' # or 'coco128-segments.zip', 68 MB\necho 'Downloading' $url$f ' ...'\ncurl -L $url$f -o $f && unzip -q $f -d $d && rm $f &\n\nwait # finish background tasks\n"
  },
  {
    "path": "data/voc.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC\n# Example usage: python train.py --data VOC.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── VOC  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/VOC\ntrain: # train images (relative to 'path')  16551 images\n  - images/train2012\n  - images/train2007\n  - images/val2012\n  - images/val2007\nval: # val images (relative to 'path')  4952 images\n  - images/test2007\ntest: # test images (optional)\n  - images/test2007\n\n# Classes\nnc: 20  # number of classes\nnames: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',\n        'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']  # class names\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  import xml.etree.ElementTree as ET\n\n  from tqdm import tqdm\n  from utils.general import download, Path\n\n\n  def convert_label(path, lb_path, year, image_id):\n      def convert_box(size, box):\n          dw, dh = 1. / size[0], 1. / size[1]\n          x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]\n          return x * dw, y * dh, w * dw, h * dh\n\n      in_file = open(path / f'VOC{year}/Annotations/{image_id}.xml')\n      out_file = open(lb_path, 'w')\n      tree = ET.parse(in_file)\n      root = tree.getroot()\n      size = root.find('size')\n      w = int(size.find('width').text)\n      h = int(size.find('height').text)\n\n      for obj in root.iter('object'):\n          cls = obj.find('name').text\n          if cls in yaml['names'] and not int(obj.find('difficult').text) == 1:\n              xmlbox = obj.find('bndbox')\n              bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])\n              cls_id = yaml['names'].index(cls)  # class id\n              out_file.write(\" \".join([str(a) for a in (cls_id, *bb)]) + '\\n')\n\n\n  # Download\n  dir = Path(yaml['path'])  # dataset root dir\n  url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'\n  urls = [url + 'VOCtrainval_06-Nov-2007.zip',  # 446MB, 5012 images\n          url + 'VOCtest_06-Nov-2007.zip',  # 438MB, 4953 images\n          url + 'VOCtrainval_11-May-2012.zip']  # 1.95GB, 17126 images\n  download(urls, dir=dir / 'images', delete=False)\n\n  # Convert\n  path = dir / f'images/VOCdevkit'\n  for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):\n      imgs_path = dir / 'images' / f'{image_set}{year}'\n      lbs_path = dir / 'labels' / f'{image_set}{year}'\n      imgs_path.mkdir(exist_ok=True, parents=True)\n      lbs_path.mkdir(exist_ok=True, parents=True)\n\n      image_ids = open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt').read().strip().split()\n      for id in tqdm(image_ids, desc=f'{image_set}{year}'):\n          f = path / f'VOC{year}/JPEGImages/{id}.jpg'  # old img path\n          lb_path = (lbs_path / f.name).with_suffix('.txt')  # new label path\n          f.rename(imgs_path / f.name)  # move image\n          convert_label(path, lb_path, year, id)  # convert labels to YOLO format\n"
  },
  {
    "path": "data/xView.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# xView 2018 dataset https://challenge.xviewdataset.org\n# --------  DOWNLOAD DATA MANUALLY from URL above and unzip to 'datasets/xView' before running train command!  --------\n# Example usage: python train.py --data xView.yaml\n# parent\n# ├── yolov5\n# └── datasets\n#     └── xView  ← downloads here\n\n\n# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]\npath: ../datasets/xView  # dataset root dir\ntrain: images/autosplit_train.txt  # train images (relative to 'path') 90% of 847 train images\nval: images/autosplit_val.txt  # train images (relative to 'path') 10% of 847 train images\n\n# Classes\nnc: 60  # number of classes\nnames: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',\n        'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',\n        'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',\n        'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',\n        'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',\n        'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',\n        'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',\n        'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',\n        'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower']  # class names\n\n\n# Download script/URL (optional) ---------------------------------------------------------------------------------------\ndownload: |\n  import json\n  import os\n  from pathlib import Path\n\n  import numpy as np\n  from PIL import Image\n  from tqdm import tqdm\n\n  from utils.datasets import autosplit\n  from utils.general import download, xyxy2xywhn\n\n\n  def convert_labels(fname=Path('xView/xView_train.geojson')):\n      # Convert xView geoJSON labels to YOLO format\n      path = fname.parent\n      with open(fname) as f:\n          print(f'Loading {fname}...')\n          data = json.load(f)\n\n      # Make dirs\n      labels = Path(path / 'labels' / 'train')\n      os.system(f'rm -rf {labels}')\n      labels.mkdir(parents=True, exist_ok=True)\n\n      # xView classes 11-94 to 0-59\n      xview_class2index = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11,\n                           12, 13, 14, 15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1,\n                           29, 30, 31, 32, 33, 34, 35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46,\n                           47, 48, 49, -1, 50, 51, -1, 52, -1, -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]\n\n      shapes = {}\n      for feature in tqdm(data['features'], desc=f'Converting {fname}'):\n          p = feature['properties']\n          if p['bounds_imcoords']:\n              id = p['image_id']\n              file = path / 'train_images' / id\n              if file.exists():  # 1395.tif missing\n                  try:\n                      box = np.array([int(num) for num in p['bounds_imcoords'].split(\",\")])\n                      assert box.shape[0] == 4, f'incorrect 
box shape {box.shape[0]}'\n                      cls = p['type_id']\n                      cls = xview_class2index[int(cls)]  # xView class to 0-60\n                      assert 59 >= cls >= 0, f'incorrect class index {cls}'\n\n                      # Write YOLO label\n                      if id not in shapes:\n                          shapes[id] = Image.open(file).size\n                      box = xyxy2xywhn(box[None].astype(np.float), w=shapes[id][0], h=shapes[id][1], clip=True)\n                      with open((labels / id).with_suffix('.txt'), 'a') as f:\n                          f.write(f\"{cls} {' '.join(f'{x:.6f}' for x in box[0])}\\n\")  # write label.txt\n                  except Exception as e:\n                      print(f'WARNING: skipping one label for {file}: {e}')\n\n\n  # Download manually from https://challenge.xviewdataset.org\n  dir = Path(yaml['path'])  # dataset root dir\n  # urls = ['https://d307kc0mrhucc3.cloudfront.net/train_labels.zip',  # train labels\n  #         'https://d307kc0mrhucc3.cloudfront.net/train_images.zip',  # 15G, 847 train images\n  #         'https://d307kc0mrhucc3.cloudfront.net/val_images.zip']  # 5G, 282 val images (no labels)\n  # download(urls, dir=dir, delete=False)\n\n  # Convert labels\n  convert_labels(dir / 'xView_train.geojson')\n\n  # Move images\n  images = Path(dir / 'images')\n  images.mkdir(parents=True, exist_ok=True)\n  Path(dir / 'train_images').rename(dir / 'images' / 'train')\n  Path(dir / 'val_images').rename(dir / 'images' / 'val')\n\n  # Split\n  autosplit(dir / 'images' / 'train')\n"
  },
  {
    "path": "detect.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nRun inference on images, videos, directories, streams, etc.\n\nUsage:\n    $ python path/to/detect.py --weights yolov5s.pt --source 0  # webcam\n                                                             img.jpg  # image\n                                                             vid.mp4  # video\n                                                             path/  # directory\n                                                             path/*.jpg  # glob\n                                                             'https://youtu.be/Zgi9g1ksQHc'  # YouTube\n                                                             'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream\n\"\"\"\n\nimport argparse\nimport os\nimport sys\nfrom pathlib import Path\n\nimport cv2\nimport torch\nimport torch.backends.cudnn as cudnn\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[0]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\nROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative\n\nfrom models.common import DetectMultiBackend\nfrom utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams\nfrom utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,\n                           increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)\nfrom utils.plots import Annotator, colors, save_one_box\nfrom utils.torch_utils import select_device, time_sync\n\n\n@torch.no_grad()\ndef run(weights=ROOT / 'yolov5s.pt',  # model.pt path(s)\n        source=ROOT / 'data/images',  # file/dir/URL/glob, 0 for webcam\n        imgsz=(640, 640),  # inference size (height, width)\n        conf_thres=0.25,  # confidence threshold\n        iou_thres=0.45,  # NMS IOU threshold\n        max_det=1000,  # maximum detections per image\n        device='',  # cuda device, i.e. 
0 or 0,1,2,3 or cpu\n        view_img=False,  # show results\n        save_txt=False,  # save results to *.txt\n        save_conf=False,  # save confidences in --save-txt labels\n        save_crop=False,  # save cropped prediction boxes\n        nosave=False,  # do not save images/videos\n        classes=None,  # filter by class: --class 0, or --class 0 2 3\n        agnostic_nms=False,  # class-agnostic NMS\n        augment=False,  # augmented inference\n        visualize=False,  # visualize features\n        update=False,  # update all models\n        project=ROOT / 'runs/detect',  # save results to project/name\n        name='exp',  # save results to project/name\n        exist_ok=False,  # existing project/name ok, do not increment\n        line_thickness=3,  # bounding box thickness (pixels)\n        hide_labels=False,  # hide labels\n        hide_conf=False,  # hide confidences\n        half=False,  # use FP16 half-precision inference\n        dnn=False,  # use OpenCV DNN for ONNX inference\n        ):\n    source = str(source)\n    save_img = not nosave and not source.endswith('.txt')  # save inference images\n    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)\n    is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))\n    webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)\n    if is_url and is_file:\n        source = check_file(source)  # download\n\n    # Directories\n    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run\n    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir\n\n    # Load model\n    device = select_device(device)\n    model = DetectMultiBackend(weights, device=device, dnn=dnn)\n    stride, names, pt, jit, onnx, engine = model.stride, model.names, model.pt, model.jit, model.onnx, model.engine\n    imgsz = check_img_size(imgsz, s=stride)  # check image size\n\n    # Half\n    half &= (pt or jit or engine) and device.type != 'cpu'  # half precision only supported by PyTorch on CUDA\n    if pt or jit:\n        model.model.half() if half else model.model.float()\n\n    # Dataloader\n    if webcam:\n        view_img = check_imshow()\n        cudnn.benchmark = True  # set True to speed up constant image size inference\n        dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)\n        bs = len(dataset)  # batch_size\n    else:\n        dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)\n        bs = 1  # batch_size\n    vid_path, vid_writer = [None] * bs, [None] * bs\n\n    # Run inference\n    model.warmup(imgsz=(1, 3, *imgsz), half=half)  # warmup\n    dt, seen = [0.0, 0.0, 0.0], 0\n    for path, im, im0s, vid_cap, s in dataset:\n        t1 = time_sync()\n        im = torch.from_numpy(im).to(device)\n        im = im.half() if half else im.float()  # uint8 to fp16/32\n        im /= 255  # 0 - 255 to 0.0 - 1.0\n        if len(im.shape) == 3:\n            im = im[None]  # expand for batch dim\n        t2 = time_sync()\n        dt[0] += t2 - t1\n\n        # Inference\n        visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False\n        pred = model(im, augment=augment, visualize=visualize)\n        t3 = time_sync()\n        dt[1] += t3 - t2\n\n        # NMS\n        pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)\n        dt[2] += time_sync() - t3\n\n        # Second-stage classifier (optional)\n  
      # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)\n\n        # Process predictions\n        for i, det in enumerate(pred):  # per image\n            seen += 1\n            if webcam:  # batch_size >= 1\n                p, im0, frame = path[i], im0s[i].copy(), dataset.count\n                s += f'{i}: '\n            else:\n                p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)\n\n            p = Path(p)  # to Path\n            save_path = str(save_dir / p.name)  # im.jpg\n            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # im.txt\n            s += '%gx%g ' % im.shape[2:]  # print string\n            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh\n            imc = im0.copy() if save_crop else im0  # for save_crop\n            annotator = Annotator(im0, line_width=line_thickness, example=str(names))\n            if len(det):\n                # Rescale boxes from img_size to im0 size\n                det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()\n\n                # Print results\n                for c in det[:, -1].unique():\n                    n = (det[:, -1] == c).sum()  # detections per class\n                    s += f\"{n} {names[int(c)]}{'s' * (n > 1)}, \"  # add to string\n\n                # Write results\n                for *xyxy, conf, cls in reversed(det):\n                    if save_txt:  # Write to file\n                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh\n                        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format\n                        with open(txt_path + '.txt', 'a') as f:\n                            f.write(('%g ' * len(line)).rstrip() % line + '\\n')\n\n                    if save_img or save_crop or view_img:  # Add bbox to image\n                        c = int(cls)  # integer class\n                        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')\n                        annotator.box_label(xyxy, label, color=colors(c, True))\n                        if save_crop:\n                            save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)\n\n            # Print time (inference-only)\n            LOGGER.info(f'{s}Done. 
({t3 - t2:.3f}s)')\n\n            # Stream results\n            im0 = annotator.result()\n            if view_img:\n                cv2.imshow(str(p), im0)\n                cv2.waitKey(1)  # 1 millisecond\n\n            # Save results (image with detections)\n            if save_img:\n                if dataset.mode == 'image':\n                    cv2.imwrite(save_path, im0)\n                else:  # 'video' or 'stream'\n                    if vid_path[i] != save_path:  # new video\n                        vid_path[i] = save_path\n                        if isinstance(vid_writer[i], cv2.VideoWriter):\n                            vid_writer[i].release()  # release previous video writer\n                        if vid_cap:  # video\n                            fps = vid_cap.get(cv2.CAP_PROP_FPS)\n                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n                        else:  # stream\n                            fps, w, h = 30, im0.shape[1], im0.shape[0]\n                            save_path += '.mp4'\n                        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))\n                    vid_writer[i].write(im0)\n\n    # Print results\n    t = tuple(x / seen * 1E3 for x in dt)  # speeds per image\n    LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)\n    if save_txt or save_img:\n        s = f\"\\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}\" if save_txt else ''\n        LOGGER.info(f\"Results saved to {colorstr('bold', save_dir)}{s}\")\n    if update:\n        strip_optimizer(weights)  # update model (to fix SourceChangeWarning)\n\n\ndef parse_opt():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')\n    parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')\n    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')\n    parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')\n    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')\n    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')\n    parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu')\n    parser.add_argument('--view-img', action='store_true', help='show results')\n    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')\n    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')\n    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')\n    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')\n    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')\n    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')\n    parser.add_argument('--augment', action='store_true', help='augmented inference')\n    parser.add_argument('--visualize', action='store_true', help='visualize features')\n    parser.add_argument('--update', action='store_true', help='update all models')\n    parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')\n    parser.add_argument('--name', default='exp', help='save results to project/name')\n    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')\n    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')\n    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')\n    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')\n    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')\n    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')\n    opt = parser.parse_args()\n    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand\n    print_args(FILE.stem, opt)\n    return opt\n\n\ndef main(opt):\n    check_requirements(exclude=('tensorboard', 'thop'))\n    run(**vars(opt))\n\n\nif __name__ == \"__main__\":\n    opt = parse_opt()\n    main(opt)\n"
  },
  {
    "path": "export.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nExport a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit\n\nFormat                  | Example                   | `--include ...` argument\n---                     | ---                       | ---\nPyTorch                 | yolov5s.pt                | -\nTorchScript             | yolov5s.torchscript       | `torchscript`\nONNX                    | yolov5s.onnx              | `onnx`\nCoreML                  | yolov5s.mlmodel           | `coreml`\nTensorFlow SavedModel   | yolov5s_saved_model/      | `saved_model`\nTensorFlow GraphDef     | yolov5s.pb                | `pb`\nTensorFlow Lite         | yolov5s.tflite            | `tflite`\nTensorFlow.js           | yolov5s_web_model/        | `tfjs`\nTensorRT                | yolov5s.engine            | `engine`\n\nUsage:\n    $ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml saved_model pb tflite tfjs\n\nInference:\n    $ python path/to/detect.py --weights yolov5s.pt\n                                         yolov5s.torchscript\n                                         yolov5s.onnx\n                                         yolov5s.mlmodel  (under development)\n                                         yolov5s_saved_model\n                                         yolov5s.pb\n                                         yolov5s.tflite\n                                         yolov5s.engine\n\nTensorFlow.js:\n    $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example\n    $ npm install\n    $ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model\n    $ npm start\n\"\"\"\n\nimport argparse\nimport json\nimport os\nimport subprocess\nimport sys\nimport time\nfrom pathlib import Path\n\n# activate rknn hack\nif len(sys.argv)>=3 and '--rknpu' in sys.argv:\n    _index = sys.argv.index('--rknpu')\n    if sys.argv[_index+1].upper() in ['RK1808', 'RV1109', 'RV1126','RK3399PRO']:\n        os.environ['RKNN_model_hack'] = 'npu_1'\n    elif sys.argv[_index+1].upper() in ['RK3566', 'RK3568', 'RK3588','RK3588S']:\n        os.environ['RKNN_model_hack'] = 'npu_2'\n    else:\n        assert False,\"{} not recognized\".format(sys.argv[_index+1])\n\nimport torch\nimport torch.nn as nn\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[0]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\nROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative\n\nfrom models.common import Conv\nfrom models.experimental import attempt_load\nfrom models.yolo import Detect\nfrom utils.activations import SiLU\nfrom utils.datasets import LoadImages\nfrom utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args,\n                           url2file)\nfrom utils.torch_utils import select_device\n\n\ndef export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):\n    # YOLOv5 TorchScript model export\n    try:\n        LOGGER.info(f'\\n{prefix} starting export with torch {torch.__version__}...')\n        f = file.with_suffix('.torchscript')\n\n        ts = torch.jit.trace(model, im, strict=False)\n        d = {\"shape\": im.shape, \"stride\": int(max(model.stride)), \"names\": model.names}\n        extra_files = {'config.txt': json.dumps(d)}  # torch._C.ExtraFilesMap()\n        
(optimize_for_mobile(ts) if optimize else ts).save(str(f), _extra_files=extra_files)\n\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n    except Exception as e:\n        LOGGER.info(f'{prefix} export failure: {e}')\n\n\ndef export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')):\n    # YOLOv5 ONNX export\n    try:\n        check_requirements(('onnx',))\n        import onnx\n\n        LOGGER.info(f'\\n{prefix} starting export with onnx {onnx.__version__}...')\n        f = file.with_suffix('.onnx')\n\n        torch.onnx.export(model, im, f, verbose=False, opset_version=opset,\n                          training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,\n                          do_constant_folding=not train,\n                          input_names=['images'],\n                          output_names=['output'],\n                          dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)\n                                        'output': {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)\n                                        } if dynamic else None)\n\n        # Checks\n        model_onnx = onnx.load(f)  # load onnx model\n        onnx.checker.check_model(model_onnx)  # check onnx model\n        # LOGGER.info(onnx.helper.printable_graph(model_onnx.graph))  # print\n\n        # Simplify\n        if simplify:\n            try:\n                check_requirements(('onnx-simplifier',))\n                import onnxsim\n\n                LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')\n                model_onnx, check = onnxsim.simplify(\n                    model_onnx,\n                    dynamic_input_shape=dynamic,\n                    input_shapes={'images': list(im.shape)} if dynamic else None)\n                assert check, 'assert check failed'\n                onnx.save(model_onnx, f)\n            except Exception as e:\n                LOGGER.info(f'{prefix} simplifier failure: {e}')\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n        LOGGER.info(f\"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'\")\n    except Exception as e:\n        LOGGER.info(f'{prefix} export failure: {e}')\n\n\ndef export_coreml(model, im, file, prefix=colorstr('CoreML:')):\n    # YOLOv5 CoreML export\n    ct_model = None\n    try:\n        check_requirements(('coremltools',))\n        import coremltools as ct\n\n        LOGGER.info(f'\\n{prefix} starting export with coremltools {ct.__version__}...')\n        f = file.with_suffix('.mlmodel')\n\n        model.train()  # CoreML exports should be placed in model.train() mode\n        ts = torch.jit.trace(model, im, strict=False)  # TorchScript model\n        ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])\n        ct_model.save(f)\n\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n    except Exception as e:\n        LOGGER.info(f'\\n{prefix} export failure: {e}')\n\n    return ct_model\n\n\ndef export_saved_model(model, im, file, dynamic,\n                       tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,\n                       conf_thres=0.25, prefix=colorstr('TensorFlow saved_model:')):\n    # YOLOv5 TensorFlow saved_model export\n    keras_model = None\n    try:\n        import 
tensorflow as tf\n        from tensorflow import keras\n\n        from models.tf import TFDetect, TFModel\n\n        LOGGER.info(f'\\n{prefix} starting export with tensorflow {tf.__version__}...')\n        f = str(file).replace('.pt', '_saved_model')\n        batch_size, ch, *imgsz = list(im.shape)  # BCHW\n\n        tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)\n        im = tf.zeros((batch_size, *imgsz, 3))  # BHWC order for TensorFlow\n        y = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)\n        inputs = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)\n        outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)\n        keras_model = keras.Model(inputs=inputs, outputs=outputs)\n        keras_model.trainable = False\n        keras_model.summary()\n        keras_model.save(f, save_format='tf')\n\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n    except Exception as e:\n        LOGGER.info(f'\\n{prefix} export failure: {e}')\n\n    return keras_model\n\n\ndef export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDef:')):\n    # YOLOv5 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow\n    try:\n        import tensorflow as tf\n        from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\n\n        LOGGER.info(f'\\n{prefix} starting export with tensorflow {tf.__version__}...')\n        f = file.with_suffix('.pb')\n\n        m = tf.function(lambda x: keras_model(x))  # full model\n        m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))\n        frozen_func = convert_variables_to_constants_v2(m)\n        frozen_func.graph.as_graph_def()\n        tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)\n\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n    except Exception as e:\n        LOGGER.info(f'\\n{prefix} export failure: {e}')\n\n\ndef export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colorstr('TensorFlow Lite:')):\n    # YOLOv5 TensorFlow Lite export\n    try:\n        import tensorflow as tf\n\n        from models.tf import representative_dataset_gen\n\n        LOGGER.info(f'\\n{prefix} starting export with tensorflow {tf.__version__}...')\n        batch_size, ch, *imgsz = list(im.shape)  # BCHW\n        f = str(file).replace('.pt', '-fp16.tflite')\n\n        converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)\n        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]\n        converter.target_spec.supported_types = [tf.float16]\n        converter.optimizations = [tf.lite.Optimize.DEFAULT]\n        if int8:\n            dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False)  # representative data\n            converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib)\n            converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\n            converter.target_spec.supported_types = []\n            converter.inference_input_type = tf.uint8  # or tf.int8\n            converter.inference_output_type = tf.uint8  # or tf.int8\n            converter.experimental_new_quantizer = False\n            f = str(file).replace('.pt', '-int8.tflite')\n\n  
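      # run the conversion and write the resulting flatbuffer to disk\n  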
      tflite_model = converter.convert()\n        open(f, \"wb\").write(tflite_model)\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n\n    except Exception as e:\n        LOGGER.info(f'\\n{prefix} export failure: {e}')\n\n\ndef export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):\n    # YOLOv5 TensorFlow.js export\n    try:\n        check_requirements(('tensorflowjs',))\n        import re\n\n        import tensorflowjs as tfjs\n\n        LOGGER.info(f'\\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')\n        f = str(file).replace('.pt', '_web_model')  # js dir\n        f_pb = file.with_suffix('.pb')  # *.pb path\n        f_json = f + '/model.json'  # *.json path\n\n        cmd = f\"tensorflowjs_converter --input_format=tf_frozen_model \" \\\n              f\"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}\"\n        subprocess.run(cmd, shell=True)\n\n        json = open(f_json).read()\n        with open(f_json, 'w') as j:  # sort JSON Identity_* in ascending order\n            subst = re.sub(\n                r'{\"outputs\": {\"Identity.?.?\": {\"name\": \"Identity.?.?\"}, '\n                r'\"Identity.?.?\": {\"name\": \"Identity.?.?\"}, '\n                r'\"Identity.?.?\": {\"name\": \"Identity.?.?\"}, '\n                r'\"Identity.?.?\": {\"name\": \"Identity.?.?\"}}}',\n                r'{\"outputs\": {\"Identity\": {\"name\": \"Identity\"}, '\n                r'\"Identity_1\": {\"name\": \"Identity_1\"}, '\n                r'\"Identity_2\": {\"name\": \"Identity_2\"}, '\n                r'\"Identity_3\": {\"name\": \"Identity_3\"}}}',\n                json)\n            j.write(subst)\n\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n    except Exception as e:\n        LOGGER.info(f'\\n{prefix} export failure: {e}')\n\n\ndef export_engine(model, im, file, train, half, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):\n    try:\n        check_requirements(('tensorrt',))\n        import tensorrt as trt\n\n        opset = (12, 13)[trt.__version__[0] == '8']  # test on TensorRT 7.x and 8.x\n        export_onnx(model, im, file, opset, train, False, simplify)\n        onnx = file.with_suffix('.onnx')\n        assert onnx.exists(), f'failed to export ONNX file: {onnx}'\n\n        LOGGER.info(f'\\n{prefix} starting export with TensorRT {trt.__version__}...')\n        f = file.with_suffix('.engine')  # TensorRT engine file\n        logger = trt.Logger(trt.Logger.INFO)\n        if verbose:\n            logger.min_severity = trt.Logger.Severity.VERBOSE\n\n        builder = trt.Builder(logger)\n        config = builder.create_builder_config()\n        config.max_workspace_size = workspace * 1 << 30\n\n        flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))\n        network = builder.create_network(flag)\n        parser = trt.OnnxParser(network, logger)\n        if not parser.parse_from_file(str(onnx)):\n            raise RuntimeError(f'failed to load ONNX file: {onnx}')\n\n        inputs = [network.get_input(i) for i in range(network.num_inputs)]\n        outputs = [network.get_output(i) for i in range(network.num_outputs)]\n        LOGGER.info(f'{prefix} Network Description:')\n        for inp in inputs:\n            LOGGER.info(f'{prefix}\\tinput \"{inp.name}\" with shape {inp.shape} and dtype {inp.dtype}')\n        for out in outputs:\n            LOGGER.info(f'{prefix}\\toutput \"{out.name}\" with shape 
{out.shape} and dtype {out.dtype}')\n\n        half &= builder.platform_has_fast_fp16\n        LOGGER.info(f'{prefix} building FP{16 if half else 32} engine in {f}')\n        if half:\n            config.set_flag(trt.BuilderFlag.FP16)\n        with builder.build_engine(network, config) as engine, open(f, 'wb') as t:\n            t.write(engine.serialize())\n        LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')\n\n    except Exception as e:\n        LOGGER.info(f'\\n{prefix} export failure: {e}')\n\n\n@torch.no_grad()\ndef run(data=ROOT / 'data/coco128.yaml',  # 'dataset.yaml path'\n        weights=ROOT / 'yolov5s.pt',  # weights path\n        imgsz=(640, 640),  # image (height, width)\n        batch_size=1,  # batch size\n        device='cpu',  # cuda device, i.e. 0 or 0,1,2,3 or cpu\n        include=('torchscript', 'onnx', 'coreml'),  # include formats\n        half=False,  # FP16 half-precision export\n        inplace=False,  # set YOLOv5 Detect() inplace=True\n        train=False,  # model.train() mode\n        optimize=False,  # TorchScript: optimize for mobile\n        int8=False,  # CoreML/TF INT8 quantization\n        dynamic=False,  # ONNX/TF: dynamic axes\n        simplify=False,  # ONNX: simplify model\n        opset=14,  # ONNX: opset version\n        verbose=False,  # TensorRT: verbose log\n        workspace=4,  # TensorRT: workspace size (GB)\n        nms=False,  # TF: add NMS to model\n        agnostic_nms=False,  # TF: add agnostic NMS to model\n        topk_per_class=100,  # TF.js NMS: topk per class to keep\n        topk_all=100,  # TF.js NMS: topk for all classes to keep\n        iou_thres=0.45,  # TF.js NMS: IoU threshold\n        conf_thres=0.25,  # TF.js NMS: confidence threshold\n        rknn_friendly = False,\n        ):\n    t = time.time()\n    include = [x.lower() for x in include]\n    tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs'))  # TensorFlow exports\n    imgsz *= 2 if len(imgsz) == 1 else 1  # expand\n    file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights)\n\n    # Load PyTorch model\n    device = select_device(device)\n    assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. 
use --device 0'\n    model = attempt_load(weights, map_location=device, inplace=True, fuse=True)  # load FP32 model\n    nc, names = model.nc, model.names  # number of classes, class names\n\n    # Input\n    gs = int(max(model.stride))  # grid size (max stride)\n    imgsz = [check_img_size(x, gs) for x in imgsz]  # verify img_size are gs-multiples\n    im = torch.zeros(batch_size, 3, *imgsz).to(device)  # image size(1,3,320,192) BCHW iDetection\n\n    # Update model\n    if half:\n        im, model = im.half(), model.half()  # to FP16\n    model.train() if train else model.eval()  # training mode = no Detect() layer grid construction\n    for k, m in model.named_modules():\n        if isinstance(m, Conv):  # assign export-friendly activations\n            if isinstance(m.act, nn.SiLU):\n                m.act = SiLU()\n        elif isinstance(m, Detect):\n            m.inplace = inplace\n            m.onnx_dynamic = dynamic\n            # m.forward = m.forward_export  # assign forward (optional)\n\n        if os.getenv('RKNN_model_hack', '0') == 'npu_1':\n            from models.common import Focus\n            from models.common_rk_plug_in import surrogate_focus\n            if isinstance(model.model[0], Focus):\n                # For yolo v5 version\n                surrogate_focous = surrogate_focus(int(model.model[0].conv.conv.weight.shape[1]/4),\n                                                model.model[0].conv.conv.weight.shape[0],\n                                                k=tuple(model.model[0].conv.conv.weight.shape[2:4]),\n                                                s=model.model[0].conv.conv.stride,\n                                                p=model.model[0].conv.conv.padding,\n                                                g=model.model[0].conv.conv.groups,\n                                                act=True)\n                surrogate_focous.conv.conv.weight = model.model[0].conv.conv.weight\n                surrogate_focous.conv.conv.bias = model.model[0].conv.conv.bias\n                surrogate_focous.conv.act = model.model[0].conv.act\n                temp_i = model.model[0].i\n                temp_f = model.model[0].f\n\n                model.model[0] = surrogate_focous\n                model.model[0].i = temp_i\n                model.model[0].f = temp_f\n                model.model[0].eval()\n            elif isinstance(model.model[0], Conv) and model.model[0].conv.kernel_size == (6, 6):\n                # For yolo v6 version\n                surrogate_focous = surrogate_focus(model.model[0].conv.weight.shape[1],\n                                                model.model[0].conv.weight.shape[0],\n                                                k=(3,3), # 6/2, 6/2\n                                                s=1,\n                                                p=(1,1), # 2/2, 2/2\n                                                g=model.model[0].conv.groups,\n                                                act=hasattr(model.model[0], 'act'))\n                surrogate_focous.conv.conv.weight[:,:3,:,:] = model.model[0].conv.weight[:,:,::2,::2]\n                surrogate_focous.conv.conv.weight[:,3:6,:,:] = model.model[0].conv.weight[:,:,1::2,::2]\n                surrogate_focous.conv.conv.weight[:,6:9,:,:] = model.model[0].conv.weight[:,:,::2,1::2]\n                surrogate_focous.conv.conv.weight[:,9:,:,:] = model.model[0].conv.weight[:,:,1::2,1::2]\n                surrogate_focous.conv.conv.bias = model.model[0].conv.bias\n               
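# carry over the activation and layer bookkeeping (i, f) before swapping in the surrogate module\n               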
 surrogate_focous.conv.act = model.model[0].act\n                temp_i = model.model[0].i\n                temp_f = model.model[0].f\n\n                model.model[0] = surrogate_focous\n                model.model[0].i = temp_i\n                model.model[0].f = temp_f\n                model.model[0].eval()\n\n    # save anchors\n    if isinstance(model.model[-1], Detect):\n        print('---> save anchors for RKNN')\n        RK_anchors = model.model[-1].stride.reshape(3,1).repeat(1,3).reshape(-1,1)* model.model[-1].anchors.reshape(9,2)\n        RK_anchors = RK_anchors.tolist()\n        print(RK_anchors)\n        with open(file.with_suffix('.anchors.txt'), 'w') as anf:\n            anf.write(str(RK_anchors))\n\n    for _ in range(2):\n        y = model(im)  # dry runs\n    LOGGER.info(f\"\\n{colorstr('PyTorch:')} starting from {file} ({file_size(file):.1f} MB)\")\n\n    # Exports\n    if 'torchscript' in include:\n        export_torchscript(model, im, file, optimize)\n    if 'onnx' in include:\n        export_onnx(model, im, file, opset, train, dynamic, simplify)\n    if 'engine' in include:\n        export_engine(model, im, file, train, half, simplify, workspace, verbose)\n    if 'coreml' in include:\n        export_coreml(model, im, file)\n\n    # TensorFlow Exports\n    if any(tf_exports):\n        pb, tflite, tfjs = tf_exports[1:]\n        assert not (tflite and tfjs), 'TFLite and TF.js models must be exported separately, please pass only one type.'\n        model = export_saved_model(model, im, file, dynamic, tf_nms=nms or agnostic_nms or tfjs,\n                                   agnostic_nms=agnostic_nms or tfjs, topk_per_class=topk_per_class, topk_all=topk_all,\n                                   conf_thres=conf_thres, iou_thres=iou_thres)  # keras model\n        if pb or tfjs:  # pb prerequisite to tfjs\n            export_pb(model, im, file)\n        if tflite:\n            export_tflite(model, im, file, int8=int8, data=data, ncalib=100)\n        if tfjs:\n            export_tfjs(model, im, file)\n\n    # Finish\n    LOGGER.info(f'\\nExport complete ({time.time() - t:.2f}s)'\n                f\"\\nResults saved to {colorstr('bold', file.parent.resolve())}\"\n                f'\\nVisualize with https://netron.app')\n\n\ndef parse_opt():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')\n    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')\n    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')\n    parser.add_argument('--batch-size', type=int, default=1, help='batch size')\n    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu')\n    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')\n    parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')\n    parser.add_argument('--train', action='store_true', help='model.train() mode')\n    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')\n    parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')\n    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes')\n    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')\n    parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version')\n    parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')\n    parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')\n    parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')\n    parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')\n    parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')\n    parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')\n    parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')\n    parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')\n    parser.add_argument('--include', nargs='+',\n                        default=['torchscript'],\n                        help='available formats are (torchscript, onnx, engine, coreml, saved_model, pb, tflite, tfjs)')\n    parser.add_argument('--rknpu', default=None, help='RKNN npu platform')\n    opt = parser.parse_args()\n    print_args(FILE.stem, opt)\n    return opt\n\n\ndef main(opt):\n    for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]):\n        run(**vars(opt))\n\n\nif __name__ == \"__main__\":\n    opt = parse_opt()\n    del opt.rknpu\n    main(opt)\n"
  },
  {
    "path": "hubconf.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/\n\nUsage:\n    import torch\n    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')\n    model = torch.hub.load('ultralytics/yolov5:master', 'custom', 'path/to/yolov5s.onnx')  # file from branch\n\"\"\"\n\nimport torch\n\n\ndef _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    \"\"\"Creates a specified YOLOv5 model\n\n    Arguments:\n        name (str): name of model, i.e. 'yolov5s'\n        pretrained (bool): load pretrained weights into the model\n        channels (int): number of input channels\n        classes (int): number of model classes\n        autoshape (bool): apply YOLOv5 .autoshape() wrapper to model\n        verbose (bool): print all information to screen\n        device (str, torch.device, None): device to use for model parameters\n\n    Returns:\n        YOLOv5 pytorch model\n    \"\"\"\n    from pathlib import Path\n\n    from models.common import AutoShape, DetectMultiBackend\n    from models.yolo import Model\n    from utils.downloads import attempt_download\n    from utils.general import check_requirements, intersect_dicts, set_logging\n    from utils.torch_utils import select_device\n\n    check_requirements(exclude=('tensorboard', 'thop', 'opencv-python'))\n    set_logging(verbose=verbose)\n\n    name = Path(name)\n    path = name.with_suffix('.pt') if name.suffix == '' else name  # checkpoint path\n    try:\n        device = select_device(('0' if torch.cuda.is_available() else 'cpu') if device is None else device)\n\n        if pretrained and channels == 3 and classes == 80:\n            model = DetectMultiBackend(path, device=device)  # download/load FP32 model\n            # model = models.experimental.attempt_load(path, map_location=device)  # download/load FP32 model\n        else:\n            cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0]  # model.yaml path\n            model = Model(cfg, channels, classes)  # create model\n            if pretrained:\n                ckpt = torch.load(attempt_download(path), map_location=device)  # load\n                csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32\n                csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors'])  # intersect\n                model.load_state_dict(csd, strict=False)  # load\n                if len(ckpt['model'].names) == classes:\n                    model.names = ckpt['model'].names  # set class names attribute\n        if autoshape:\n            model = AutoShape(model)  # for file/URI/PIL/cv2/np inputs and NMS\n        return model.to(device)\n\n    except Exception as e:\n        help_url = 'https://github.com/ultralytics/yolov5/issues/36'\n        s = 'Cache may be out of date, try `force_reload=True`. See %s for help.' 
% help_url\n        raise Exception(s) from e\n\n\ndef custom(path='path/to/model.pt', autoshape=True, verbose=True, device=None):\n    # YOLOv5 custom or local model\n    return _create(path, autoshape=autoshape, verbose=verbose, device=device)\n\n\ndef yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-nano model https://github.com/ultralytics/yolov5\n    return _create('yolov5n', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-small model https://github.com/ultralytics/yolov5\n    return _create('yolov5s', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-medium model https://github.com/ultralytics/yolov5\n    return _create('yolov5m', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-large model https://github.com/ultralytics/yolov5\n    return _create('yolov5l', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-xlarge model https://github.com/ultralytics/yolov5\n    return _create('yolov5x', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-nano-P6 model https://github.com/ultralytics/yolov5\n    return _create('yolov5n6', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5\n    return _create('yolov5s6', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5\n    return _create('yolov5m6', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5\n    return _create('yolov5l6', pretrained, channels, classes, autoshape, verbose, device)\n\n\ndef yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):\n    # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5\n    return _create('yolov5x6', pretrained, channels, classes, autoshape, verbose, device)\n\n\nif __name__ == '__main__':\n    model = _create(name='yolov5s', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True)  # pretrained\n    # model = custom(path='path/to/model.pt')  # custom\n\n    # Verify inference\n    from pathlib import Path\n\n    import cv2\n    import numpy as np\n    from PIL import Image\n\n    imgs = ['data/images/zidane.jpg',  # filename\n            Path('data/images/zidane.jpg'),  # Path\n            'https://ultralytics.com/images/zidane.jpg',  # URI\n            cv2.imread('data/images/bus.jpg')[:, :, ::-1],  # OpenCV\n            Image.open('data/images/bus.jpg'),  # PIL\n            np.zeros((320, 640, 3))]  # numpy\n\n    results = model(imgs, size=320)  # 
batched inference\n    results.print()\n    results.save()\n"
  },
  {
    "path": "models/__init__.py",
    "content": ""
  },
  {
    "path": "models/common.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nCommon modules\n\"\"\"\nimport os\nimport json\nimport math\nimport platform\nimport warnings\nfrom collections import OrderedDict, namedtuple\nfrom copy import copy\nfrom pathlib import Path\n\nimport cv2\nimport numpy as np\nimport pandas as pd\nimport requests\nimport torch\nimport torch.nn as nn\nfrom PIL import Image\nfrom torch.cuda import amp\n\nfrom utils.datasets import exif_transpose, letterbox\nfrom utils.general import (LOGGER, check_requirements, check_suffix, colorstr, increment_path, make_divisible,\n                           non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)\nfrom utils.plots import Annotator, colors, save_one_box\nfrom utils.torch_utils import copy_attr, time_sync\n\n\ndef autopad(k, p=None):  # kernel, padding\n    # Pad to 'same'\n    if p is None:\n        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad\n    return p\n\n\nclass Conv(nn.Module):\n    # Standard convolution\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__()\n        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)\n        self.bn = nn.BatchNorm2d(c2)\n        self.act = nn.ReLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())\n\n    def forward(self, x):\n        return self.act(self.bn(self.conv(x)))\n\n    def forward_fuse(self, x):\n        return self.act(self.conv(x))\n\n\nclass DWConv(Conv):\n    # Depth-wise convolution class\n    def __init__(self, c1, c2, k=1, s=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)\n\n\nclass TransformerLayer(nn.Module):\n    # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)\n    def __init__(self, c, num_heads):\n        super().__init__()\n        self.q = nn.Linear(c, c, bias=False)\n        self.k = nn.Linear(c, c, bias=False)\n        self.v = nn.Linear(c, c, bias=False)\n        self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)\n        self.fc1 = nn.Linear(c, c, bias=False)\n        self.fc2 = nn.Linear(c, c, bias=False)\n\n    def forward(self, x):\n        x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x\n        x = self.fc2(self.fc1(x)) + x\n        return x\n\n\nclass TransformerBlock(nn.Module):\n    # Vision Transformer https://arxiv.org/abs/2010.11929\n    def __init__(self, c1, c2, num_heads, num_layers):\n        super().__init__()\n        self.conv = None\n        if c1 != c2:\n            self.conv = Conv(c1, c2)\n        self.linear = nn.Linear(c2, c2)  # learnable position embedding\n        self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))\n        self.c2 = c2\n\n    def forward(self, x):\n        if self.conv is not None:\n            x = self.conv(x)\n        b, _, w, h = x.shape\n        p = x.flatten(2).permute(2, 0, 1)\n        return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)\n\n\nclass Bottleneck(nn.Module):\n    # Standard bottleneck\n    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c_, c2, 3, 1, g=g)\n        self.add = shortcut and c1 == c2\n\n    def forward(self, x):\n    
    return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass BottleneckCSP(nn.Module):\n    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)\n        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)\n        self.cv4 = Conv(2 * c_, c2, 1, 1)\n        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)\n        self.act = nn.ReLU()\n        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))\n\n    def forward(self, x):\n        y1 = self.cv3(self.m(self.cv1(x)))\n        y2 = self.cv2(x)\n        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))\n\n\nclass C3(nn.Module):\n    # CSP Bottleneck with 3 convolutions\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c1, c_, 1, 1)\n        self.cv3 = Conv(2 * c_, c2, 1)  # act=FReLU(c2)\n        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))\n        # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])\n\n    def forward(self, x):\n        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))\n\n\nclass C3TR(C3):\n    # C3 module with TransformerBlock()\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):\n        super().__init__(c1, c2, n, shortcut, g, e)\n        c_ = int(c2 * e)\n        self.m = TransformerBlock(c_, c_, 4, n)\n\n\nclass C3SPP(C3):\n    # C3 module with SPP()\n    def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):\n        super().__init__(c1, c2, n, shortcut, g, e)\n        c_ = int(c2 * e)\n        self.m = SPP(c_, c_, k)\n\n\nclass C3Ghost(C3):\n    # C3 module with GhostBottleneck()\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):\n        super().__init__(c1, c2, n, shortcut, g, e)\n        c_ = int(c2 * e)  # hidden channels\n        self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))\n\nif os.getenv('RKNN_model_hack', '0') == '0':\n    class SPP(nn.Module):\n        # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729\n        def __init__(self, c1, c2, k=(5, 9, 13)):\n            super().__init__()\n            c_ = c1 // 2  # hidden channels\n            self.cv1 = Conv(c1, c_, 1, 1)\n            self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)\n            self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])\n\n        def forward(self, x):\n            x = self.cv1(x)\n            with warnings.catch_warnings():\n                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning\n                return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))\nelif os.getenv('RKNN_model_hack', '0') in ['npu_1', 'npu_2']:\n    # TODO remove this hack when rknn-toolkit1/2 add this optimize rules\n    class SPP(nn.Module):\n        def __init__(self, c1, c2, k=(5, 9, 13)):\n            super().__init__()\n            c_ = c1 // 2  # hidden channels\n            self.cv1 = Conv(c1, c_, 1, 1)\n            
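# cv2 fuses the input with one pooled feature map per kernel size in k\n            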
self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)\n            self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])\n            for value in k:\n                assert (value%2 == 1) and (value!= 1), \"value in [{}] only support odd number for RKNN model hack\"\n\n        def forward(self, x):\n            x = self.cv1(x)\n            with warnings.catch_warnings():\n                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning\n                y = [x]\n                for maxpool in self.m:\n                    kernel_size = maxpool.kernel_size\n                    m = x\n                    for i in range(math.floor(kernel_size/2)):\n                        m = torch.nn.functional.max_pool2d(m, 3, 1, 1)\n                    y = [*y, m]\n            return self.cv2(torch.cat(y, 1))\n\nif os.getenv('RKNN_model_hack', '0') in ['0','npu_2']:\n    class SPPF(nn.Module):\n        # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher\n        def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))\n            super().__init__()\n            c_ = c1 // 2  # hidden channels\n            self.cv1 = Conv(c1, c_, 1, 1)\n            self.cv2 = Conv(c_ * 4, c2, 1, 1)\n            self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)\n\n        def forward(self, x):\n            x = self.cv1(x)\n            with warnings.catch_warnings():\n                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning\n                y1 = self.m(x)\n                y2 = self.m(y1)\n                return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))\nelif os.getenv('RKNN_model_hack', '0') == 'npu_1':\n    class SPPF(nn.Module):\n        # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher\n        def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))\n            super().__init__()\n            c_ = c1 // 2  # hidden channels\n            self.cv1 = Conv(c1, c_, 1, 1)\n            self.cv2 = Conv(c_ * 4, c2, 1, 1)\n            self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)\n\n        def forward(self, x):\n            x = self.cv1(x)\n            with warnings.catch_warnings():\n                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning\n                y1 = self.m(x)\n                y2 = self.m(y1)\n\n            with warnings.catch_warnings():\n                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning\n                y = [x]\n                kernel_size = self.m.kernel_size\n                _3x3_stack = math.floor(kernel_size/2)\n                for i in range(3):\n                    m = y[-1]\n                    for _ in range(_3x3_stack):\n                        m = torch.nn.functional.max_pool2d(m, 3, 1, 1)\n                    y = [*y, m]\n            return self.cv2(torch.cat(y, 1))\n\n\nclass Focus(nn.Module):\n    # Focus wh information into c-space\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__()\n        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)\n        # self.contract = Contract(gain=2)\n\n    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)\n        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))\n        # return self.conv(self.contract(x))\n\n\nclass GhostConv(nn.Module):\n    # 
Ghost Convolution https://github.com/huawei-noah/ghostnet\n    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out, kernel, stride, groups\n        super().__init__()\n        c_ = c2 // 2  # hidden channels\n        self.cv1 = Conv(c1, c_, k, s, None, g, act)\n        self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)\n\n    def forward(self, x):\n        y = self.cv1(x)\n        return torch.cat([y, self.cv2(y)], 1)\n\n\nclass GhostBottleneck(nn.Module):\n    # Ghost Bottleneck https://github.com/huawei-noah/ghostnet\n    def __init__(self, c1, c2, k=3, s=1):  # ch_in, ch_out, kernel, stride\n        super().__init__()\n        c_ = c2 // 2\n        self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1),  # pw\n                                  DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(),  # dw\n                                  GhostConv(c_, c2, 1, 1, act=False))  # pw-linear\n        self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),\n                                      Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()\n\n    def forward(self, x):\n        return self.conv(x) + self.shortcut(x)\n\n\nclass Contract(nn.Module):\n    # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)\n    def __init__(self, gain=2):\n        super().__init__()\n        self.gain = gain\n\n    def forward(self, x):\n        b, c, h, w = x.size()  # assert (h / s == 0) and (W / s == 0), 'Indivisible gain'\n        s = self.gain\n        x = x.view(b, c, h // s, s, w // s, s)  # x(1,64,40,2,40,2)\n        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # x(1,2,2,64,40,40)\n        return x.view(b, c * s * s, h // s, w // s)  # x(1,256,40,40)\n\n\nclass Expand(nn.Module):\n    # Expand channels into width-height, i.e. 
x(1,64,80,80) to x(1,16,160,160)\n    def __init__(self, gain=2):\n        super().__init__()\n        self.gain = gain\n\n    def forward(self, x):\n        b, c, h, w = x.size()  # assert C / s ** 2 == 0, 'Indivisible gain'\n        s = self.gain\n        x = x.view(b, s, s, c // s ** 2, h, w)  # x(1,2,2,16,80,80)\n        x = x.permute(0, 3, 4, 1, 5, 2).contiguous()  # x(1,16,80,2,80,2)\n        return x.view(b, c // s ** 2, h * s, w * s)  # x(1,16,160,160)\n\n\nclass Concat(nn.Module):\n    # Concatenate a list of tensors along dimension\n    def __init__(self, dimension=1):\n        super().__init__()\n        self.d = dimension\n\n    def forward(self, x):\n        return torch.cat(x, self.d)\n\n\nclass DetectMultiBackend(nn.Module):\n    # YOLOv5 MultiBackend class for python inference on various backends\n    def __init__(self, weights='yolov5s.pt', device=None, dnn=False):\n        # Usage:\n        #   PyTorch:      weights = *.pt\n        #   TorchScript:            *.torchscript\n        #   CoreML:                 *.mlmodel\n        #   TensorFlow:             *_saved_model\n        #   TensorFlow:             *.pb\n        #   TensorFlow Lite:        *.tflite\n        #   ONNX Runtime:           *.onnx\n        #   OpenCV DNN:             *.onnx with dnn=True\n        #   TensorRT:               *.engine\n        #   RKNN:                   *.rknn\n        from models.experimental import attempt_download, attempt_load  # scoped to avoid circular import\n\n        super().__init__()\n        w = str(weights[0] if isinstance(weights, list) else weights)\n        suffix = Path(w).suffix.lower()\n        suffixes = ['.pt', '.torchscript', '.onnx', '.engine', '.tflite', '.pb', '', '.mlmodel', '.rknn']\n        check_suffix(w, suffixes)  # check weights have acceptable suffix\n        pt, jit, onnx, engine, tflite, pb, saved_model, coreml, rknn_model = (suffix == x for x in suffixes)  # backend booleans\n        stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults\n        attempt_download(w)  # download if not local\n\n        if jit:  # TorchScript\n            LOGGER.info(f'Loading {w} for TorchScript inference...')\n            extra_files = {'config.txt': ''}  # model metadata\n            model = torch.jit.load(w, _extra_files=extra_files)\n            if extra_files['config.txt']:\n                d = json.loads(extra_files['config.txt'])  # extra_files dict\n                stride, names = int(d['stride']), d['names']\n        elif pt:  # PyTorch\n            model = attempt_load(weights, map_location=device)\n            stride = int(model.stride.max())  # model stride\n            names = model.module.names if hasattr(model, 'module') else model.names  # get class names\n            self.model = model  # explicitly assign for to(), cpu(), cuda(), half()\n        elif coreml:  # CoreML\n            LOGGER.info(f'Loading {w} for CoreML inference...')\n            import coremltools as ct\n            model = ct.models.MLModel(w)\n        elif dnn:  # ONNX OpenCV DNN\n            LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')\n            check_requirements(('opencv-python>=4.5.4',))\n            net = cv2.dnn.readNetFromONNX(w)\n        elif onnx:  # ONNX Runtime\n            LOGGER.info(f'Loading {w} for ONNX Runtime inference...')\n            cuda = torch.cuda.is_available()\n            check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))\n            import onnxruntime\n            providers = 
['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']\n            session = onnxruntime.InferenceSession(w, providers=providers)\n        elif engine:  # TensorRT\n            LOGGER.info(f'Loading {w} for TensorRT inference...')\n            import tensorrt as trt  # https://developer.nvidia.com/nvidia-tensorrt-download\n            Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))\n            logger = trt.Logger(trt.Logger.INFO)\n            with open(w, 'rb') as f, trt.Runtime(logger) as runtime:\n                model = runtime.deserialize_cuda_engine(f.read())\n            bindings = OrderedDict()\n            for index in range(model.num_bindings):\n                name = model.get_binding_name(index)\n                dtype = trt.nptype(model.get_binding_dtype(index))\n                shape = tuple(model.get_binding_shape(index))\n                data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(device)\n                bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr()))\n            binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())\n            context = model.create_execution_context()\n            batch_size = bindings['images'].shape[0]\n        elif rknn_model:\n            # TODO if post-process in model, then we can add code here.\n            pass\n        else:  # TensorFlow model (TFLite, pb, saved_model)\n            if pb:  # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt\n                LOGGER.info(f'Loading {w} for TensorFlow *.pb inference...')\n                import tensorflow as tf\n\n                def wrap_frozen_graph(gd, inputs, outputs):\n                    x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=\"\"), [])  # wrapped\n                    return x.prune(tf.nest.map_structure(x.graph.as_graph_element, inputs),\n                                   tf.nest.map_structure(x.graph.as_graph_element, outputs))\n\n                graph_def = tf.Graph().as_graph_def()\n                graph_def.ParseFromString(open(w, 'rb').read())\n                frozen_func = wrap_frozen_graph(gd=graph_def, inputs=\"x:0\", outputs=\"Identity:0\")\n            elif saved_model:\n                LOGGER.info(f'Loading {w} for TensorFlow saved_model inference...')\n                import tensorflow as tf\n                model = tf.keras.models.load_model(w)\n            elif tflite:  # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python\n                if 'edgetpu' in w.lower():\n                    LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')\n                    import tflite_runtime.interpreter as tfli\n                    delegate = {'Linux': 'libedgetpu.so.1',  # install https://coral.ai/software/#edgetpu-runtime\n                                'Darwin': 'libedgetpu.1.dylib',\n                                'Windows': 'edgetpu.dll'}[platform.system()]\n                    interpreter = tfli.Interpreter(model_path=w, experimental_delegates=[tfli.load_delegate(delegate)])\n                else:\n                    LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')\n                    import tensorflow as tf\n                    interpreter = tf.lite.Interpreter(model_path=w)  # load TFLite model\n                interpreter.allocate_tensors()  # allocate\n                input_details = interpreter.get_input_details()  # inputs\n                
output_details = interpreter.get_output_details()  # outputs\n        self.__dict__.update(locals())  # assign all variables to self\n\n    def forward(self, im, augment=False, visualize=False, val=False):\n        # YOLOv5 MultiBackend inference\n        b, ch, h, w = im.shape  # batch, channel, height, width\n        if self.pt:  # PyTorch\n            y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)\n            return y if val else y[0]\n        elif self.coreml:  # CoreML\n            im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)\n            im = Image.fromarray((im[0] * 255).astype('uint8'))\n            # im = im.resize((192, 320), Image.ANTIALIAS)\n            y = self.model.predict({'image': im})  # coordinates are xywh normalized\n            box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]])  # xyxy pixels\n            conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float32)  # np.float is deprecated\n            y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)\n        elif self.onnx:  # ONNX\n            im = im.cpu().numpy()  # torch to numpy\n            if self.dnn:  # ONNX OpenCV DNN\n                self.net.setInput(im)\n                y = self.net.forward()\n            else:  # ONNX Runtime\n                y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]\n        elif self.engine:  # TensorRT\n            assert im.shape == self.bindings['images'].shape, (im.shape, self.bindings['images'].shape)\n            self.binding_addrs['images'] = int(im.data_ptr())\n            self.context.execute_v2(list(self.binding_addrs.values()))\n            y = self.bindings['output'].data\n        elif self.rknn_model:\n            # TODO if post-process in model, then we can add code here.\n            pass\n        else:  # TensorFlow model (TFLite, pb, saved_model)\n            im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)\n            if self.pb:\n                y = self.frozen_func(x=self.tf.constant(im)).numpy()\n            elif self.saved_model:\n                y = self.model(im, training=False).numpy()\n            elif self.tflite:\n                input, output = self.input_details[0], self.output_details[0]\n                int8 = input['dtype'] == np.uint8  # is TFLite quantized uint8 model\n                if int8:\n                    scale, zero_point = input['quantization']\n                    im = (im / scale + zero_point).astype(np.uint8)  # de-scale\n                self.interpreter.set_tensor(input['index'], im)\n                self.interpreter.invoke()\n                y = self.interpreter.get_tensor(output['index'])\n                if int8:\n                    scale, zero_point = output['quantization']\n                    y = (y.astype(np.float32) - zero_point) * scale  # re-scale\n            y[..., 0] *= w  # x\n            y[..., 1] *= h  # y\n            y[..., 2] *= w  # w\n            y[..., 3] *= h  # h\n        y = torch.tensor(y) if isinstance(y, np.ndarray) else y\n        return (y, []) if val else y\n\n    def warmup(self, imgsz=(1, 3, 640, 640), half=False):\n        # Warmup model by running inference once\n        if self.pt or self.engine or self.onnx:  # warmup types\n            if isinstance(self.device, torch.device) and self.device.type != 'cpu':  # only warmup GPU models\n                im = 
torch.zeros(*imgsz).to(self.device).type(torch.half if half else torch.float)  # input image\n                self.forward(im)  # warmup\n\n\nclass AutoShape(nn.Module):\n    # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS\n    conf = 0.25  # NMS confidence threshold\n    iou = 0.45  # NMS IoU threshold\n    agnostic = False  # NMS class-agnostic\n    multi_label = False  # NMS multiple labels per box\n    classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs\n    max_det = 1000  # maximum number of detections per image\n    amp = False  # Automatic Mixed Precision (AMP) inference\n\n    def __init__(self, model):\n        super().__init__()\n        LOGGER.info('Adding AutoShape... ')\n        copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=())  # copy attributes\n        self.dmb = isinstance(model, DetectMultiBackend)  # DetectMultiBackend() instance\n        self.pt = not self.dmb or model.pt  # PyTorch model\n        self.model = model.eval()\n\n    def _apply(self, fn):\n        # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers\n        self = super()._apply(fn)\n        if self.pt:\n            m = self.model.model.model[-1] if self.dmb else self.model.model[-1]  # Detect()\n            m.stride = fn(m.stride)\n            m.grid = list(map(fn, m.grid))\n            if isinstance(m.anchor_grid, list):\n                m.anchor_grid = list(map(fn, m.anchor_grid))\n        return self\n\n    @torch.no_grad()\n    def forward(self, imgs, size=640, augment=False, profile=False):\n        # Inference from various sources. For height=640, width=1280, RGB images example inputs are:\n        #   file:       imgs = 'data/images/zidane.jpg'  # str or PosixPath\n        #   URI:             = 'https://ultralytics.com/images/zidane.jpg'\n        #   OpenCV:          = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)\n        #   PIL:             = Image.open('image.jpg') or ImageGrab.grab()  # HWC x(640,1280,3)\n        #   numpy:           = np.zeros((640,1280,3))  # HWC\n        #   torch:           = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)\n        #   multiple:        = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  
# list of images\n\n        t = [time_sync()]\n        p = next(self.model.parameters()) if self.pt else torch.zeros(1)  # for device and type\n        autocast = self.amp and (p.device.type != 'cpu')  # Automatic Mixed Precision (AMP) inference\n        if isinstance(imgs, torch.Tensor):  # torch\n            with amp.autocast(enabled=autocast):\n                return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference\n\n        # Pre-process\n        n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs])  # number of images, list of images\n        shape0, shape1, files = [], [], []  # image and inference shapes, filenames\n        for i, im in enumerate(imgs):\n            f = f'image{i}'  # filename\n            if isinstance(im, (str, Path)):  # filename or uri\n                im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im\n                im = np.asarray(exif_transpose(im))\n            elif isinstance(im, Image.Image):  # PIL Image\n                im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f\n            files.append(Path(f).with_suffix('.jpg').name)\n            if im.shape[0] < 5:  # image in CHW\n                im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)\n            im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3)  # enforce 3ch input\n            s = im.shape[:2]  # HWC\n            shape0.append(s)  # image shape\n            g = (size / max(s))  # gain\n            shape1.append([y * g for y in s])\n            imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im)  # update\n        shape1 = [make_divisible(x, self.stride) for x in np.stack(shape1, 0).max(0)]  # inference shape\n        x = [letterbox(im, new_shape=shape1 if self.pt else size, auto=False)[0] for im in imgs]  # pad\n        x = np.stack(x, 0) if n > 1 else x[0][None]  # stack\n        x = np.ascontiguousarray(x.transpose((0, 3, 1, 2)))  # BHWC to BCHW\n        x = torch.from_numpy(x).to(p.device).type_as(p) / 255  # uint8 to fp16/32\n        t.append(time_sync())\n\n        with amp.autocast(enabled=autocast):\n            # Inference\n            y = self.model(x, augment, profile)  # forward\n            t.append(time_sync())\n\n            # Post-process\n            y = non_max_suppression(y if self.dmb else y[0], self.conf, iou_thres=self.iou, classes=self.classes,\n                                    agnostic=self.agnostic, multi_label=self.multi_label, max_det=self.max_det)  # NMS\n            for i in range(n):\n                scale_coords(shape1, y[i][:, :4], shape0[i])\n\n            t.append(time_sync())\n            return Detections(imgs, y, files, t, self.names, x.shape)\n\n\nclass Detections:\n    # YOLOv5 detections class for inference results\n    def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, shape=None):\n        super().__init__()\n        d = pred[0].device  # device\n        gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs]  # normalizations\n        self.imgs = imgs  # list of images as numpy arrays\n        self.pred = pred  # list of tensors pred[0] = (xyxy, conf, cls)\n        self.names = names  # class names\n        self.files = files  # image filenames\n        self.times = times  # profiling times\n        self.xyxy = pred  # xyxy pixels\n        self.xywh = [xyxy2xywh(x) for x in pred]  # xywh pixels\n        self.xyxyn = [x / g for x, g in 
zip(self.xyxy, gn)]  # xyxy normalized\n        self.xywhn = [x / g for x, g in zip(self.xywh, gn)]  # xywh normalized\n        self.n = len(self.pred)  # number of images (batch size)\n        self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3))  # timestamps (ms)\n        self.s = shape  # inference BCHW shape\n\n    def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):\n        crops = []\n        for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):\n            s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} '  # string\n            if pred.shape[0]:\n                for c in pred[:, -1].unique():\n                    n = (pred[:, -1] == c).sum()  # detections per class\n                    s += f\"{n} {self.names[int(c)]}{'s' * (n > 1)}, \"  # add to string\n                if show or save or render or crop:\n                    annotator = Annotator(im, example=str(self.names))\n                    for *box, conf, cls in reversed(pred):  # xyxy, confidence, class\n                        label = f'{self.names[int(cls)]} {conf:.2f}'\n                        if crop:\n                            file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None\n                            crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label,\n                                          'im': save_one_box(box, im, file=file, save=save)})\n                        else:  # all others\n                            annotator.box_label(box, label, color=colors(cls))\n                    im = annotator.im\n            else:\n                s += '(no detections)'\n\n            im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im  # from np\n            if pprint:\n                LOGGER.info(s.rstrip(', '))\n            if show:\n                im.show(self.files[i])  # show\n            if save:\n                f = self.files[i]\n                im.save(save_dir / f)  # save\n                if i == self.n - 1:\n                    LOGGER.info(f\"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}\")\n            if render:\n                self.imgs[i] = np.asarray(im)\n        if crop:\n            if save:\n                LOGGER.info(f'Saved results to {save_dir}\\n')\n            return crops\n\n    def print(self):\n        self.display(pprint=True)  # print results\n        LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' %\n                    self.t)\n\n    def show(self):\n        self.display(show=True)  # show results\n\n    def save(self, save_dir='runs/detect/exp'):\n        save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True)  # increment save_dir\n        self.display(save=True, save_dir=save_dir)  # save results\n\n    def crop(self, save=True, save_dir='runs/detect/exp'):\n        save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None\n        return self.display(crop=True, save=save, save_dir=save_dir)  # crop results\n\n    def render(self):\n        self.display(render=True)  # render results\n        return self.imgs\n\n    def pandas(self):\n        # return detections as pandas DataFrames, i.e. 
print(results.pandas().xyxy[0])\n        new = copy(self)  # return copy\n        ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name'  # xyxy columns\n        cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name'  # xywh columns\n        for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):\n            a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)]  # update\n            setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])\n        return new\n\n    def tolist(self):\n        # return a list of Detections objects, i.e. 'for result in results.tolist():'\n        r = range(self.n)  # iterable\n        x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]\n        # for d in x:\n        #    for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:\n        #        setattr(d, k, getattr(d, k)[0])  # pop out of list\n        return x\n\n    def __len__(self):\n        return self.n\n\n\nclass Classify(nn.Module):\n    # Classification head, i.e. x(b,c1,20,20) to x(b,c2)\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__()\n        self.aap = nn.AdaptiveAvgPool2d(1)  # to x(b,c1,1,1)\n        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g)  # to x(b,c2,1,1)\n        self.flat = nn.Flatten()\n\n    def forward(self, x):\n        z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1)  # cat if list\n        return self.flat(self.conv(z))  # flatten to x(b,c2)\n"
  },
  {
    "path": "models/common_rk_plug_in.py",
    "content": "# This file contains modules common to various models\n\nimport torch\nimport torch.nn as nn\nfrom models.common import Conv\n\n\nclass surrogate_focus(nn.Module):\n    # surrogate_focus wh information into c-space\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups\n        super(surrogate_focus, self).__init__()\n        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)\n\n        with torch.no_grad():\n            self.conv1 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))\n            self.conv1.weight[:, :, 0, 0] = 1\n            self.conv1.weight[:, :, 0, 1] = 0\n            self.conv1.weight[:, :, 1, 0] = 0\n            self.conv1.weight[:, :, 1, 1] = 0\n\n            self.conv2 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))\n            self.conv2.weight[:, :, 0, 0] = 0\n            self.conv2.weight[:, :, 0, 1] = 0\n            self.conv2.weight[:, :, 1, 0] = 1\n            self.conv2.weight[:, :, 1, 1] = 0\n\n            self.conv3 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))\n            self.conv3.weight[:, :, 0, 0] = 0\n            self.conv3.weight[:, :, 0, 1] = 1\n            self.conv3.weight[:, :, 1, 0] = 0\n            self.conv3.weight[:, :, 1, 1] = 0\n\n            self.conv4 = nn.Conv2d(3, 3, (2, 2), groups=3, bias=False, stride=(2, 2))\n            self.conv4.weight[:, :, 0, 0] = 0\n            self.conv4.weight[:, :, 0, 1] = 0\n            self.conv4.weight[:, :, 1, 0] = 0\n            self.conv4.weight[:, :, 1, 1] = 1\n\n    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)\n        return self.conv(torch.cat([self.conv1(x), self.conv2(x), self.conv3(x), self.conv4(x)], 1))\n\n\nclass preprocess_conv_layer(nn.Module):\n    \"\"\"docstring for preprocess_conv_layer\"\"\"\n    #   input_module 为输入模型，即为想要导出模型\n    #   mean_value 的值可以是 [m1, m2, m3] 或 常数m\n    #   std_value 的值可以是 [s1, s2, s3] 或 常数s\n    #   BGR2RGB的操作默认为首先执行，既替代的原有操作顺序为\n    #       BGR2RGB -> minus mean -> minus std (与rknn config 设置保持一致) -> nhwc2nchw\n    #\n    #   使用示例-伪代码：\n    #       from add_preprocess_conv_layer import preprocess_conv_layer\n    #       model_A = create_model()\n    #       model_output = preprocess_co_nv_layer(model_A, mean_value, std_value, BGR2RGB)\n    #       onnx_export(model_output)\n    #\n    #   量化时：\n    #       rknn.config的中 channel_mean_value 、reorder_channel 均不赋值。\n    #\n    #   部署代码：\n    #       rknn_input 的属性\n    #           pass_through = 1\n    #\n    #   另外：\n    #       由于加入permute操作，c端输入为opencv mat(hwc格式)即可，无需在外部将hwc改成chw格式。\n    #\n\n    def __init__(self, input_module, mean_value, std_value, BGR2RGB=False):\n        super(preprocess_conv_layer, self).__init__()\n        if isinstance(mean_value, int):\n            mean_value = [mean_value for i in range(3)]\n        if isinstance(std_value, int):\n            std_value = [std_value for i in range(3)]\n\n        assert len(mean_value) <= 3, 'mean_value should be int, or list with 3 element'\n        assert len(std_value) <= 3, 'std_value should be int, or list with 3 element'\n\n        self.input_module = input_module\n\n        with torch.no_grad():\n            self.conv1 = nn.Conv2d(3, 3, (1, 1), groups=1, bias=True, stride=(1, 1))\n\n            if BGR2RGB is False:\n                self.conv1.weight[:, :, :, :] = 0\n                self.conv1.weight[0, 0, :, :] = 1/std_value[0]\n                self.conv1.weight[1, 1, :, :] = 1/std_value[1]\n                
self.conv1.weight[2, 2, :, :] = 1/std_value[2]\n            elif BGR2RGB is True:\n                self.conv1.weight[:, :, :, :] = 0\n                self.conv1.weight[0, 2, :, :] = 1/std_value[0]\n                self.conv1.weight[1, 1, :, :] = 1/std_value[1]\n                self.conv1.weight[2, 0, :, :] = 1/std_value[2]\n\n            self.conv1.bias[0] = -mean_value[0]/std_value[0]\n            self.conv1.bias[1] = -mean_value[1]/std_value[1]\n            self.conv1.bias[2] = -mean_value[2]/std_value[2]\n\n        self.conv1.eval()\n\n    def forward(self, x):\n        x = x.permute(0, 3, 1, 2)  # NHWC -> NCHW, apply for rknn_pass_through\n        x = self.conv1(x)\n        return self.input_module(x)"
  },
  {
    "path": "models/experimental.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nExperimental modules\n\"\"\"\nimport math\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom models.common import Conv\nfrom utils.downloads import attempt_download\n\n\nclass CrossConv(nn.Module):\n    # Cross Convolution Downsample\n    def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):\n        # ch_in, ch_out, kernel, stride, groups, expansion, shortcut\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, (1, k), (1, s))\n        self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)\n        self.add = shortcut and c1 == c2\n\n    def forward(self, x):\n        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass Sum(nn.Module):\n    # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070\n    def __init__(self, n, weight=False):  # n: number of inputs\n        super().__init__()\n        self.weight = weight  # apply weights boolean\n        self.iter = range(n - 1)  # iter object\n        if weight:\n            self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True)  # layer weights\n\n    def forward(self, x):\n        y = x[0]  # no weight\n        if self.weight:\n            w = torch.sigmoid(self.w) * 2\n            for i in self.iter:\n                y = y + x[i + 1] * w[i]\n        else:\n            for i in self.iter:\n                y = y + x[i + 1]\n        return y\n\n\nclass MixConv2d(nn.Module):\n    # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595\n    def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):  # ch_in, ch_out, kernel, stride, ch_strategy\n        super().__init__()\n        n = len(k)  # number of convolutions\n        if equal_ch:  # equal c_ per group\n            i = torch.linspace(0, n - 1E-6, c2).floor()  # c2 indices\n            c_ = [(i == g).sum() for g in range(n)]  # intermediate channels\n        else:  # equal weight.numel() per group\n            b = [c2] + [0] * n\n            a = np.eye(n + 1, n, k=-1)\n            a -= np.roll(a, 1, axis=1)\n            a *= np.array(k) ** 2\n            a[0] = 1\n            c_ = np.linalg.lstsq(a, b, rcond=None)[0].round()  # solve for equal weight indices, ax = b\n\n        self.m = nn.ModuleList(\n            [nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])\n        self.bn = nn.BatchNorm2d(c2)\n        self.act = nn.ReLU()\n\n    def forward(self, x):\n        return self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))\n\n\nclass Ensemble(nn.ModuleList):\n    # Ensemble of models\n    def __init__(self):\n        super().__init__()\n\n    def forward(self, x, augment=False, profile=False, visualize=False):\n        y = []\n        for module in self:\n            y.append(module(x, augment, profile, visualize)[0])\n        # y = torch.stack(y).max(0)[0]  # max ensemble\n        # y = torch.stack(y).mean(0)  # mean ensemble\n        y = torch.cat(y, 1)  # nms ensemble\n        return y, None  # inference, train output\n\n\ndef attempt_load(weights, map_location=None, inplace=True, fuse=True):\n    from models.yolo import Detect, Model\n\n    # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a\n    model = Ensemble()\n    for w in weights if isinstance(weights, list) else [weights]:\n        ckpt = torch.load(attempt_download(w), map_location=map_location)  # load\n        if fuse:\n            
model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model\n        else:\n            model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().eval())  # without layer fuse\n\n    # Compatibility updates\n    for m in model.modules():\n        if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:\n            m.inplace = inplace  # pytorch 1.7.0 compatibility\n            if type(m) is Detect:\n                if not isinstance(m.anchor_grid, list):  # new Detect Layer compatibility\n                    delattr(m, 'anchor_grid')\n                    setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)\n        elif type(m) is Conv:\n            m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility\n\n    if len(model) == 1:\n        return model[-1]  # return model\n    else:\n        print(f'Ensemble created with {weights}\\n')\n        for k in ['names']:\n            setattr(model, k, getattr(model[-1], k))\n        model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride  # max stride\n        return model  # return ensemble\n"
  },
  {
    "path": "models/hub/anchors.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Default anchors for COCO data\n\n\n# P5 -------------------------------------------------------------------------------------------------------------------\n# P5-640:\nanchors_p5_640:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n\n# P6 -------------------------------------------------------------------------------------------------------------------\n# P6-640:  thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11,  21,19,  17,41,  43,32,  39,70,  86,64,  65,131,  134,130,  120,265,  282,180,  247,354,  512,387\nanchors_p6_640:\n  - [9,11,  21,19,  17,41]  # P3/8\n  - [43,32,  39,70,  86,64]  # P4/16\n  - [65,131,  134,130,  120,265]  # P5/32\n  - [282,180,  247,354,  512,387]  # P6/64\n\n# P6-1280:  thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27,  44,40,  38,94,  96,68,  86,152,  180,137,  140,301,  303,264,  238,542,  436,615,  739,380,  925,792\nanchors_p6_1280:\n  - [19,27,  44,40,  38,94]  # P3/8\n  - [96,68,  86,152,  180,137]  # P4/16\n  - [140,301,  303,264,  238,542]  # P5/32\n  - [436,615,  739,380,  925,792]  # P6/64\n\n# P6-1920:  thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41,  67,59,  57,141,  144,103,  129,227,  270,205,  209,452,  455,396,  358,812,  653,922,  1109,570,  1387,1187\nanchors_p6_1920:\n  - [28,41,  67,59,  57,141]  # P3/8\n  - [144,103,  129,227,  270,205]  # P4/16\n  - [209,452,  455,396,  358,812]  # P5/32\n  - [653,922,  1109,570,  1387,1187]  # P6/64\n\n\n# P7 -------------------------------------------------------------------------------------------------------------------\n# P7-640:  thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11,  13,30,  29,20,  30,46,  61,38,  39,92,  78,80,  146,66,  79,163,  149,150,  321,143,  157,303,  257,402,  359,290,  524,372\nanchors_p7_640:\n  - [11,11,  13,30,  29,20]  # P3/8\n  - [30,46,  61,38,  39,92]  # P4/16\n  - [78,80,  146,66,  79,163]  # P5/32\n  - [149,150,  321,143,  157,303]  # P6/64\n  - [257,402,  359,290,  524,372]  # P7/128\n\n# P7-1280:  thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22,  54,36,  32,77,  70,83,  138,71,  75,173,  165,159,  148,334,  375,151,  334,317,  251,626,  499,474,  750,326,  534,814,  1079,818\nanchors_p7_1280:\n  - [19,22,  54,36,  32,77]  # P3/8\n  - [70,83,  138,71,  75,173]  # P4/16\n  - [165,159,  148,334,  375,151]  # P5/32\n  - [334,317,  251,626,  499,474]  # P6/64\n  - [750,326,  534,814,  1079,818]  # P7/128\n\n# P7-1920:  thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34,  81,55,  47,115,  105,124,  207,107,  113,259,  247,238,  222,500,  563,227,  501,476,  376,939,  749,711,  1126,489,  801,1222,  1618,1227\nanchors_p7_1920:\n  - [29,34,  81,55,  47,115]  # P3/8\n  - [105,124,  207,107,  113,259]  # P4/16\n  - [247,238,  222,500,  563,227]  # P5/32\n  - [501,476,  376,939,  749,711]  # P6/64\n  - [1126,489,  801,1222,  1618,1227]  # P7/128\n"
  },
  {
    "path": "models/hub/yolov3-spp.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# darknet53 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [32, 3, 1]],  # 0\n   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2\n   [-1, 1, Bottleneck, [64]],\n   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4\n   [-1, 2, Bottleneck, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 5-P3/8\n   [-1, 8, Bottleneck, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 7-P4/16\n   [-1, 8, Bottleneck, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P5/32\n   [-1, 4, Bottleneck, [1024]],  # 10\n  ]\n\n# YOLOv3-SPP head\nhead:\n  [[-1, 1, Bottleneck, [1024, False]],\n   [-1, 1, SPP, [512, [5, 9, 13]]],\n   [-1, 1, Conv, [1024, 3, 1]],\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, Conv, [1024, 3, 1]],  # 15 (P5/32-large)\n\n   [-2, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P4\n   [-1, 1, Bottleneck, [512, False]],\n   [-1, 1, Bottleneck, [512, False]],\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, Conv, [512, 3, 1]],  # 22 (P4/16-medium)\n\n   [-2, 1, Conv, [128, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P3\n   [-1, 1, Bottleneck, [256, False]],\n   [-1, 2, Bottleneck, [256, False]],  # 27 (P3/8-small)\n\n   [[27, 22, 15], 1, Detect, [nc, anchors]],   # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov3-tiny.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,14, 23,27, 37,58]  # P4/16\n  - [81,82, 135,169, 344,319]  # P5/32\n\n# YOLOv3-tiny backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [16, 3, 1]],  # 0\n   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 1-P1/2\n   [-1, 1, Conv, [32, 3, 1]],\n   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 3-P2/4\n   [-1, 1, Conv, [64, 3, 1]],\n   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 5-P3/8\n   [-1, 1, Conv, [128, 3, 1]],\n   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 7-P4/16\n   [-1, 1, Conv, [256, 3, 1]],\n   [-1, 1, nn.MaxPool2d, [2, 2, 0]],  # 9-P5/32\n   [-1, 1, Conv, [512, 3, 1]],\n   [-1, 1, nn.ZeroPad2d, [[0, 1, 0, 1]]],  # 11\n   [-1, 1, nn.MaxPool2d, [2, 1, 0]],  # 12\n  ]\n\n# YOLOv3-tiny head\nhead:\n  [[-1, 1, Conv, [1024, 3, 1]],\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, Conv, [512, 3, 1]],  # 15 (P5/32-large)\n\n   [-2, 1, Conv, [128, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P4\n   [-1, 1, Conv, [256, 3, 1]],  # 19 (P4/16-medium)\n\n   [[19, 15], 1, Detect, [nc, anchors]],  # Detect(P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov3.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# darknet53 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [32, 3, 1]],  # 0\n   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2\n   [-1, 1, Bottleneck, [64]],\n   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4\n   [-1, 2, Bottleneck, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 5-P3/8\n   [-1, 8, Bottleneck, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 7-P4/16\n   [-1, 8, Bottleneck, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P5/32\n   [-1, 4, Bottleneck, [1024]],  # 10\n  ]\n\n# YOLOv3 head\nhead:\n  [[-1, 1, Bottleneck, [1024, False]],\n   [-1, 1, Conv, [512, [1, 1]]],\n   [-1, 1, Conv, [1024, 3, 1]],\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, Conv, [1024, 3, 1]],  # 15 (P5/32-large)\n\n   [-2, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P4\n   [-1, 1, Bottleneck, [512, False]],\n   [-1, 1, Bottleneck, [512, False]],\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, Conv, [512, 3, 1]],  # 22 (P4/16-medium)\n\n   [-2, 1, Conv, [128, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P3\n   [-1, 1, Bottleneck, [256, False]],\n   [-1, 2, Bottleneck, [256, False]],  # 27 (P3/8-small)\n\n   [[27, 22, 15], 1, Detect, [nc, anchors]],   # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5-bifpn.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 BiFPN head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14, 6], 1, Concat, [1]],  # cat P4 <--- BiFPN change\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5-fpn.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 FPN head\nhead:\n  [[-1, 3, C3, [1024, False]],  # 10 (P5/32-large)\n\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 3, C3, [512, False]],  # 14 (P4/16-medium)\n\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 3, C3, [256, False]],  # 18 (P3/8-small)\n\n   [[18, 14, 10], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5-p2.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors: 3  # auto-anchor evolves 3 anchors per P output layer\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [128, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 2], 1, Concat, [1]],  # cat backbone P2\n   [-1, 1, C3, [128, False]],  # 21 (P2/4-xsmall)\n\n   [-1, 1, Conv, [128, 3, 2]],\n   [[-1, 18], 1, Concat, [1]],  # cat head P3\n   [-1, 3, C3, [256, False]],  # 24 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 27 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 30 (P5/32-large)\n\n   [[21, 24, 27, 30], 1, Detect, [nc, anchors]],  # Detect(P2, P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5-p6.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors: 3  # auto-anchor 3 anchors per P output layer\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 11\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 15\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 19\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 20], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 16], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 12], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 32 (P6/64-xlarge)\n\n   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5-p7.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors: 3  # auto-anchor 3 anchors per P output layer\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, Conv, [1280, 3, 2]],  # 11-P7/128\n   [-1, 3, C3, [1280]],\n   [-1, 1, SPPF, [1280, 5]],  # 13\n  ]\n\n# YOLOv5 head\nhead:\n  [[-1, 1, Conv, [1024, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 10], 1, Concat, [1]],  # cat backbone P6\n   [-1, 3, C3, [1024, False]],  # 17\n\n   [-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 21\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 25\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 29 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 26], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 32 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 22], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 35 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 18], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 38 (P6/64-xlarge)\n\n   [-1, 1, Conv, [1024, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P7\n   [-1, 3, C3, [1280, False]],  # 41 (P7/128-xxlarge)\n\n   [[29, 32, 35, 38, 41], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6, P7)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5-panet.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 PANet head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5l6.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [19,27,  44,40,  38,94]  # P3/8\n  - [96,68,  86,152,  180,137]  # P4/16\n  - [140,301,  303,264,  238,542]  # P5/32\n  - [436,615,  739,380,  925,792]  # P6/64\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 11\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 15\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 19\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 20], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 16], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 12], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 32 (P6/64-xlarge)\n\n   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5m6.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.67  # model depth multiple\nwidth_multiple: 0.75  # layer channel multiple\nanchors:\n  - [19,27,  44,40,  38,94]  # P3/8\n  - [96,68,  86,152,  180,137]  # P4/16\n  - [140,301,  303,264,  238,542]  # P5/32\n  - [436,615,  739,380,  925,792]  # P6/64\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 11\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 15\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 19\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 20], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 16], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 12], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 32 (P6/64-xlarge)\n\n   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5n6.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.25  # layer channel multiple\nanchors:\n  - [19,27,  44,40,  38,94]  # P3/8\n  - [96,68,  86,152,  180,137]  # P4/16\n  - [140,301,  303,264,  238,542]  # P5/32\n  - [436,615,  739,380,  925,792]  # P6/64\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 11\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 15\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 19\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 20], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 16], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 12], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 32 (P6/64-xlarge)\n\n   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5s-ghost.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.50  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, GhostConv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3Ghost, [128]],\n   [-1, 1, GhostConv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3Ghost, [256]],\n   [-1, 1, GhostConv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3Ghost, [512]],\n   [-1, 1, GhostConv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3Ghost, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, GhostConv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3Ghost, [512, False]],  # 13\n\n   [-1, 1, GhostConv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3Ghost, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, GhostConv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3Ghost, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, GhostConv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3Ghost, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5s-transformer.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.50  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3TR, [1024]],  # 9 <--- C3TR() Transformer module\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5s6.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.50  # layer channel multiple\nanchors:\n  - [19,27,  44,40,  38,94]  # P3/8\n  - [96,68,  86,152,  180,137]  # P4/16\n  - [140,301,  303,264,  238,542]  # P5/32\n  - [436,615,  739,380,  925,792]  # P6/64\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 11\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 15\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 19\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 20], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 16], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 12], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 32 (P6/64-xlarge)\n\n   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)\n  ]\n"
  },
  {
    "path": "models/hub/yolov5x6.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.33  # model depth multiple\nwidth_multiple: 1.25  # layer channel multiple\nanchors:\n  - [19,27,  44,40,  38,94]  # P3/8\n  - [96,68,  86,152,  180,137]  # P4/16\n  - [140,301,  303,264,  238,542]  # P5/32\n  - [436,615,  739,380,  925,792]  # P6/64\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [768, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [768]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 9-P6/64\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 11\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [768, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 8], 1, Concat, [1]],  # cat backbone P5\n   [-1, 3, C3, [768, False]],  # 15\n\n   [-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 19\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 23 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 20], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 26 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 16], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [768, False]],  # 29 (P5/32-large)\n\n   [-1, 1, Conv, [768, 3, 2]],\n   [[-1, 12], 1, Concat, [1]],  # cat head P6\n   [-1, 3, C3, [1024, False]],  # 32 (P6/64-xlarge)\n\n   [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)\n  ]\n"
  },
  {
    "path": "models/tf.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nTensorFlow, Keras and TFLite versions of YOLOv5\nAuthored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127\n\nUsage:\n    $ python models/tf.py --weights yolov5s.pt\n\nExport:\n    $ python path/to/export.py --weights yolov5s.pt --include saved_model pb tflite tfjs\n\"\"\"\n\nimport argparse\nimport sys\nfrom copy import deepcopy\nfrom pathlib import Path\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[1]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\n# ROOT = ROOT.relative_to(Path.cwd())  # relative\n\nimport numpy as np\nimport tensorflow as tf\nimport torch\nimport torch.nn as nn\nfrom tensorflow import keras\n\nfrom models.common import C3, SPP, SPPF, Bottleneck, BottleneckCSP, Concat, Conv, DWConv, Focus, autopad\nfrom models.experimental import CrossConv, MixConv2d, attempt_load\nfrom models.yolo import Detect\nfrom utils.activations import SiLU\nfrom utils.general import LOGGER, make_divisible, print_args\n\n\nclass TFBN(keras.layers.Layer):\n    # TensorFlow BatchNormalization wrapper\n    def __init__(self, w=None):\n        super().__init__()\n        self.bn = keras.layers.BatchNormalization(\n            beta_initializer=keras.initializers.Constant(w.bias.numpy()),\n            gamma_initializer=keras.initializers.Constant(w.weight.numpy()),\n            moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()),\n            moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()),\n            epsilon=w.eps)\n\n    def call(self, inputs):\n        return self.bn(inputs)\n\n\nclass TFPad(keras.layers.Layer):\n    def __init__(self, pad):\n        super().__init__()\n        self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])\n\n    def call(self, inputs):\n        return tf.pad(inputs, self.pad, mode='constant', constant_values=0)\n\n\nclass TFConv(keras.layers.Layer):\n    # Standard convolution\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):\n        # ch_in, ch_out, weights, kernel, stride, padding, groups\n        super().__init__()\n        assert g == 1, \"TF v2.2 Conv2D does not support 'groups' argument\"\n        assert isinstance(k, int), \"Convolution with multiple kernels are not allowed.\"\n        # TensorFlow convolution padding is inconsistent with PyTorch (e.g. 
k=3 s=2 'SAME' padding)\n        # see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch\n\n        conv = keras.layers.Conv2D(\n            c2, k, s, 'SAME' if s == 1 else 'VALID', use_bias=False if hasattr(w, 'bn') else True,\n            kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),\n            bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))\n        self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])\n        self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity\n\n        # YOLOv5 activations\n        if isinstance(w.act, nn.LeakyReLU):\n            self.act = (lambda x: keras.activations.relu(x, alpha=0.1)) if act else tf.identity\n        elif isinstance(w.act, nn.Hardswish):\n            self.act = (lambda x: x * tf.nn.relu6(x + 3) * 0.166666667) if act else tf.identity\n        elif isinstance(w.act, (nn.SiLU, SiLU)):\n            self.act = (lambda x: keras.activations.swish(x)) if act else tf.identity\n        else:\n            raise Exception(f'no matching TensorFlow activation found for {w.act}')\n\n    def call(self, inputs):\n        return self.act(self.bn(self.conv(inputs)))\n\n\nclass TFFocus(keras.layers.Layer):\n    # Focus wh information into c-space\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):\n        # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__()\n        self.conv = TFConv(c1 * 4, c2, k, s, p, g, act, w.conv)\n\n    def call(self, inputs):  # x(b,w,h,c) -> y(b,w/2,h/2,4c)\n        # inputs = inputs / 255  # normalize 0-255 to 0-1\n        return self.conv(tf.concat([inputs[:, ::2, ::2, :],\n                                    inputs[:, 1::2, ::2, :],\n                                    inputs[:, ::2, 1::2, :],\n                                    inputs[:, 1::2, 1::2, :]], 3))\n\n\nclass TFBottleneck(keras.layers.Layer):\n    # Standard bottleneck\n    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None):  # ch_in, ch_out, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)\n        self.cv2 = TFConv(c_, c2, 3, 1, g=g, w=w.cv2)\n        self.add = shortcut and c1 == c2\n\n    def call(self, inputs):\n        return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))\n\n\nclass TFConv2d(keras.layers.Layer):\n    # Substitution for PyTorch nn.Conv2D\n    def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):\n        super().__init__()\n        assert g == 1, \"TF v2.2 Conv2D does not support 'groups' argument\"\n        self.conv = keras.layers.Conv2D(\n            c2, k, s, 'VALID', use_bias=bias,\n            kernel_initializer=keras.initializers.Constant(w.weight.permute(2, 3, 1, 0).numpy()),\n            bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None, )\n\n    def call(self, inputs):\n        return self.conv(inputs)\n\n\nclass TFBottleneckCSP(keras.layers.Layer):\n    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):\n        # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)\n        self.cv2 = TFConv2d(c1, c_, 1, 1, bias=False, 
w=w.cv2)\n        self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3)\n        self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4)\n        self.bn = TFBN(w.bn)\n        self.act = lambda x: keras.activations.relu(x, alpha=0.1)\n        self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])\n\n    def call(self, inputs):\n        y1 = self.cv3(self.m(self.cv1(inputs)))\n        y2 = self.cv2(inputs)\n        return self.cv4(self.act(self.bn(tf.concat((y1, y2), axis=3))))\n\n\nclass TFC3(keras.layers.Layer):\n    # CSP Bottleneck with 3 convolutions\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):\n        # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)\n        self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)\n        self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)\n        self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])\n\n    def call(self, inputs):\n        return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))\n\n\nclass TFSPP(keras.layers.Layer):\n    # Spatial pyramid pooling layer used in YOLOv3-SPP\n    def __init__(self, c1, c2, k=(5, 9, 13), w=None):\n        super().__init__()\n        c_ = c1 // 2  # hidden channels\n        self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)\n        self.cv2 = TFConv(c_ * (len(k) + 1), c2, 1, 1, w=w.cv2)\n        self.m = [keras.layers.MaxPool2D(pool_size=x, strides=1, padding='SAME') for x in k]\n\n    def call(self, inputs):\n        x = self.cv1(inputs)\n        return self.cv2(tf.concat([x] + [m(x) for m in self.m], 3))\n\n\nclass TFSPPF(keras.layers.Layer):\n    # Spatial pyramid pooling-Fast layer\n    def __init__(self, c1, c2, k=5, w=None):\n        super().__init__()\n        c_ = c1 // 2  # hidden channels\n        self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)\n        self.cv2 = TFConv(c_ * 4, c2, 1, 1, w=w.cv2)\n        self.m = keras.layers.MaxPool2D(pool_size=k, strides=1, padding='SAME')\n\n    def call(self, inputs):\n        x = self.cv1(inputs)\n        y1 = self.m(x)\n        y2 = self.m(y1)\n        return self.cv2(tf.concat([x, y1, y2, self.m(y2)], 3))\n\n\nclass TFDetect(keras.layers.Layer):\n    def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None):  # detection layer\n        super().__init__()\n        self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32)\n        self.nc = nc  # number of classes\n        self.no = nc + 5  # number of outputs per anchor\n        self.nl = len(anchors)  # number of detection layers\n        self.na = len(anchors[0]) // 2  # number of anchors\n        self.grid = [tf.zeros(1)] * self.nl  # init grid\n        self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32)\n        self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]),\n                                      [self.nl, 1, -1, 1, 2])\n        self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)]\n        self.training = False  # set to False after building model\n        self.imgsz = imgsz\n        for i in range(self.nl):\n            ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]\n            self.grid[i] = self._make_grid(nx, ny)\n\n    def call(self, inputs):\n        z = []  # inference output\n        x = []\n        for i in range(self.nl):\n            
x.append(self.m[i](inputs[i]))\n            # x(bs,20,20,255) to x(bs,3,20,20,85)\n            ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]\n            x[i] = tf.transpose(tf.reshape(x[i], [-1, ny * nx, self.na, self.no]), [0, 2, 1, 3])\n\n            if not self.training:  # inference\n                y = tf.sigmoid(x[i])\n                xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy\n                wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]\n                # Normalize xywh to 0-1 to reduce calibration error\n                xy /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)\n                wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)\n                y = tf.concat([xy, wh, y[..., 4:]], -1)\n                z.append(tf.reshape(y, [-1, self.na * ny * nx, self.no]))\n\n        return x if self.training else (tf.concat(z, 1), x)\n\n    @staticmethod\n    def _make_grid(nx=20, ny=20):\n        # yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])\n        # return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()\n        xv, yv = tf.meshgrid(tf.range(nx), tf.range(ny))\n        return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32)\n\n\nclass TFUpsample(keras.layers.Layer):\n    def __init__(self, size, scale_factor, mode, w=None):  # warning: all arguments needed including 'w'\n        super().__init__()\n        assert scale_factor == 2, \"scale_factor must be 2\"\n        self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * 2, x.shape[2] * 2), method=mode)\n        # self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode)\n        # with default arguments: align_corners=False, half_pixel_centers=False\n        # self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x,\n        #                                                            size=(x.shape[1] * 2, x.shape[2] * 2))\n\n    def call(self, inputs):\n        return self.upsample(inputs)\n\n\nclass TFConcat(keras.layers.Layer):\n    def __init__(self, dimension=1, w=None):\n        super().__init__()\n        assert dimension == 1, \"convert only NCHW to NHWC concat\"\n        self.d = 3\n\n    def call(self, inputs):\n        return tf.concat(inputs, self.d)\n\n\ndef parse_model(d, ch, model, imgsz):  # model_dict, input_channels(3)\n    LOGGER.info(f\"\\n{'':>3}{'from':>18}{'n':>3}{'params':>10}  {'module':<40}{'arguments':<30}\")\n    anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']\n    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors\n    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)\n\n    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out\n    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args\n        m_str = m\n        m = eval(m) if isinstance(m, str) else m  # eval strings\n        for j, a in enumerate(args):\n            try:\n                args[j] = eval(a) if isinstance(a, str) else a  # eval strings\n            except NameError:\n                pass\n\n        n = max(round(n * gd), 1) if n > 1 else n  # depth gain\n        if m in [nn.Conv2d, Conv, Bottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3]:\n            c1, c2 = ch[f], args[0]\n            c2 = make_divisible(c2 * gw, 8) if c2 != no else c2\n\n            args = [c1, c2, 
*args[1:]]\n            if m in [BottleneckCSP, C3]:\n                args.insert(2, n)\n                n = 1\n        elif m is nn.BatchNorm2d:\n            args = [ch[f]]\n        elif m is Concat:\n            c2 = sum(ch[-1 if x == -1 else x + 1] for x in f)\n        elif m is Detect:\n            args.append([ch[x + 1] for x in f])\n            if isinstance(args[1], int):  # number of anchors\n                args[1] = [list(range(args[1] * 2))] * len(f)\n            args.append(imgsz)\n        else:\n            c2 = ch[f]\n\n        tf_m = eval('TF' + m_str.replace('nn.', ''))\n        m_ = keras.Sequential([tf_m(*args, w=model.model[i][j]) for j in range(n)]) if n > 1 \\\n            else tf_m(*args, w=model.model[i])  # module\n\n        torch_m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module\n        t = str(m)[8:-2].replace('__main__.', '')  # module type\n        np = sum(x.numel() for x in torch_m_.parameters())  # number params\n        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params\n        LOGGER.info(f'{i:>3}{str(f):>18}{str(n):>3}{np:>10}  {t:<40}{str(args):<30}')  # print\n        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist\n        layers.append(m_)\n        ch.append(c2)\n    return keras.Sequential(layers), sorted(save)\n\n\nclass TFModel:\n    def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)):  # model, channels, classes\n        super().__init__()\n        if isinstance(cfg, dict):\n            self.yaml = cfg  # model dict\n        else:  # is *.yaml\n            import yaml  # for torch hub\n            self.yaml_file = Path(cfg).name\n            with open(cfg) as f:\n                self.yaml = yaml.load(f, Loader=yaml.FullLoader)  # model dict\n\n        # Define model\n        if nc and nc != self.yaml['nc']:\n            LOGGER.info(f\"Overriding {cfg} nc={self.yaml['nc']} with nc={nc}\")\n            self.yaml['nc'] = nc  # override yaml value\n        self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz)\n\n    def predict(self, inputs, tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,\n                conf_thres=0.25):\n        y = []  # outputs\n        x = inputs\n        for i, m in enumerate(self.model.layers):\n            if m.f != -1:  # if not from previous layer\n                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers\n\n            x = m(x)  # run\n            y.append(x if m.i in self.savelist else None)  # save output\n\n        # Add TensorFlow NMS\n        if tf_nms:\n            boxes = self._xywh2xyxy(x[0][..., :4])\n            probs = x[0][:, :, 4:5]\n            classes = x[0][:, :, 5:]\n            scores = probs * classes\n            if agnostic_nms:\n                nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres)\n                return nms, x[1]\n            else:\n                boxes = tf.expand_dims(boxes, 2)\n                nms = tf.image.combined_non_max_suppression(\n                    boxes, scores, topk_per_class, topk_all, iou_thres, conf_thres, clip_boxes=False)\n                return nms, x[1]\n\n        return x[0]  # output only first tensor [1,6300,85] = [xywh, conf, class0, class1, ...]\n        # x = x[0][0]  # [x(1,6300,85), ...] 
to x(6300,85)\n        # xywh = x[..., :4]  # x(6300,4) boxes\n        # conf = x[..., 4:5]  # x(6300,1) confidences\n        # cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1))  # x(6300,1)  classes\n        # return tf.concat([conf, cls, xywh], 1)\n\n    @staticmethod\n    def _xywh2xyxy(xywh):\n        # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right\n        x, y, w, h = tf.split(xywh, num_or_size_splits=4, axis=-1)\n        return tf.concat([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=-1)\n\n\nclass AgnosticNMS(keras.layers.Layer):\n    # TF Agnostic NMS\n    def call(self, input, topk_all, iou_thres, conf_thres):\n        # wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450\n        return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres), input,\n                         fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32),\n                         name='agnostic_nms')\n\n    @staticmethod\n    def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25):  # agnostic NMS\n        boxes, classes, scores = x\n        class_inds = tf.cast(tf.argmax(classes, axis=-1), tf.float32)\n        scores_inp = tf.reduce_max(scores, -1)\n        selected_inds = tf.image.non_max_suppression(\n            boxes, scores_inp, max_output_size=topk_all, iou_threshold=iou_thres, score_threshold=conf_thres)\n        selected_boxes = tf.gather(boxes, selected_inds)\n        padded_boxes = tf.pad(selected_boxes,\n                              paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]],\n                              mode=\"CONSTANT\", constant_values=0.0)\n        selected_scores = tf.gather(scores_inp, selected_inds)\n        padded_scores = tf.pad(selected_scores,\n                               paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],\n                               mode=\"CONSTANT\", constant_values=-1.0)\n        selected_classes = tf.gather(class_inds, selected_inds)\n        padded_classes = tf.pad(selected_classes,\n                                paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],\n                                mode=\"CONSTANT\", constant_values=-1.0)\n        valid_detections = tf.shape(selected_inds)[0]\n        return padded_boxes, padded_scores, padded_classes, valid_detections\n\n\ndef representative_dataset_gen(dataset, ncalib=100):\n    # Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays\n    for n, (path, img, im0s, vid_cap, string) in enumerate(dataset):\n        input = np.transpose(img, [1, 2, 0])\n        input = np.expand_dims(input, axis=0).astype(np.float32)\n        input /= 255\n        yield [input]\n        if n >= ncalib:\n            break\n\n\ndef run(weights=ROOT / 'yolov5s.pt',  # weights path\n        imgsz=(640, 640),  # inference size h,w\n        batch_size=1,  # batch size\n        dynamic=False,  # dynamic batch size\n        ):\n    # PyTorch model\n    im = torch.zeros((batch_size, 3, *imgsz))  # BCHW image\n    model = attempt_load(weights, map_location=torch.device('cpu'), inplace=True, fuse=False)\n    y = model(im)  # inference\n    model.info()\n\n    # TensorFlow model\n    im = tf.zeros((batch_size, *imgsz, 3))  # BHWC image\n    tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)\n    y = tf_model.predict(im)  # inference\n\n    # Keras model\n    im = 
keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)\n    keras_model = keras.Model(inputs=im, outputs=tf_model.predict(im))\n    keras_model.summary()\n\n    LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\\nUse export.py for TF model export.')\n\n\ndef parse_opt():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path')\n    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')\n    parser.add_argument('--batch-size', type=int, default=1, help='batch size')\n    parser.add_argument('--dynamic', action='store_true', help='dynamic batch size')\n    opt = parser.parse_args()\n    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand\n    print_args(FILE.stem, opt)\n    return opt\n\n\ndef main(opt):\n    run(**vars(opt))\n\n\nif __name__ == \"__main__\":\n    opt = parse_opt()\n    main(opt)\n"
  },
  {
    "path": "models/yolo.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nYOLO-specific modules\n\nUsage:\n    $ python path/to/models/yolo.py --cfg yolov5s.yaml\n\"\"\"\nimport os\nimport argparse\nimport sys\nfrom copy import deepcopy\nfrom pathlib import Path\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[1]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\n# ROOT = ROOT.relative_to(Path.cwd())  # relative\n\nfrom models.common import *\nfrom models.experimental import *\nfrom utils.autoanchor import check_anchor_order\nfrom utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args\nfrom utils.plots import feature_visualization\nfrom utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync\n\ntry:\n    import thop  # for FLOPs computation\nexcept ImportError:\n    thop = None\n\n\nclass Detect(nn.Module):\n    stride = None  # strides computed during build\n    onnx_dynamic = False  # ONNX export parameter\n\n    def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer\n        super().__init__()\n        self.nc = nc  # number of classes\n        self.no = nc + 5  # number of outputs per anchor\n        self.nl = len(anchors)  # number of detection layers\n        self.na = len(anchors[0]) // 2  # number of anchors\n        self.grid = [torch.zeros(1)] * self.nl  # init grid\n        self.anchor_grid = [torch.zeros(1)] * self.nl  # init anchor grid\n        self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)\n        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv\n        self.inplace = inplace  # use in-place ops (e.g. 
slice assignment)\n\n    def forward(self, x):\n        z = []  # inference output\n        for i in range(self.nl):\n            x[i] = self.m[i](x[i])  # conv\n\n            if os.getenv('RKNN_model_hack', '0') != '0':\n                continue\n\n            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)\n            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()\n\n            if not self.training:  # inference\n                if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:\n                    self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)\n\n                y = x[i].sigmoid()\n                if self.inplace:\n                    y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy\n                    y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh\n                else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953\n                    xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy\n                    wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh\n                    y = torch.cat((xy, wh, y[..., 4:]), -1)\n                z.append(y.view(bs, -1, self.no))\n\n        if os.getenv('RKNN_model_hack', '0') != '0':\n                return x\n\n        return x if self.training else (torch.cat(z, 1), x)\n\n    def _make_grid(self, nx=20, ny=20, i=0):\n        d = self.anchors[i].device\n        if check_version(torch.__version__, '1.10.0'):  # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility\n            yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)], indexing='ij')\n        else:\n            yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)])\n        grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float()\n        anchor_grid = (self.anchors[i].clone() * self.stride[i]) \\\n            .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float()\n        return grid, anchor_grid\n\n\nclass Model(nn.Module):\n    def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None):  # model, input channels, number of classes\n        super().__init__()\n        if isinstance(cfg, dict):\n            self.yaml = cfg  # model dict\n        else:  # is *.yaml\n            import yaml  # for torch hub\n            self.yaml_file = Path(cfg).name\n            with open(cfg, encoding='ascii', errors='ignore') as f:\n                self.yaml = yaml.safe_load(f)  # model dict\n\n        # Define model\n        ch = self.yaml['ch'] = self.yaml.get('ch', ch)  # input channels\n        if nc and nc != self.yaml['nc']:\n            LOGGER.info(f\"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}\")\n            self.yaml['nc'] = nc  # override yaml value\n        if anchors:\n            LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')\n            self.yaml['anchors'] = round(anchors)  # override yaml value\n        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist\n        self.names = [str(i) for i in range(self.yaml['nc'])]  # default names\n        self.inplace = self.yaml.get('inplace', True)\n\n        # Build strides, anchors\n        m = self.model[-1]  # Detect()\n        if isinstance(m, Detect):\n            s = 256  # 2x min stride\n            m.inplace = self.inplace\n            m.stride = torch.tensor([s / x.shape[-2] for x in 
self.forward(torch.zeros(1, ch, s, s))])  # forward\n            m.anchors /= m.stride.view(-1, 1, 1)\n            check_anchor_order(m)\n            self.stride = m.stride\n            self._initialize_biases()  # only run once\n\n        # Init weights, biases\n        initialize_weights(self)\n        self.info()\n        LOGGER.info('')\n\n    def forward(self, x, augment=False, profile=False, visualize=False):\n        if augment:\n            return self._forward_augment(x)  # augmented inference, None\n        return self._forward_once(x, profile, visualize)  # single-scale inference, train\n\n    def _forward_augment(self, x):\n        img_size = x.shape[-2:]  # height, width\n        s = [1, 0.83, 0.67]  # scales\n        f = [None, 3, None]  # flips (2-ud, 3-lr)\n        y = []  # outputs\n        for si, fi in zip(s, f):\n            xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))\n            yi = self._forward_once(xi)[0]  # forward\n            # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1])  # save\n            yi = self._descale_pred(yi, fi, si, img_size)\n            y.append(yi)\n        y = self._clip_augmented(y)  # clip augmented tails\n        return torch.cat(y, 1), None  # augmented inference, train\n\n    def _forward_once(self, x, profile=False, visualize=False):\n        y, dt = [], []  # outputs\n        for m in self.model:\n            if m.f != -1:  # if not from previous layer\n                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers\n            if profile:\n                self._profile_one_layer(m, x, dt)\n            x = m(x)  # run\n            y.append(x if m.i in self.save else None)  # save output\n            if visualize:\n                feature_visualization(x, m.type, m.i, save_dir=visualize)\n        return x\n\n    def _descale_pred(self, p, flips, scale, img_size):\n        # de-scale predictions following augmented inference (inverse operation)\n        if self.inplace:\n            p[..., :4] /= scale  # de-scale\n            if flips == 2:\n                p[..., 1] = img_size[0] - p[..., 1]  # de-flip ud\n            elif flips == 3:\n                p[..., 0] = img_size[1] - p[..., 0]  # de-flip lr\n        else:\n            x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale  # de-scale\n            if flips == 2:\n                y = img_size[0] - y  # de-flip ud\n            elif flips == 3:\n                x = img_size[1] - x  # de-flip lr\n            p = torch.cat((x, y, wh, p[..., 4:]), -1)\n        return p\n\n    def _clip_augmented(self, y):\n        # Clip YOLOv5 augmented inference tails\n        nl = self.model[-1].nl  # number of detection layers (P3-P5)\n        g = sum(4 ** x for x in range(nl))  # grid points\n        e = 1  # exclude layer count\n        i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e))  # indices\n        y[0] = y[0][:, :-i]  # large\n        i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e))  # indices\n        y[-1] = y[-1][:, i:]  # small\n        return y\n\n    def _profile_one_layer(self, m, x, dt):\n        c = isinstance(m, Detect)  # is final layer, copy input as inplace fix\n        o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0  # FLOPs\n        t = time_sync()\n        for _ in range(10):\n            m(x.copy() if c else x)\n        dt.append((time_sync() - t) * 100)\n    
    if m == self.model[0]:\n            LOGGER.info(f\"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s}  {'module'}\")\n        LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f}  {m.type}')\n        if c:\n            LOGGER.info(f\"{sum(dt):10.2f} {'-':>10s} {'-':>10s}  Total\")\n\n    def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency\n        # https://arxiv.org/abs/1708.02002 section 3.3\n        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.\n        m = self.model[-1]  # Detect() module\n        for mi, s in zip(m.m, m.stride):  # from\n            b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)\n            b.data[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)\n            b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # cls\n            mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)\n\n    def _print_biases(self):\n        m = self.model[-1]  # Detect() module\n        for mi in m.m:  # from\n            b = mi.bias.detach().view(m.na, -1).T  # conv.bias(255) to (3,85)\n            LOGGER.info(\n                ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))\n\n    # def _print_weights(self):\n    #     for m in self.model.modules():\n    #         if type(m) is Bottleneck:\n    #             LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2))  # shortcut weights\n\n    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers\n        LOGGER.info('Fusing layers... ')\n        for m in self.model.modules():\n            if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):\n                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv\n                delattr(m, 'bn')  # remove batchnorm\n                m.forward = m.forward_fuse  # update forward\n        self.info()\n        return self\n\n    def info(self, verbose=False, img_size=640):  # print model information\n        model_info(self, verbose, img_size)\n\n    def _apply(self, fn):\n        # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers\n        self = super()._apply(fn)\n        m = self.model[-1]  # Detect()\n        if isinstance(m, Detect):\n            m.stride = fn(m.stride)\n            m.grid = list(map(fn, m.grid))\n            if isinstance(m.anchor_grid, list):\n                m.anchor_grid = list(map(fn, m.anchor_grid))\n        return self\n\n\ndef parse_model(d, ch):  # model_dict, input_channels(3)\n    LOGGER.info(f\"\\n{'':>3}{'from':>18}{'n':>3}{'params':>10}  {'module':<40}{'arguments':<30}\")\n    anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']\n    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors\n    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)\n\n    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out\n    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args\n        m = eval(m) if isinstance(m, str) else m  # eval strings\n        for j, a in enumerate(args):\n            try:\n                args[j] = eval(a) if isinstance(a, str) else a  # eval strings\n            except NameError:\n                pass\n\n        n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain\n        if m in [Conv, GhostConv, Bottleneck, 
GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,\n                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]:\n            c1, c2 = ch[f], args[0]\n            if c2 != no:  # if not output\n                c2 = make_divisible(c2 * gw, 8)\n\n            args = [c1, c2, *args[1:]]\n            if m in [BottleneckCSP, C3, C3TR, C3Ghost]:\n                args.insert(2, n)  # number of repeats\n                n = 1\n        elif m is nn.BatchNorm2d:\n            args = [ch[f]]\n        elif m is Concat:\n            c2 = sum(ch[x] for x in f)\n        elif m is Detect:\n            args.append([ch[x] for x in f])\n            if isinstance(args[1], int):  # number of anchors\n                args[1] = [list(range(args[1] * 2))] * len(f)\n        elif m is Contract:\n            c2 = ch[f] * args[0] ** 2\n        elif m is Expand:\n            c2 = ch[f] // args[0] ** 2\n        else:\n            c2 = ch[f]\n\n        m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module\n        t = str(m)[8:-2].replace('__main__.', '')  # module type\n        np = sum(x.numel() for x in m_.parameters())  # number params\n        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params\n        LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f}  {t:<40}{str(args):<30}')  # print\n        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist\n        layers.append(m_)\n        if i == 0:\n            ch = []\n        ch.append(c2)\n    return nn.Sequential(*layers), sorted(save)\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')\n    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')\n    parser.add_argument('--profile', action='store_true', help='profile model speed')\n    parser.add_argument('--test', action='store_true', help='test all yolo*.yaml')\n    opt = parser.parse_args()\n    opt.cfg = check_yaml(opt.cfg)  # check YAML\n    print_args(FILE.stem, opt)\n    device = select_device(opt.device)\n\n    # Create model\n    model = Model(opt.cfg).to(device)\n    model.train()\n\n    # Profile\n    if opt.profile:\n        img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)\n        y = model(img, profile=True)\n\n    # Test all models\n    if opt.test:\n        for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'):\n            try:\n                _ = Model(cfg)\n            except Exception as e:\n                print(f'Error in {cfg}: {e}')\n\n    # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898)\n    # from torch.utils.tensorboard import SummaryWriter\n    # tb_writer = SummaryWriter('.')\n    # LOGGER.info(\"Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/\")\n    # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), [])  # add model graph\n"
  },
  {
    "path": "models/yolov5l.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/yolov5m.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.67  # model depth multiple\nwidth_multiple: 0.75  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/yolov5n.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.25  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/yolov5s.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.50  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "models/yolov5x.yaml",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80  # number of classes\ndepth_multiple: 1.33  # model depth multiple\nwidth_multiple: 1.25  # layer channel multiple\nanchors:\n  - [10,13, 16,30, 33,23]  # P3/8\n  - [30,61, 62,45, 59,119]  # P4/16\n  - [116,90, 156,198, 373,326]  # P5/32\n\n# YOLOv5 v6.0 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2\n   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8\n   [-1, 6, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32\n   [-1, 3, C3, [1024]],\n   [-1, 1, SPPF, [1024, 5]],  # 9\n  ]\n\n# YOLOv5 v6.0 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 6], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 13\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 14], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 10], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)\n\n   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "requirements.txt",
    "content": "# pip install -r requirements.txt\n\n# Base ----------------------------------------\nmatplotlib>=3.2.2\nnumpy>=1.18.5\nopencv-python>=4.1.2\nPillow>=7.1.2\nPyYAML>=5.3.1\nrequests>=2.23.0\nscipy>=1.4.1\ntorch>=1.7.0\ntorchvision>=0.8.1\ntqdm>=4.41.0\n\n# Logging -------------------------------------\ntensorboard>=2.4.1\n# wandb\n\n# Plotting ------------------------------------\npandas>=1.1.4\nseaborn>=0.11.0\n\n# Export --------------------------------------\n# coremltools>=4.1  # CoreML export\n# onnx>=1.9.0  # ONNX export\n# onnx-simplifier>=0.3.6  # ONNX simplifier\n# scikit-learn==0.19.2  # CoreML quantization\n# tensorflow>=2.4.1  # TFLite export\n# tensorflowjs>=3.9.0  # TF.js export\n\n# Extras --------------------------------------\n# albumentations>=1.0.3\n# Cython  # for pycocotools https://github.com/cocodataset/cocoapi/issues/172\n# pycocotools>=2.0  # COCO mAP\n# roboflow\nthop  # FLOPs computation\n"
  },
  {
    "path": "setup.cfg",
    "content": "# Project-wide configuration file, can be used for package metadata and other toll configurations\n# Example usage: global configuration for PEP8 (via flake8) setting or default pytest arguments\n\n[metadata]\nlicense_file = LICENSE\ndescription-file = README.md\n\n\n[tool:pytest]\nnorecursedirs =\n    .git\n    dist\n    build\naddopts =\n    --doctest-modules\n    --durations=25\n    --color=yes\n\n\n[flake8]\nmax-line-length = 120\nexclude = .tox,*.egg,build,temp\nselect = E,W,F\ndoctests = True\nverbose = 2\n# https://pep8.readthedocs.io/en/latest/intro.html#error-codes\nformat = pylint\n# see: https://www.flake8rules.com/\nignore =\n    E731  # Do not assign a lambda expression, use a def\n    F405\n    E402\n    F841\n    E741\n    F821\n    E722\n    F401\n    W504\n    E127\n    W504\n    E231\n    E501\n    F403\n    E302\n    F541\n\n\n[isort]\n# https://pycqa.github.io/isort/docs/configuration/options.html\nline_length = 120\nmulti_line_output = 0\n"
  },
  {
    "path": "train.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nTrain a YOLOv5 model on a custom dataset\n\nUsage:\n    $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640\n\"\"\"\nimport argparse\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom copy import deepcopy\nfrom datetime import datetime\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\nimport yaml\nfrom torch.cuda import amp\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nfrom torch.optim import SGD, Adam, lr_scheduler\nfrom tqdm import tqdm\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[0]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\nROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative\n\nimport val  # for end-of-epoch mAP\nfrom models.experimental import attempt_load\nfrom models.yolo import Model\nfrom utils.autoanchor import check_anchors\nfrom utils.autobatch import check_train_batch_size\nfrom utils.callbacks import Callbacks\nfrom utils.datasets import create_dataloader\nfrom utils.downloads import attempt_download\nfrom utils.general import (LOGGER, check_dataset, check_file, check_git_status, check_img_size, check_requirements,\n                           check_suffix, check_yaml, colorstr, get_latest_run, increment_path, init_seeds,\n                           intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods, one_cycle,\n                           print_args, print_mutation, strip_optimizer)\nfrom utils.loggers import Loggers\nfrom utils.loggers.wandb.wandb_utils import check_wandb_resume\nfrom utils.loss import ComputeLoss\nfrom utils.metrics import fitness\nfrom utils.plots import plot_evolve, plot_labels\nfrom utils.torch_utils import EarlyStopping, ModelEMA, de_parallel, select_device, torch_distributed_zero_first\n\nLOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))  # https://pytorch.org/docs/stable/elastic/run.html\nRANK = int(os.getenv('RANK', -1))\nWORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))\n\n\ndef train(hyp,  # path/to/hyp.yaml or hyp dictionary\n          opt,\n          device,\n          callbacks\n          ):\n    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, = \\\n        Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \\\n        opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze\n\n    # Directories\n    w = save_dir / 'weights'  # weights dir\n    (w.parent if evolve else w).mkdir(parents=True, exist_ok=True)  # make dir\n    last, best = w / 'last.pt', w / 'best.pt'\n\n    # Hyperparameters\n    if isinstance(hyp, str):\n        with open(hyp, errors='ignore') as f:\n            hyp = yaml.safe_load(f)  # load hyps dict\n    LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))\n\n    # Save run settings\n    if not evolve:\n        with open(save_dir / 'hyp.yaml', 'w') as f:\n            yaml.safe_dump(hyp, f, sort_keys=False)\n        with open(save_dir / 'opt.yaml', 'w') as f:\n            yaml.safe_dump(vars(opt), f, sort_keys=False)\n\n    # Loggers\n    data_dict = None\n    if RANK in [-1, 0]:\n        loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # loggers instance\n        if loggers.wandb:\n            data_dict = loggers.wandb.data_dict\n            if resume:\n                
weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp\n\n        # Register actions\n        for k in methods(loggers):\n            callbacks.register_action(k, callback=getattr(loggers, k))\n\n    # Config\n    plots = not evolve  # create plots\n    cuda = device.type != 'cpu'\n    init_seeds(1 + RANK)\n    with torch_distributed_zero_first(LOCAL_RANK):\n        data_dict = data_dict or check_dataset(data)  # check if None\n    train_path, val_path = data_dict['train'], data_dict['val']\n    nc = 1 if single_cls else int(data_dict['nc'])  # number of classes\n    names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names\n    assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}'  # check\n    is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt')  # COCO dataset\n\n    # Model\n    check_suffix(weights, '.pt')  # check weights\n    pretrained = weights.endswith('.pt')\n    if pretrained:\n        with torch_distributed_zero_first(LOCAL_RANK):\n            weights = attempt_download(weights)  # download if not found locally\n        ckpt = torch.load(weights, map_location=device)  # load checkpoint\n        model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create\n        exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else []  # exclude keys\n        csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32\n        csd = intersect_dicts(csd, model.state_dict(), exclude=exclude)  # intersect\n        model.load_state_dict(csd, strict=False)  # load\n        LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}')  # report\n    else:\n        model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create\n\n    # Freeze\n    freeze = [f'model.{x}.' 
for x in range(freeze)]  # layers to freeze\n    for k, v in model.named_parameters():\n        v.requires_grad = True  # train all layers\n        if any(x in k for x in freeze):\n            LOGGER.info(f'freezing {k}')\n            v.requires_grad = False\n\n    # Image size\n    gs = max(int(model.stride.max()), 32)  # grid size (max stride)\n    imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2)  # verify imgsz is gs-multiple\n\n    # Batch size\n    if RANK == -1 and batch_size == -1:  # single-GPU only, estimate best batch size\n        batch_size = check_train_batch_size(model, imgsz)\n\n    # Optimizer\n    nbs = 64  # nominal batch size\n    accumulate = max(round(nbs / batch_size), 1)  # accumulate loss before optimizing\n    hyp['weight_decay'] *= batch_size * accumulate / nbs  # scale weight_decay\n    LOGGER.info(f\"Scaled weight_decay = {hyp['weight_decay']}\")\n\n    g0, g1, g2 = [], [], []  # optimizer parameter groups\n    for v in model.modules():\n        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):  # bias\n            g2.append(v.bias)\n        if isinstance(v, nn.BatchNorm2d):  # weight (no decay)\n            g0.append(v.weight)\n        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):  # weight (with decay)\n            g1.append(v.weight)\n\n    if opt.adam:\n        optimizer = Adam(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum\n    else:\n        optimizer = SGD(g0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)\n\n    optimizer.add_param_group({'params': g1, 'weight_decay': hyp['weight_decay']})  # add g1 with weight_decay\n    optimizer.add_param_group({'params': g2})  # add g2 (biases)\n    LOGGER.info(f\"{colorstr('optimizer:')} {type(optimizer).__name__} with parameter groups \"\n                f\"{len(g0)} weight, {len(g1)} weight (no decay), {len(g2)} bias\")\n    del g0, g1, g2\n\n    # Scheduler\n    if opt.linear_lr:\n        lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear\n    else:\n        lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']\n    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)  # plot_lr_scheduler(optimizer, scheduler, epochs)\n\n    # EMA\n    ema = ModelEMA(model) if RANK in [-1, 0] else None\n\n    # Resume\n    start_epoch, best_fitness = 0, 0.0\n    if pretrained:\n        # Optimizer\n        if ckpt['optimizer'] is not None:\n            optimizer.load_state_dict(ckpt['optimizer'])\n            best_fitness = ckpt['best_fitness']\n\n        # EMA\n        if ema and ckpt.get('ema'):\n            ema.ema.load_state_dict(ckpt['ema'].float().state_dict())\n            ema.updates = ckpt['updates']\n\n        # Epochs\n        start_epoch = ckpt['epoch'] + 1\n        if resume:\n            assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.'\n        if epochs < start_epoch:\n            LOGGER.info(f\"{weights} has been trained for {ckpt['epoch']} epochs. 
Fine-tuning for {epochs} more epochs.\")\n            epochs += ckpt['epoch']  # finetune additional epochs\n\n        del ckpt, csd\n\n    # DP mode\n    if cuda and RANK == -1 and torch.cuda.device_count() > 1:\n        LOGGER.warning('WARNING: DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\\n'\n                       'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')\n        model = torch.nn.DataParallel(model)\n\n    # SyncBatchNorm\n    if opt.sync_bn and cuda and RANK != -1:\n        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)\n        LOGGER.info('Using SyncBatchNorm()')\n\n    # Trainloader\n    train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls,\n                                              hyp=hyp, augment=True, cache=opt.cache, rect=opt.rect, rank=LOCAL_RANK,\n                                              workers=workers, image_weights=opt.image_weights, quad=opt.quad,\n                                              prefix=colorstr('train: '), shuffle=True)\n    mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max())  # max label class\n    nb = len(train_loader)  # number of batches\n    assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}'\n\n    # Process 0\n    if RANK in [-1, 0]:\n        val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls,\n                                       hyp=hyp, cache=None if noval else opt.cache, rect=True, rank=-1,\n                                       workers=workers, pad=0.5,\n                                       prefix=colorstr('val: '))[0]\n\n        if not resume:\n            labels = np.concatenate(dataset.labels, 0)\n            # c = torch.tensor(labels[:, 0])  # classes\n            # cf = torch.bincount(c.long(), minlength=nc) + 1.  
# frequency\n            # model._initialize_biases(cf.to(device))\n            if plots:\n                plot_labels(labels, names, save_dir)\n\n            # Anchors\n            if not opt.noautoanchor:\n                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)\n            model.half().float()  # pre-reduce anchor precision\n\n        callbacks.run('on_pretrain_routine_end')\n\n    # DDP mode\n    if cuda and RANK != -1:\n        model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)\n\n    # Model attributes\n    nl = de_parallel(model).model[-1].nl  # number of detection layers (to scale hyps)\n    hyp['box'] *= 3 / nl  # scale to layers\n    hyp['cls'] *= nc / 80 * 3 / nl  # scale to classes and layers\n    hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl  # scale to image size and layers\n    hyp['label_smoothing'] = opt.label_smoothing\n    model.nc = nc  # attach number of classes to model\n    model.hyp = hyp  # attach hyperparameters to model\n    model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc  # attach class weights\n    model.names = names\n\n    # Start training\n    t0 = time.time()\n    nw = max(round(hyp['warmup_epochs'] * nb), 1000)  # number of warmup iterations, max(3 epochs, 1k iterations)\n    # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training\n    last_opt_step = -1\n    maps = np.zeros(nc)  # mAP per class\n    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)\n    scheduler.last_epoch = start_epoch - 1  # do not move\n    scaler = amp.GradScaler(enabled=cuda)\n    stopper = EarlyStopping(patience=opt.patience)\n    compute_loss = ComputeLoss(model)  # init loss class\n    LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\\n'\n                f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\\n'\n                f\"Logging results to {colorstr('bold', save_dir)}\\n\"\n                f'Starting training for {epochs} epochs...')\n    for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------\n        model.train()\n\n        # Update image weights (optional, single-GPU only)\n        if opt.image_weights:\n            cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc  # class weights\n            iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights\n            dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx\n\n        # Update mosaic border (optional)\n        # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)\n        # dataset.mosaic_border = [b - imgsz, -b]  # height, width borders\n\n        mloss = torch.zeros(3, device=device)  # mean losses\n        if RANK != -1:\n            train_loader.sampler.set_epoch(epoch)\n        pbar = enumerate(train_loader)\n        LOGGER.info(('\\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size'))\n        if RANK in [-1, 0]:\n            pbar = tqdm(pbar, total=nb, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')  # progress bar\n        optimizer.zero_grad()\n        for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------\n            ni = i + nb * epoch  # number integrated batches (since train start)\n            imgs = imgs.to(device, non_blocking=True).float() / 255  # uint8 to float32, 0-255 to 0.0-1.0\n\n   
         # Warmup\n            if ni <= nw:\n                xi = [0, nw]  # x interp\n                # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0])  # iou loss ratio (obj_loss = 1.0 or iou)\n                accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())\n                for j, x in enumerate(optimizer.param_groups):\n                    # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0\n                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])\n                    if 'momentum' in x:\n                        x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])\n\n            # Multi-scale\n            if opt.multi_scale:\n                sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size\n                sf = sz / max(imgs.shape[2:])  # scale factor\n                if sf != 1:\n                    ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]]  # new shape (stretched to gs-multiple)\n                    imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)\n\n            # Forward\n            with amp.autocast(enabled=cuda):\n                pred = model(imgs)  # forward\n                loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size\n                if RANK != -1:\n                    loss *= WORLD_SIZE  # gradient averaged between devices in DDP mode\n                if opt.quad:\n                    loss *= 4.\n\n            # Backward\n            scaler.scale(loss).backward()\n\n            # Optimize\n            if ni - last_opt_step >= accumulate:\n                scaler.step(optimizer)  # optimizer.step\n                scaler.update()\n                optimizer.zero_grad()\n                if ema:\n                    ema.update(model)\n                last_opt_step = ni\n\n            # Log\n            if RANK in [-1, 0]:\n                mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses\n                mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G'  # (GB)\n                pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % (\n                    f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))\n                callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn)\n            # end batch ------------------------------------------------------------------------------------------------\n\n        # Scheduler\n        lr = [x['lr'] for x in optimizer.param_groups]  # for loggers\n        scheduler.step()\n\n        if RANK in [-1, 0]:\n            # mAP\n            callbacks.run('on_train_epoch_end', epoch=epoch)\n            ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])\n            final_epoch = (epoch + 1 == epochs) or stopper.possible_stop\n            if not noval or final_epoch:  # Calculate mAP\n                results, maps, _ = val.run(data_dict,\n                                           batch_size=batch_size // WORLD_SIZE * 2,\n                                           imgsz=imgsz,\n                                           model=ema.ema,\n                                           single_cls=single_cls,\n                                           dataloader=val_loader,\n                                           save_dir=save_dir,\n                                           
plots=False,\n                                           callbacks=callbacks,\n                                           compute_loss=compute_loss)\n\n            # Update best mAP\n            fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]\n            if fi > best_fitness:\n                best_fitness = fi\n            log_vals = list(mloss) + list(results) + lr\n            callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi)\n\n            # Save model\n            if (not nosave) or (final_epoch and not evolve):  # if save\n                ckpt = {'epoch': epoch,\n                        'best_fitness': best_fitness,\n                        'model': deepcopy(de_parallel(model)).half(),\n                        'ema': deepcopy(ema.ema).half(),\n                        'updates': ema.updates,\n                        'optimizer': optimizer.state_dict(),\n                        'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None,\n                        'date': datetime.now().isoformat()}\n\n                # Save last, best and delete\n                torch.save(ckpt, last)\n                if best_fitness == fi:\n                    torch.save(ckpt, best)\n                if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0):\n                    torch.save(ckpt, w / f'epoch{epoch}.pt')\n                del ckpt\n                callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)\n\n            # Stop Single-GPU\n            if RANK == -1 and stopper(epoch=epoch, fitness=fi):\n                break\n\n            # Stop DDP TODO: known issues shttps://github.com/ultralytics/yolov5/pull/4576\n            # stop = stopper(epoch=epoch, fitness=fi)\n            # if RANK == 0:\n            #    dist.broadcast_object_list([stop], 0)  # broadcast 'stop' to all ranks\n\n        # Stop DPP\n        # with torch_distributed_zero_first(RANK):\n        # if stop:\n        #    break  # must break all DDP ranks\n\n        # end epoch ----------------------------------------------------------------------------------------------------\n    # end training -----------------------------------------------------------------------------------------------------\n    if RANK in [-1, 0]:\n        LOGGER.info(f'\\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.')\n        for f in last, best:\n            if f.exists():\n                strip_optimizer(f)  # strip optimizers\n                if f is best:\n                    LOGGER.info(f'\\nValidating {f}...')\n                    results, _, _ = val.run(data_dict,\n                                            batch_size=batch_size // WORLD_SIZE * 2,\n                                            imgsz=imgsz,\n                                            model=attempt_load(f, device).half(),\n                                            iou_thres=0.65 if is_coco else 0.60,  # best pycocotools results at 0.65\n                                            single_cls=single_cls,\n                                            dataloader=val_loader,\n                                            save_dir=save_dir,\n                                            save_json=is_coco,\n                                            verbose=True,\n                                            plots=True,\n                                            callbacks=callbacks,\n                                            
compute_loss=compute_loss)  # val best model with plots\n                    if is_coco:\n                        callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi)\n\n        callbacks.run('on_train_end', last, best, plots, epoch, results)\n        LOGGER.info(f\"Results saved to {colorstr('bold', save_dir)}\")\n\n    torch.cuda.empty_cache()\n    return results\n\n\ndef parse_opt(known=False):\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')\n    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')\n    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')\n    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch.yaml', help='hyperparameters path')\n    parser.add_argument('--epochs', type=int, default=300)\n    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')\n    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')\n    parser.add_argument('--rect', action='store_true', help='rectangular training')\n    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')\n    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')\n    parser.add_argument('--noval', action='store_true', help='only validate final epoch')\n    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')\n    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')\n    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')\n    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in \"ram\" (default) or \"disk\"')\n    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')\n    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')\n    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')\n    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')\n    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')\n    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')\n    parser.add_argument('--workers', type=int, default=2, help='max dataloader workers (per RANK in DDP mode)')\n    parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')\n    parser.add_argument('--name', default='exp', help='save to project/name')\n    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')\n    parser.add_argument('--quad', action='store_true', help='quad dataloader')\n    parser.add_argument('--linear-lr', action='store_true', help='linear LR')\n    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')\n    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')\n    parser.add_argument('--freeze', type=int, default=0, help='Number of layers to freeze. 
backbone=10, all=24')\n    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')\n    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')\n\n    # Weights & Biases arguments\n    parser.add_argument('--entity', default=None, help='W&B: Entity')\n    parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, \"val\" option')\n    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')\n    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')\n\n    opt = parser.parse_known_args()[0] if known else parser.parse_args()\n    return opt\n\n\ndef main(opt, callbacks=Callbacks()):\n    # Checks\n    if RANK in [-1, 0]:\n        print_args(FILE.stem, opt)\n        check_git_status()\n        check_requirements(exclude=['thop'])\n\n    # Resume\n    if opt.resume and not check_wandb_resume(opt) and not opt.evolve:  # resume an interrupted run\n        ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run()  # specified or most recent path\n        assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'\n        with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f:\n            opt = argparse.Namespace(**yaml.safe_load(f))  # replace\n        opt.cfg, opt.weights, opt.resume = '', ckpt, True  # reinstate\n        LOGGER.info(f'Resuming training from {ckpt}')\n    else:\n        opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \\\n            check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project)  # checks\n        assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'\n        if opt.evolve:\n            opt.project = str(ROOT / 'runs/evolve')\n            opt.exist_ok, opt.resume = opt.resume, False  # pass resume to exist_ok and disable resume\n        opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))\n\n    # DDP mode\n    device = select_device(opt.device, batch_size=opt.batch_size)\n    if LOCAL_RANK != -1:\n        assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'\n        assert opt.batch_size % WORLD_SIZE == 0, '--batch-size must be multiple of CUDA device count'\n        assert not opt.image_weights, '--image-weights argument is not compatible with DDP training'\n        assert not opt.evolve, '--evolve argument is not compatible with DDP training'\n        torch.cuda.set_device(LOCAL_RANK)\n        device = torch.device('cuda', LOCAL_RANK)\n        dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\")\n\n    # Train\n    if not opt.evolve:\n        train(opt.hyp, opt, device, callbacks)\n        if WORLD_SIZE > 1 and RANK == 0:\n            LOGGER.info('Destroying process group... 
')\n            dist.destroy_process_group()\n\n    # Evolve hyperparameters (optional)\n    else:\n        # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)\n        meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)\n                'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)\n                'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1\n                'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay\n                'warmup_epochs': (1, 0.0, 5.0),  # warmup epochs (fractions ok)\n                'warmup_momentum': (1, 0.0, 0.95),  # warmup initial momentum\n                'warmup_bias_lr': (1, 0.0, 0.2),  # warmup initial bias lr\n                'box': (1, 0.02, 0.2),  # box loss gain\n                'cls': (1, 0.2, 4.0),  # cls loss gain\n                'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight\n                'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)\n                'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight\n                'iou_t': (0, 0.1, 0.7),  # IoU training threshold\n                'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold\n                'anchors': (2, 2.0, 10.0),  # anchors per output grid (0 to ignore)\n                'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)\n                'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)\n                'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)\n                'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)\n                'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)\n                'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)\n                'scale': (1, 0.0, 0.9),  # image scale (+/- gain)\n                'shear': (1, 0.0, 10.0),  # image shear (+/- deg)\n                'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001\n                'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)\n                'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)\n                'mosaic': (1, 0.0, 1.0),  # image mixup (probability)\n                'mixup': (1, 0.0, 1.0),  # image mixup (probability)\n                'copy_paste': (1, 0.0, 1.0)}  # segment copy-paste (probability)\n\n        with open(opt.hyp, errors='ignore') as f:\n            hyp = yaml.safe_load(f)  # load hyps dict\n            if 'anchors' not in hyp:  # anchors commented in hyp.yaml\n                hyp['anchors'] = 3\n        opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir)  # only val/save final epoch\n        # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices\n        evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'\n        if opt.bucket:\n            os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {save_dir}')  # download evolve.csv if exists\n\n        for _ in range(opt.evolve):  # generations to evolve\n            if evolve_csv.exists():  # if evolve.csv exists: select best hyps and mutate\n                # Select parent(s)\n                parent = 'single'  # parent selection method: 'single' or 'weighted'\n                x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1)\n                n = min(5, len(x))  # number of previous results to consider\n                x = x[np.argsort(-fitness(x))][:n]  # top 
n mutations\n                w = fitness(x) - fitness(x).min() + 1E-6  # weights (sum > 0)\n                if parent == 'single' or len(x) == 1:\n                    # x = x[random.randint(0, n - 1)]  # random selection\n                    x = x[random.choices(range(n), weights=w)[0]]  # weighted selection\n                elif parent == 'weighted':\n                    x = (x * w.reshape(n, 1)).sum(0) / w.sum()  # weighted combination\n\n                # Mutate\n                mp, s = 0.8, 0.2  # mutation probability, sigma\n                npr = np.random\n                npr.seed(int(time.time()))\n                g = np.array([meta[k][0] for k in hyp.keys()])  # gains 0-1\n                ng = len(meta)\n                v = np.ones(ng)\n                while all(v == 1):  # mutate until a change occurs (prevent duplicates)\n                    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)\n                for i, k in enumerate(hyp.keys()):  # plt.hist(v.ravel(), 300)\n                    hyp[k] = float(x[i + 7] * v[i])  # mutate\n\n            # Constrain to limits\n            for k, v in meta.items():\n                hyp[k] = max(hyp[k], v[1])  # lower limit\n                hyp[k] = min(hyp[k], v[2])  # upper limit\n                hyp[k] = round(hyp[k], 5)  # significant digits\n\n            # Train mutation\n            results = train(hyp.copy(), opt, device, callbacks)\n\n            # Write mutation results\n            print_mutation(results, hyp.copy(), save_dir, opt.bucket)\n\n        # Plot results\n        plot_evolve(evolve_csv)\n        LOGGER.info(f'Hyperparameter evolution finished\\n'\n                    f\"Results saved to {colorstr('bold', save_dir)}\\n\"\n                    f'Use best hyperparameters example: $ python train.py --hyp {evolve_yaml}')\n\n\ndef run(**kwargs):\n    # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt')\n    opt = parse_opt(True)\n    for k, v in kwargs.items():\n        setattr(opt, k, v)\n    main(opt)\n\n\nif __name__ == \"__main__\":\n    opt = parse_opt()\n    main(opt)\n"
  },
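The train.py entry points above (`parse_opt`, `main`, `run`) also support programmatic use. A minimal sketch, following the Usage note in `run()`'s own docstring; the argument values are illustrative and it assumes execution from the yolov5 repository root with its requirements installed:

```python
# Programmatic training, mirroring the Usage comment in train.run() above.
# Any keyword accepted by parse_opt() (e.g. epochs, batch_size) is forwarded the same way,
# since run() simply setattr()s each kwarg onto the parsed options before calling main().
import train

train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt', epochs=3)
```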
  {
    "path": "tutorial.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"YOLOv5 Tutorial\",\n      \"provenance\": [],\n      \"collapsed_sections\": [],\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"accelerator\": \"GPU\",\n    \"widgets\": {\n      \"application/vnd.jupyter.widget-state+json\": {\n        \"eb95db7cae194218b3fcefb439b6352f\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HBoxModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HBoxView\",\n            \"_dom_classes\": [],\n            \"_model_name\": \"HBoxModel\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"box_style\": \"\",\n            \"layout\": \"IPY_MODEL_769ecde6f2e64bacb596ce972f8d3d2d\",\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"children\": [\n              \"IPY_MODEL_384a001876054c93b0af45cd1e960bfe\",\n              \"IPY_MODEL_dded0aeae74440f7ba2ffa0beb8dd612\",\n              \"IPY_MODEL_5296d28be75740b2892ae421bbec3657\"\n            ]\n          }\n        },\n        \"769ecde6f2e64bacb596ce972f8d3d2d\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"384a001876054c93b0af45cd1e960bfe\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          
\"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_9f09facb2a6c4a7096810d327c8b551c\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \"100%\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_25621cff5d16448cb7260e839fd0f543\"\n          }\n        },\n        \"dded0aeae74440f7ba2ffa0beb8dd612\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"FloatProgressModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"ProgressView\",\n            \"style\": \"IPY_MODEL_0ce7164fc0c74bb9a2b5c7037375a727\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"FloatProgressModel\",\n            \"bar_style\": \"success\",\n            \"max\": 818322941,\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": 818322941,\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"orientation\": \"horizontal\",\n            \"min\": 0,\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_c4c4593c10904cb5b8a5724d60c7e181\"\n          }\n        },\n        \"5296d28be75740b2892ae421bbec3657\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_473371611126476c88d5d42ec7031ed6\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \" 780M/780M [00:11&lt;00:00, 91.9MB/s]\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_65efdfd0d26c46e79c8c5ff3b77126cc\"\n          }\n        },\n        \"9f09facb2a6c4a7096810d327c8b551c\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"25621cff5d16448cb7260e839fd0f543\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n       
   \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"0ce7164fc0c74bb9a2b5c7037375a727\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"ProgressStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"ProgressStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"bar_color\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"c4c4593c10904cb5b8a5724d60c7e181\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            
\"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"473371611126476c88d5d42ec7031ed6\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"65efdfd0d26c46e79c8c5ff3b77126cc\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        }\n      }\n    }\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": 
\"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"t6MPjfT5NrKQ\"\n      },\n      \"source\": [\n        \"<a align=\\\"left\\\" href=\\\"https://ultralytics.com/yolov5\\\" target=\\\"_blank\\\">\\n\",\n        \"<img width=\\\"1024\\\", src=\\\"https://user-images.githubusercontent.com/26833433/125273437-35b3fc00-e30d-11eb-9079-46f313325424.png\\\"></a>\\n\",\n        \"\\n\",\n        \"This is the **official YOLOv5 🚀 notebook** by **Ultralytics**, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). \\n\",\n        \"For more information please visit https://github.com/ultralytics/yolov5 and https://ultralytics.com. Thank you!\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"7mGmQbAO5pQb\"\n      },\n      \"source\": [\n        \"# Setup\\n\",\n        \"\\n\",\n        \"Clone repo, install dependencies and check PyTorch and GPU.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"wbvMlHd_QwMG\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"3809e5a9-dd41-4577-fe62-5531abf7cca2\"\n      },\n      \"source\": [\n        \"!git clone https://github.com/ultralytics/yolov5  # clone\\n\",\n        \"%cd yolov5\\n\",\n        \"%pip install -qr requirements.txt  # install\\n\",\n        \"\\n\",\n        \"import torch\\n\",\n        \"from yolov5 import utils\\n\",\n        \"display = utils.notebook_init()  # checks\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"YOLOv5 🚀 v6.0-48-g84a8099 torch 1.10.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)\\n\",\n            \"Setup complete ✅\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"4JnkELT0cIJg\"\n      },\n      \"source\": [\n        \"# 1. Inference\\n\",\n        \"\\n\",\n        \"`detect.py` runs YOLOv5 inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. 
Example inference sources are:\\n\",\n        \"\\n\",\n        \"```shell\\n\",\n        \"python detect.py --source 0  # webcam\\n\",\n        \"                          img.jpg  # image \\n\",\n        \"                          vid.mp4  # video\\n\",\n        \"                          path/  # directory\\n\",\n        \"                          path/*.jpg  # glob\\n\",\n        \"                          'https://youtu.be/Zgi9g1ksQHc'  # YouTube\\n\",\n        \"                          'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream\\n\",\n        \"```\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"zR9ZbuQCH7FX\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"8f7e6588-215d-4ebd-93af-88b871e770a7\"\n      },\n      \"source\": [\n        \"!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images\\n\",\n        \"display.Image(filename='runs/detect/exp/zidane.jpg', width=600)\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"\\u001b[34m\\u001b[1mdetect: \\u001b[0mweights=['yolov5s.pt'], source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False\\n\",\n            \"YOLOv5 🚀 v6.0-48-g84a8099 torch 1.10.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)\\n\",\n            \"\\n\",\n            \"Fusing layers... \\n\",\n            \"Model Summary: 213 layers, 7225885 parameters, 0 gradients\\n\",\n            \"image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, Done. (0.007s)\\n\",\n            \"image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 1 tie, Done. (0.007s)\\n\",\n            \"Speed: 0.5ms pre-process, 6.9ms inference, 1.3ms NMS per image at shape (1, 3, 640, 640)\\n\",\n            \"Results saved to \\u001b[1mruns/detect/exp\\u001b[0m\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"hkAzDWJ7cWTr\"\n      },\n      \"source\": [\n        \"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\\n\",\n        \"<img align=\\\"left\\\" src=\\\"https://user-images.githubusercontent.com/26833433/127574988-6a558aa1-d268-44b9-bf6b-62d4c605cc72.jpg\\\" width=\\\"600\\\">\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"0eq1SMWl6Sfn\"\n      },\n      \"source\": [\n        \"# 2. Validate\\n\",\n        \"Validate a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. 
Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation.\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"eyTZYGgRjnMc\"\n      },\n      \"source\": [\n        \"## COCO val\\n\",\n        \"Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"WQPtK1QYVaD_\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 48,\n          \"referenced_widgets\": [\n            \"eb95db7cae194218b3fcefb439b6352f\",\n            \"769ecde6f2e64bacb596ce972f8d3d2d\",\n            \"384a001876054c93b0af45cd1e960bfe\",\n            \"dded0aeae74440f7ba2ffa0beb8dd612\",\n            \"5296d28be75740b2892ae421bbec3657\",\n            \"9f09facb2a6c4a7096810d327c8b551c\",\n            \"25621cff5d16448cb7260e839fd0f543\",\n            \"0ce7164fc0c74bb9a2b5c7037375a727\",\n            \"c4c4593c10904cb5b8a5724d60c7e181\",\n            \"473371611126476c88d5d42ec7031ed6\",\n            \"65efdfd0d26c46e79c8c5ff3b77126cc\"\n          ]\n        },\n        \"outputId\": \"bcf9a448-1f9b-4a41-ad49-12f181faf05a\"\n      },\n      \"source\": [\n        \"# Download COCO val\\n\",\n        \"torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip')\\n\",\n        \"!unzip -q tmp.zip -d ../datasets && rm tmp.zip\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"display_data\",\n          \"data\": {\n            \"application/vnd.jupyter.widget-view+json\": {\n              \"model_id\": \"eb95db7cae194218b3fcefb439b6352f\",\n              \"version_minor\": 0,\n              \"version_major\": 2\n            },\n            \"text/plain\": [\n              \"  0%|          | 0.00/780M [00:00<?, ?B/s]\"\n            ]\n          },\n          \"metadata\": {}\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"X58w8JLpMnjH\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"74f1dfa9-6b6d-4b36-f67e-bbae243869f9\"\n      },\n      \"source\": [\n        \"# Run YOLOv5x on COCO val\\n\",\n        \"!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"\\u001b[34m\\u001b[1mval: \\u001b[0mdata=/content/yolov5/data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True\\n\",\n            \"YOLOv5 🚀 v6.0-48-g84a8099 torch 1.10.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)\\n\",\n            \"\\n\",\n            \"Downloading https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5x.pt to yolov5x.pt...\\n\",\n            \"100% 166M/166M [00:03<00:00, 54.1MB/s]\\n\",\n            \"\\n\",\n            \"Fusing layers... 
\\n\",\n            \"Model Summary: 444 layers, 86705005 parameters, 0 gradients\\n\",\n            \"\\u001b[34m\\u001b[1mval: \\u001b[0mScanning '../datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:01<00:00, 2636.64it/s]\\n\",\n            \"\\u001b[34m\\u001b[1mval: \\u001b[0mNew cache created: ../datasets/coco/val2017.cache\\n\",\n            \"               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 157/157 [01:12<00:00,  2.17it/s]\\n\",\n            \"                 all       5000      36335      0.729       0.63      0.683      0.496\\n\",\n            \"Speed: 0.1ms pre-process, 4.9ms inference, 1.9ms NMS per image at shape (32, 3, 640, 640)\\n\",\n            \"\\n\",\n            \"Evaluating pycocotools mAP... saving runs/val/exp/yolov5x_predictions.json...\\n\",\n            \"loading annotations into memory...\\n\",\n            \"Done (t=0.46s)\\n\",\n            \"creating index...\\n\",\n            \"index created!\\n\",\n            \"Loading and preparing results...\\n\",\n            \"DONE (t=5.15s)\\n\",\n            \"creating index...\\n\",\n            \"index created!\\n\",\n            \"Running per image evaluation...\\n\",\n            \"Evaluate annotation type *bbox*\\n\",\n            \"DONE (t=90.39s).\\n\",\n            \"Accumulating evaluation results...\\n\",\n            \"DONE (t=14.54s).\\n\",\n            \" Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.507\\n\",\n            \" Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.689\\n\",\n            \" Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.552\\n\",\n            \" Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.345\\n\",\n            \" Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.559\\n\",\n            \" Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.652\\n\",\n            \" Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.381\\n\",\n            \" Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.630\\n\",\n            \" Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.682\\n\",\n            \" Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.526\\n\",\n            \" Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.732\\n\",\n            \" Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.829\\n\",\n            \"Results saved to \\u001b[1mruns/val/exp\\u001b[0m\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"rc_KbFk0juX2\"\n      },\n      \"source\": [\n        \"## COCO test\\n\",\n        \"Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (**20,000 images, no labels**). 
Results are saved to a `*.json` file which should be **zipped** and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"V0AJnSeCIHyJ\"\n      },\n      \"source\": [\n        \"# Download COCO test-dev2017\\n\",\n        \"torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017labels.zip', 'tmp.zip')\\n\",\n        \"!unzip -q tmp.zip -d ../datasets && rm tmp.zip\\n\",\n        \"!f=\\\"test2017.zip\\\" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f -d ../datasets/coco/images\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"29GJXAP_lPrt\"\n      },\n      \"source\": [\n        \"# Run YOLOv5x on COCO test\\n\",\n        \"!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half --task test\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"ZY2VXXXu74w5\"\n      },\n      \"source\": [\n        \"# 3. Train\\n\",\n        \"\\n\",\n        \"<p align=\\\"\\\"><a href=\\\"https://roboflow.com/?ref=ultralytics\\\"><img width=\\\"1000\\\" src=\\\"https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/615627e5824c9c6195abfda9_computer-vision-cycle.png\\\"/></a></p>\\n\",\n        \"Close the active learning loop by sampling images from your inference conditions with the `roboflow` pip package\\n\",\n        \"<br><br>\\n\",\n        \"\\n\",\n        \"Train a YOLOv5s model on the [COCO128](https://www.kaggle.com/ultralytics/coco128) dataset with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`.\\n\",\n        \"\\n\",\n        \"- **Pretrained [Models](https://github.com/ultralytics/yolov5/tree/master/models)** are downloaded\\n\",\n        \"automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases)\\n\",\n        \"- **[Datasets](https://github.com/ultralytics/yolov5/tree/master/data)** available for autodownload include: [COCO](https://github.com/ultralytics/yolov5/blob/master/data/coco.yaml), [COCO128](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), [VOC](https://github.com/ultralytics/yolov5/blob/master/data/VOC.yaml), [Argoverse](https://github.com/ultralytics/yolov5/blob/master/data/Argoverse.yaml), [VisDrone](https://github.com/ultralytics/yolov5/blob/master/data/VisDrone.yaml), [GlobalWheat](https://github.com/ultralytics/yolov5/blob/master/data/GlobalWheat2020.yaml), [xView](https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml), [Objects365](https://github.com/ultralytics/yolov5/blob/master/data/Objects365.yaml), [SKU-110K](https://github.com/ultralytics/yolov5/blob/master/data/SKU-110K.yaml).\\n\",\n        \"- **Training Results** are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.\\n\",\n        \"<br><br>\\n\",\n        \"\\n\",\n        \"## Train on Custom Data with Roboflow 🌟 NEW\\n\",\n        \"\\n\",\n        \"[Roboflow](https://roboflow.com/?ref=ultralytics) enables you to easily **organize, label, and prepare** a high quality dataset with your own custom data. 
Roboflow also makes it easy to establish an active learning pipeline, collaborate with your team on dataset improvement, and integrate directly into your model building workflow with the `roboflow` pip package.\\n\",\n        \"\\n\",\n        \"- Custom Training Example: [https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/](https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/?ref=ultralytics)\\n\",\n        \"- Custom Training Notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/yolov5-custom-training-tutorial/blob/main/yolov5-custom-training.ipynb)\\n\",\n        \"<br>\\n\",\n        \"\\n\",\n        \"<p align=\\\"\\\"><a href=\\\"https://roboflow.com/?ref=ultralytics\\\"><img width=\\\"480\\\" src=\\\"https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/6152a275ad4b4ac20cd2e21a_roboflow-annotate.gif\\\"/></a></p>Label images lightning fast (including with model-assisted labeling)\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"bOy5KI2ncnWd\"\n      },\n      \"source\": [\n        \"# Tensorboard  (optional)\\n\",\n        \"%load_ext tensorboard\\n\",\n        \"%tensorboard --logdir runs/train\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"2fLAV42oNb7M\"\n      },\n      \"source\": [\n        \"# Weights & Biases  (optional)\\n\",\n        \"%pip install -q wandb\\n\",\n        \"import wandb\\n\",\n        \"wandb.login()\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"1NcFxRcFdJ_O\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"8724d13d-6711-4a12-d96a-1c655e5c3549\"\n      },\n      \"source\": [\n        \"# Train YOLOv5s on COCO128 for 3 epochs\\n\",\n        \"!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"\\u001b[34m\\u001b[1mtrain: \\u001b[0mweights=yolov5s.pt, cfg=, data=coco128.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, patience=100, freeze=0, save_period=-1, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest\\n\",\n            \"\\u001b[34m\\u001b[1mgithub: \\u001b[0mup to date with https://github.com/ultralytics/yolov5 ✅\\n\",\n            \"YOLOv5 🚀 v6.0-48-g84a8099 torch 1.10.0+cu102 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)\\n\",\n            \"\\n\",\n            \"\\u001b[34m\\u001b[1mhyperparameters: \\u001b[0mlr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, 
perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0\\n\",\n            \"\\u001b[34m\\u001b[1mWeights & Biases: \\u001b[0mrun 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs (RECOMMENDED)\\n\",\n            \"\\u001b[34m\\u001b[1mTensorBoard: \\u001b[0mStart with 'tensorboard --logdir runs/train', view at http://localhost:6006/\\n\",\n            \"\\n\",\n            \"                 from  n    params  module                                  arguments                     \\n\",\n            \"  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]              \\n\",\n            \"  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]                \\n\",\n            \"  2                -1  1     18816  models.common.C3                        [64, 64, 1]                   \\n\",\n            \"  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]               \\n\",\n            \"  4                -1  2    115712  models.common.C3                        [128, 128, 2]                 \\n\",\n            \"  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]              \\n\",\n            \"  6                -1  3    625152  models.common.C3                        [256, 256, 3]                 \\n\",\n            \"  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]              \\n\",\n            \"  8                -1  1   1182720  models.common.C3                        [512, 512, 1]                 \\n\",\n            \"  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]                 \\n\",\n            \" 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]              \\n\",\n            \" 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          \\n\",\n            \" 12           [-1, 6]  1         0  models.common.Concat                    [1]                           \\n\",\n            \" 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]          \\n\",\n            \" 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]              \\n\",\n            \" 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          \\n\",\n            \" 16           [-1, 4]  1         0  models.common.Concat                    [1]                           \\n\",\n            \" 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]          \\n\",\n            \" 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]              \\n\",\n            \" 19          [-1, 14]  1         0  models.common.Concat                    [1]                           \\n\",\n            \" 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]          \\n\",\n            \" 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]              \\n\",\n            \" 22          [-1, 10]  1         0  models.common.Concat                    [1]                           \\n\",\n            \" 23                -1  1   1182720  models.common.C3        
                [512, 512, 1, False]          \\n\",\n            \" 24      [17, 20, 23]  1    229245  models.yolo.Detect                      [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]\\n\",\n            \"Model Summary: 270 layers, 7235389 parameters, 7235389 gradients, 16.5 GFLOPs\\n\",\n            \"\\n\",\n            \"Transferred 349/349 items from yolov5s.pt\\n\",\n            \"Scaled weight_decay = 0.0005\\n\",\n            \"\\u001b[34m\\u001b[1moptimizer:\\u001b[0m SGD with parameter groups 57 weight, 60 weight (no decay), 60 bias\\n\",\n            \"\\u001b[34m\\u001b[1malbumentations: \\u001b[0mversion 1.0.3 required by YOLOv5, but version 0.1.12 is currently installed\\n\",\n            \"\\u001b[34m\\u001b[1mtrain: \\u001b[0mScanning '../datasets/coco128/labels/train2017.cache' images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<?, ?it/s]\\n\",\n            \"\\u001b[34m\\u001b[1mtrain: \\u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:00<00:00, 296.04it/s]\\n\",\n            \"\\u001b[34m\\u001b[1mval: \\u001b[0mScanning '../datasets/coco128/labels/train2017.cache' images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<?, ?it/s]\\n\",\n            \"\\u001b[34m\\u001b[1mval: \\u001b[0mCaching images (0.1GB ram): 100% 128/128 [00:01<00:00, 121.58it/s]\\n\",\n            \"Plotting labels... \\n\",\n            \"\\n\",\n            \"\\u001b[34m\\u001b[1mautoanchor: \\u001b[0mAnalyzing anchors... anchors/target = 4.27, Best Possible Recall (BPR) = 0.9935\\n\",\n            \"Image sizes 640 train, 640 val\\n\",\n            \"Using 2 dataloader workers\\n\",\n            \"Logging results to \\u001b[1mruns/train/exp\\u001b[0m\\n\",\n            \"Starting training for 3 epochs...\\n\",\n            \"\\n\",\n            \"     Epoch   gpu_mem       box       obj       cls    labels  img_size\\n\",\n            \"       0/2     3.62G   0.04621    0.0711   0.02112       203       640: 100% 8/8 [00:04<00:00,  1.99it/s]\\n\",\n            \"               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 4/4 [00:00<00:00,  4.37it/s]\\n\",\n            \"                 all        128        929      0.655      0.547      0.622       0.41\\n\",\n            \"\\n\",\n            \"     Epoch   gpu_mem       box       obj       cls    labels  img_size\\n\",\n            \"       1/2     5.31G   0.04564   0.06898   0.02116       143       640: 100% 8/8 [00:01<00:00,  4.77it/s]\\n\",\n            \"               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 4/4 [00:00<00:00,  4.27it/s]\\n\",\n            \"                 all        128        929       0.68      0.554      0.632      0.419\\n\",\n            \"\\n\",\n            \"     Epoch   gpu_mem       box       obj       cls    labels  img_size\\n\",\n            \"       2/2     5.31G   0.04487   0.06883   0.01998       253       640: 100% 8/8 [00:01<00:00,  4.91it/s]\\n\",\n            \"               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 4/4 [00:00<00:00,  4.30it/s]\\n\",\n            \"                 all        128        929       0.71      0.544      0.629      0.423\\n\",\n            \"\\n\",\n            \"3 epochs completed in 0.003 hours.\\n\",\n            \"Optimizer stripped from runs/train/exp/weights/last.pt, 14.9MB\\n\",\n            \"Optimizer stripped from 
runs/train/exp/weights/best.pt, 14.9MB\\n\",\n            \"\\n\",\n            \"Validating runs/train/exp/weights/best.pt...\\n\",\n            \"Fusing layers... \\n\",\n            \"Model Summary: 213 layers, 7225885 parameters, 0 gradients, 16.5 GFLOPs\\n\",\n            \"               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 4/4 [00:03<00:00,  1.04it/s]\\n\",\n            \"                 all        128        929       0.71      0.544       0.63      0.423\\n\",\n            \"              person        128        254      0.816      0.669      0.774      0.507\\n\",\n            \"             bicycle        128          6      0.799      0.667      0.614      0.371\\n\",\n            \"                 car        128         46      0.803      0.355      0.486      0.209\\n\",\n            \"          motorcycle        128          5      0.704        0.6      0.791      0.583\\n\",\n            \"            airplane        128          6          1      0.795      0.995      0.717\\n\",\n            \"                 bus        128          7      0.656      0.714       0.72      0.606\\n\",\n            \"               train        128          3      0.852          1      0.995      0.682\\n\",\n            \"               truck        128         12      0.521       0.25      0.395      0.215\\n\",\n            \"                boat        128          6      0.795      0.333      0.445      0.137\\n\",\n            \"       traffic light        128         14      0.576      0.143       0.24      0.161\\n\",\n            \"           stop sign        128          2      0.636        0.5      0.828      0.713\\n\",\n            \"               bench        128          9      0.972      0.444      0.575       0.25\\n\",\n            \"                bird        128         16      0.939      0.968      0.988      0.645\\n\",\n            \"                 cat        128          4      0.984       0.75      0.822      0.694\\n\",\n            \"                 dog        128          9      0.888      0.667      0.903       0.54\\n\",\n            \"               horse        128          2      0.689          1      0.995      0.697\\n\",\n            \"            elephant        128         17       0.96      0.882      0.943      0.681\\n\",\n            \"                bear        128          1      0.549          1      0.995      0.995\\n\",\n            \"               zebra        128          4       0.86          1      0.995      0.952\\n\",\n            \"             giraffe        128          9      0.822      0.778      0.905       0.57\\n\",\n            \"            backpack        128          6          1      0.309      0.457      0.195\\n\",\n            \"            umbrella        128         18      0.775      0.576       0.74      0.418\\n\",\n            \"             handbag        128         19      0.628      0.105      0.167      0.111\\n\",\n            \"                 tie        128          7       0.96      0.571      0.701      0.441\\n\",\n            \"            suitcase        128          4          1      0.895      0.995      0.621\\n\",\n            \"             frisbee        128          5      0.641        0.8      0.798      0.664\\n\",\n            \"                skis        128          1      0.627          1      0.995      0.497\\n\",\n            \"           snowboard        128          7      0.988      0.714      0.768      0.556\\n\",\n            
\"         sports ball        128          6      0.671        0.5      0.579      0.339\\n\",\n            \"                kite        128         10      0.631      0.515      0.598      0.221\\n\",\n            \"        baseball bat        128          4       0.47      0.456      0.277      0.137\\n\",\n            \"      baseball glove        128          7      0.459      0.429      0.334      0.182\\n\",\n            \"          skateboard        128          5        0.7       0.48      0.736      0.548\\n\",\n            \"       tennis racket        128          7      0.559      0.571      0.538      0.315\\n\",\n            \"              bottle        128         18      0.607      0.389      0.484      0.282\\n\",\n            \"          wine glass        128         16      0.722      0.812       0.82      0.385\\n\",\n            \"                 cup        128         36      0.881      0.361      0.532      0.312\\n\",\n            \"                fork        128          6      0.384      0.167      0.239      0.191\\n\",\n            \"               knife        128         16      0.908      0.616      0.681      0.443\\n\",\n            \"               spoon        128         22      0.836      0.364      0.536      0.264\\n\",\n            \"                bowl        128         28      0.793      0.536      0.633      0.471\\n\",\n            \"              banana        128          1          0          0      0.142     0.0995\\n\",\n            \"            sandwich        128          2          0          0     0.0951     0.0717\\n\",\n            \"              orange        128          4          1          0       0.67      0.317\\n\",\n            \"            broccoli        128         11      0.345      0.182      0.283      0.243\\n\",\n            \"              carrot        128         24      0.688      0.459      0.612      0.402\\n\",\n            \"             hot dog        128          2      0.424      0.771      0.497      0.473\\n\",\n            \"               pizza        128          5      0.622          1      0.824      0.551\\n\",\n            \"               donut        128         14      0.703          1      0.952      0.853\\n\",\n            \"                cake        128          4      0.733          1      0.945      0.777\\n\",\n            \"               chair        128         35      0.512      0.486      0.488      0.222\\n\",\n            \"               couch        128          6       0.68       0.36      0.746      0.406\\n\",\n            \"        potted plant        128         14      0.797      0.714      0.808      0.482\\n\",\n            \"                 bed        128          3          1          0      0.474      0.318\\n\",\n            \"        dining table        128         13      0.852      0.445      0.478      0.315\\n\",\n            \"              toilet        128          2      0.512        0.5      0.554      0.487\\n\",\n            \"                  tv        128          2      0.754          1      0.995      0.895\\n\",\n            \"              laptop        128          3          1          0       0.39      0.147\\n\",\n            \"               mouse        128          2          1          0     0.0283     0.0226\\n\",\n            \"              remote        128          8      0.747      0.625      0.636      0.488\\n\",\n            \"          cell phone        128          8      0.555      0.166      0.417      0.222\\n\",\n   
         \"           microwave        128          3      0.417          1      0.995      0.732\\n\",\n            \"                oven        128          5       0.37        0.4      0.432      0.249\\n\",\n            \"                sink        128          6      0.356      0.167      0.269      0.149\\n\",\n            \"        refrigerator        128          5      0.705        0.8      0.814       0.45\\n\",\n            \"                book        128         29      0.628      0.138      0.298      0.136\\n\",\n            \"               clock        128          9      0.857      0.778      0.893      0.574\\n\",\n            \"                vase        128          2      0.242          1      0.663      0.622\\n\",\n            \"            scissors        128          1          1          0     0.0207    0.00207\\n\",\n            \"          teddy bear        128         21      0.847      0.381      0.622      0.345\\n\",\n            \"          toothbrush        128          5       0.99        0.6      0.662       0.45\\n\",\n            \"Results saved to \\u001b[1mruns/train/exp\\u001b[0m\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"15glLzbQx5u0\"\n      },\n      \"source\": [\n        \"# 4. Visualize\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"DLI1JmHU7B0l\"\n      },\n      \"source\": [\n        \"## Weights & Biases Logging 🌟 NEW\\n\",\n        \"\\n\",\n        \"[Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_notebook) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well improved visibility and collaboration for teams. To enable W&B `pip install wandb`, and then train normally (you will be guided through setup on first use). \\n\",\n        \"\\n\",\n        \"During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home?utm_campaign=repo_yolo_notebook), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289). \\n\",\n        \"\\n\",\n        \"<p align=\\\"left\\\"><img width=\\\"900\\\" alt=\\\"Weights & Biases dashboard\\\" src=\\\"https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png\\\"></p>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"-WPvRbS5Swl6\"\n      },\n      \"source\": [\n        \"## Local Logging\\n\",\n        \"\\n\",\n        \"All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. 
Note an Ultralytics **Mosaic Dataloader** is used for training (shown below), which combines 4 images into 1 mosaic during training.\\n\",\n        \"\\n\",\n        \"> <img src=\\\"https://user-images.githubusercontent.com/26833433/131255960-b536647f-7c61-4f60-bbc5-cb2544d71b2a.jpg\\\" width=\\\"700\\\">  \\n\",\n        \"`train_batch0.jpg` shows train batch 0 mosaics and labels\\n\",\n        \"\\n\",\n        \"> <img src=\\\"https://user-images.githubusercontent.com/26833433/131256748-603cafc7-55d1-4e58-ab26-83657761aed9.jpg\\\" width=\\\"700\\\">  \\n\",\n        \"`test_batch0_labels.jpg` shows val batch 0 labels\\n\",\n        \"\\n\",\n        \"> <img src=\\\"https://user-images.githubusercontent.com/26833433/131256752-3f25d7a5-7b0f-4bb3-ab78-46343c3800fe.jpg\\\" width=\\\"700\\\">  \\n\",\n        \"`test_batch0_pred.jpg` shows val batch 0 _predictions_\\n\",\n        \"\\n\",\n        \"Training results are automatically logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) as `results.csv`, which is plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:\\n\",\n        \"\\n\",\n        \"```python\\n\",\n        \"from utils.plots import plot_results \\n\",\n        \"plot_results('path/to/results.csv')  # plot 'results.csv' as 'results.png'\\n\",\n        \"```\\n\",\n        \"\\n\",\n        \"<img align=\\\"left\\\" width=\\\"800\\\" alt=\\\"COCO128 Training Results\\\" src=\\\"https://user-images.githubusercontent.com/26833433/126906780-8c5e2990-6116-4de6-b78a-367244a33ccf.png\\\">\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"Zelyeqbyt3GD\"\n      },\n      \"source\": [\n        \"# Environments\\n\",\n        \"\\n\",\n        \"YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\\n\",\n        \"\\n\",\n        \"- **Google Colab and Kaggle** notebooks with free GPU: <a href=\\\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"></a> <a href=\\\"https://www.kaggle.com/ultralytics/yolov5\\\"><img src=\\\"https://kaggle.com/static/images/open-in-kaggle.svg\\\" alt=\\\"Open In Kaggle\\\"></a>\\n\",\n        \"- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)\\n\",\n        \"- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)\\n\",\n        \"- **Docker Image**. 
See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href=\\\"https://hub.docker.com/r/ultralytics/yolov5\\\"><img src=\\\"https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker\\\" alt=\\\"Docker Pulls\\\"></a>\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"6Qu7Iesl0p54\"\n      },\n      \"source\": [\n        \"# Status\\n\",\n        \"\\n\",\n        \"![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)\\n\",\n        \"\\n\",\n        \"If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"IEijrePND_2I\"\n      },\n      \"source\": [\n        \"# Appendix\\n\",\n        \"\\n\",\n        \"Optional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"mcKoSIK2WSzj\"\n      },\n      \"source\": [\n        \"# Reproduce\\n\",\n        \"for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':\\n\",\n        \"  !python val.py --weights {x}.pt --data coco.yaml --img 640 --task speed  # speed\\n\",\n        \"  !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65  # mAP\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"GMusP4OAxFu6\"\n      },\n      \"source\": [\n        \"# PyTorch Hub\\n\",\n        \"import torch\\n\",\n        \"\\n\",\n        \"# Model\\n\",\n        \"model = torch.hub.load('ultralytics/yolov5', 'yolov5s')\\n\",\n        \"\\n\",\n        \"# Images\\n\",\n        \"dir = 'https://ultralytics.com/images/'\\n\",\n        \"imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images\\n\",\n        \"\\n\",\n        \"# Inference\\n\",\n        \"results = model(imgs)\\n\",\n        \"results.print()  # or .show(), .save()\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"FGH0ZjkGjejy\"\n      },\n      \"source\": [\n        \"# CI Checks\\n\",\n        \"%%shell\\n\",\n        \"export PYTHONPATH=\\\"$PWD\\\"  # to run *.py. 
files in subdirectories\\n\",\n        \"rm -rf runs  # remove runs/\\n\",\n        \"for m in yolov5n; do  # models\\n\",\n        \"  python train.py --img 64 --batch 32 --weights $m.pt --epochs 1 --device 0  # train pretrained\\n\",\n        \"  python train.py --img 64 --batch 32 --weights '' --cfg $m.yaml --epochs 1 --device 0  # train scratch\\n\",\n        \"  for d in 0 cpu; do  # devices\\n\",\n        \"    python val.py --weights $m.pt --device $d # val official\\n\",\n        \"    python val.py --weights runs/train/exp/weights/best.pt --device $d # val custom\\n\",\n        \"    python detect.py --weights $m.pt --device $d  # detect official\\n\",\n        \"    python detect.py --weights runs/train/exp/weights/best.pt --device $d  # detect custom\\n\",\n        \"  done\\n\",\n        \"  python hubconf.py  # hub\\n\",\n        \"  python models/yolo.py --cfg $m.yaml  # build PyTorch model\\n\",\n        \"  python models/tf.py --weights $m.pt  # build TensorFlow model\\n\",\n        \"  python export.py --img 64 --batch 1 --weights $m.pt --include torchscript onnx  # export\\n\",\n        \"done\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"gogI-kwi3Tye\"\n      },\n      \"source\": [\n        \"# Profile\\n\",\n        \"from utils.torch_utils import profile\\n\",\n        \"\\n\",\n        \"m1 = lambda x: x * torch.sigmoid(x)\\n\",\n        \"m2 = torch.nn.SiLU()\\n\",\n        \"results = profile(input=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"RVRSOhEvUdb5\"\n      },\n      \"source\": [\n        \"# Evolve\\n\",\n        \"!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov5s.pt --cache --noautoanchor --evolve\\n\",\n        \"!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket  # upload results (optional)\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"BSgFCAcMbk1R\"\n      },\n      \"source\": [\n        \"# VOC\\n\",\n        \"for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']):  # zip(batch_size, model)\\n\",\n        \"  !python train.py --batch {b} --weights {m}.pt --data VOC.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"metadata\": {\n        \"id\": \"VTRwsvA9u7ln\"\n      },\n      \"source\": [\n        \"# TensorRT \\n\",\n        \"# https://developer.nvidia.com/nvidia-tensorrt-download\\n\",\n        \"!lsb_release -a  # check system\\n\",\n        \"%ls /usr/local | grep cuda  # check CUDA\\n\",\n        \"!wget https://ultralytics.com/assets/TensorRT-8.2.0.6.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz  # download\\n\",\n        \"![ -d /content/TensorRT-8.2.0.6/ ] || tar -C /content/ -zxf ./TensorRT-8.2.0.6.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz  # unzip\\n\",\n        \"%pip list | grep tensorrt || pip install /content/TensorRT-8.2.0.6/python/tensorrt-8.2.0.6-cp37-none-linux_x86_64.whl  # install\\n\",\n        \"%env 
LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:/content/cuda-11.1/lib64:/content/TensorRT-8.2.0.6/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64  # add to path\\n\",\n        \"\\n\",\n        \"!python export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0\\n\",\n        \"!python detect.py --weights yolov5s.engine --imgsz 640 640 --device 0\"\n      ],\n      \"execution_count\": null,\n      \"outputs\": []\n    }\n  ]\n}\n"
  },
  {
    "path": "utils/__init__.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nutils/initialization\n\"\"\"\n\n\ndef notebook_init(verbose=True):\n    # Check system software and hardware\n    print('Checking setup...')\n\n    import os\n    import shutil\n\n    from utils.general import check_requirements, emojis, is_colab\n    from utils.torch_utils import select_device  # imports\n\n    check_requirements(('psutil', 'IPython'))\n    import psutil\n    from IPython import display  # to display images and clear console output\n\n    if is_colab():\n        shutil.rmtree('/content/sample_data', ignore_errors=True)  # remove colab /sample_data directory\n\n    if verbose:\n        # System info\n        # gb = 1 / 1000 ** 3  # bytes to GB\n        gib = 1 / 1024 ** 3  # bytes to GiB\n        ram = psutil.virtual_memory().total\n        total, used, free = shutil.disk_usage(\"/\")\n        display.clear_output()\n        s = f'({os.cpu_count()} CPUs, {ram * gib:.1f} GB RAM, {(total - free) * gib:.1f}/{total * gib:.1f} GB disk)'\n    else:\n        s = ''\n\n    select_device(newline=False)\n    print(emojis(f'Setup complete ✅ {s}'))\n    return display\n"
  },
  {
    "path": "utils/activations.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nActivation functions\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\n# SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------\nclass SiLU(nn.Module):  # export-friendly version of nn.SiLU()\n    @staticmethod\n    def forward(x):\n        return x * torch.sigmoid(x)\n\n\nclass Hardswish(nn.Module):  # export-friendly version of nn.Hardswish()\n    @staticmethod\n    def forward(x):\n        # return x * F.hardsigmoid(x)  # for TorchScript and CoreML\n        return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0  # for TorchScript, CoreML and ONNX\n\n\n# Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------\nclass Mish(nn.Module):\n    @staticmethod\n    def forward(x):\n        return x * F.softplus(x).tanh()\n\n\nclass MemoryEfficientMish(nn.Module):\n    class F(torch.autograd.Function):\n        @staticmethod\n        def forward(ctx, x):\n            ctx.save_for_backward(x)\n            return x.mul(torch.tanh(F.softplus(x)))  # x * tanh(ln(1 + exp(x)))\n\n        @staticmethod\n        def backward(ctx, grad_output):\n            x = ctx.saved_tensors[0]\n            sx = torch.sigmoid(x)\n            fx = F.softplus(x).tanh()\n            return grad_output * (fx + x * sx * (1 - fx * fx))\n\n    def forward(self, x):\n        return self.F.apply(x)\n\n\n# FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------\nclass FReLU(nn.Module):\n    def __init__(self, c1, k=3):  # ch_in, kernel\n        super().__init__()\n        self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)\n        self.bn = nn.BatchNorm2d(c1)\n\n    def forward(self, x):\n        return torch.max(x, self.bn(self.conv(x)))\n\n\n# ACON https://arxiv.org/pdf/2009.04759.pdf ----------------------------------------------------------------------------\nclass AconC(nn.Module):\n    r\"\"\" ACON activation (activate or not).\n    AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter\n    according to \"Activate or Not: Learning Customized Activation\" <https://arxiv.org/pdf/2009.04759.pdf>.\n    \"\"\"\n\n    def __init__(self, c1):\n        super().__init__()\n        self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))\n        self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))\n        self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))\n\n    def forward(self, x):\n        dpx = (self.p1 - self.p2) * x\n        return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x\n\n\nclass MetaAconC(nn.Module):\n    r\"\"\" ACON activation (activate or not).\n    MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network\n    according to \"Activate or Not: Learning Customized Activation\" <https://arxiv.org/pdf/2009.04759.pdf>.\n    \"\"\"\n\n    def __init__(self, c1, k=1, s=1, r=16):  # ch_in, kernel, stride, r\n        super().__init__()\n        c2 = max(r, c1 // r)\n        self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))\n        self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))\n        self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True)\n        self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True)\n        # self.bn1 = nn.BatchNorm2d(c2)\n        # self.bn2 = nn.BatchNorm2d(c1)\n\n    def forward(self, x):\n        y = x.mean(dim=2, keepdims=True).mean(dim=3, keepdims=True)\n        # 
batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891\n        # beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y)))))  # bug/unstable\n        beta = torch.sigmoid(self.fc2(self.fc1(y)))  # bug patch BN layers removed\n        dpx = (self.p1 - self.p2) * x\n        return dpx * torch.sigmoid(beta * dpx) + self.p2 * x\n"
  },
  {
    "path": "utils/augmentations.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nImage augmentation functions\n\"\"\"\n\nimport math\nimport random\n\nimport cv2\nimport numpy as np\n\nfrom utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box\nfrom utils.metrics import bbox_ioa\n\n\nclass Albumentations:\n    # YOLOv5 Albumentations class (optional, only used if package is installed)\n    def __init__(self):\n        self.transform = None\n        try:\n            import albumentations as A\n            check_version(A.__version__, '1.0.3', hard=True)  # version requirement\n\n            self.transform = A.Compose([\n                A.Blur(p=0.01),\n                A.MedianBlur(p=0.01),\n                A.ToGray(p=0.01),\n                A.CLAHE(p=0.01),\n                A.RandomBrightnessContrast(p=0.0),\n                A.RandomGamma(p=0.0),\n                A.ImageCompression(quality_lower=75, p=0.0)],\n                bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))\n\n            LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))\n        except ImportError:  # package not installed, skip\n            pass\n        except Exception as e:\n            LOGGER.info(colorstr('albumentations: ') + f'{e}')\n\n    def __call__(self, im, labels, p=1.0):\n        if self.transform and random.random() < p:\n            new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0])  # transformed\n            im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])\n        return im, labels\n\n\ndef augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):\n    # HSV color-space augmentation\n    if hgain or sgain or vgain:\n        r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1  # random gains\n        hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))\n        dtype = im.dtype  # uint8\n\n        x = np.arange(0, 256, dtype=r.dtype)\n        lut_hue = ((x * r[0]) % 180).astype(dtype)\n        lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)\n        lut_val = np.clip(x * r[2], 0, 255).astype(dtype)\n\n        im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))\n        cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im)  # no return needed\n\n\ndef hist_equalize(im, clahe=True, bgr=False):\n    # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255\n    yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)\n    if clahe:\n        c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))\n        yuv[:, :, 0] = c.apply(yuv[:, :, 0])\n    else:\n        yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])  # equalize Y channel histogram\n    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB)  # convert YUV image to RGB\n\n\ndef replicate(im, labels):\n    # Replicate labels\n    h, w = im.shape[:2]\n    boxes = labels[:, 1:].astype(int)\n    x1, y1, x2, y2 = boxes.T\n    s = ((x2 - x1) + (y2 - y1)) / 2  # side length (pixels)\n    for i in s.argsort()[:round(s.size * 0.5)]:  # smallest indices\n        x1b, y1b, x2b, y2b = boxes[i]\n        bh, bw = y2b - y1b, x2b - x1b\n        yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw))  # offset x, y\n        x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]\n        im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b]  # im4[ymin:ymax, xmin:xmax]\n        labels = np.append(labels, 
[[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)\n\n    return im, labels\n\n\ndef letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):\n    # Resize and pad image while meeting stride-multiple constraints\n    shape = im.shape[:2]  # current shape [height, width]\n    if isinstance(new_shape, int):\n        new_shape = (new_shape, new_shape)\n\n    # Scale ratio (new / old)\n    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])\n    if not scaleup:  # only scale down, do not scale up (for better val mAP)\n        r = min(r, 1.0)\n\n    # Compute padding\n    ratio = r, r  # width, height ratios\n    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))\n    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding\n    if auto:  # minimum rectangle\n        dw, dh = np.mod(dw, stride), np.mod(dh, stride)  # wh padding\n    elif scaleFill:  # stretch\n        dw, dh = 0.0, 0.0\n        new_unpad = (new_shape[1], new_shape[0])\n        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios\n\n    dw /= 2  # divide padding into 2 sides\n    dh /= 2\n\n    if shape[::-1] != new_unpad:  # resize\n        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)\n    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))\n    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))\n    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border\n    return im, ratio, (dw, dh)\n\n\ndef random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,\n                       border=(0, 0)):\n    # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10))\n    # targets = [cls, xyxy]\n\n    height = im.shape[0] + border[0] * 2  # shape(h,w,c)\n    width = im.shape[1] + border[1] * 2\n\n    # Center\n    C = np.eye(3)\n    C[0, 2] = -im.shape[1] / 2  # x translation (pixels)\n    C[1, 2] = -im.shape[0] / 2  # y translation (pixels)\n\n    # Perspective\n    P = np.eye(3)\n    P[2, 0] = random.uniform(-perspective, perspective)  # x perspective (about y)\n    P[2, 1] = random.uniform(-perspective, perspective)  # y perspective (about x)\n\n    # Rotation and Scale\n    R = np.eye(3)\n    a = random.uniform(-degrees, degrees)\n    # a += random.choice([-180, -90, 0, 90])  # add 90deg rotations to small rotations\n    s = random.uniform(1 - scale, 1 + scale)\n    # s = 2 ** random.uniform(-scale, scale)\n    R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)\n\n    # Shear\n    S = np.eye(3)\n    S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180)  # x shear (deg)\n    S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180)  # y shear (deg)\n\n    # Translation\n    T = np.eye(3)\n    T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width  # x translation (pixels)\n    T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height  # y translation (pixels)\n\n    # Combined rotation matrix\n    M = T @ S @ R @ P @ C  # order of operations (right to left) is IMPORTANT\n    if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any():  # image changed\n        if perspective:\n            im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114))\n        else:  # affine\n            im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 
114, 114))\n\n    # Visualize\n    # import matplotlib.pyplot as plt\n    # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()\n    # ax[0].imshow(im[:, :, ::-1])  # base\n    # ax[1].imshow(im2[:, :, ::-1])  # warped\n\n    # Transform label coordinates\n    n = len(targets)\n    if n:\n        use_segments = any(x.any() for x in segments)\n        new = np.zeros((n, 4))\n        if use_segments:  # warp segments\n            segments = resample_segments(segments)  # upsample\n            for i, segment in enumerate(segments):\n                xy = np.ones((len(segment), 3))\n                xy[:, :2] = segment\n                xy = xy @ M.T  # transform\n                xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]  # perspective rescale or affine\n\n                # clip\n                new[i] = segment2box(xy, width, height)\n\n        else:  # warp boxes\n            xy = np.ones((n * 4, 3))\n            xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2)  # x1y1, x2y2, x1y2, x2y1\n            xy = xy @ M.T  # transform\n            xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8)  # perspective rescale or affine\n\n            # create new boxes\n            x = xy[:, [0, 2, 4, 6]]\n            y = xy[:, [1, 3, 5, 7]]\n            new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T\n\n            # clip\n            new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)\n            new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)\n\n        # filter candidates\n        i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)\n        targets = targets[i]\n        targets[:, 1:5] = new[i]\n\n    return im, targets\n\n\ndef copy_paste(im, labels, segments, p=0.5):\n    # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)\n    n = len(segments)\n    if p and n:\n        h, w, c = im.shape  # height, width, channels\n        im_new = np.zeros(im.shape, np.uint8)\n        for j in random.sample(range(n), k=round(p * n)):\n            l, s = labels[j], segments[j]\n            box = w - l[3], l[2], w - l[1], l[4]\n            ioa = bbox_ioa(box, labels[:, 1:5])  # intersection over area\n            if (ioa < 0.30).all():  # allow 30% obscuration of existing labels\n                labels = np.concatenate((labels, [[l[0], *box]]), 0)\n                segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))\n                cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)\n\n        result = cv2.bitwise_and(src1=im, src2=im_new)\n        result = cv2.flip(result, 1)  # augment segments (flip left-right)\n        i = result > 0  # pixels to replace\n        # i[:, :] = result.max(2).reshape(h, w, 1)  # act over ch\n        im[i] = result[i]  # cv2.imwrite('debug.jpg', im)  # debug\n\n    return im, labels, segments\n\n\ndef cutout(im, labels, p=0.5):\n    # Applies image cutout augmentation https://arxiv.org/abs/1708.04552\n    if random.random() < p:\n        h, w = im.shape[:2]\n        scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16  # image size fraction\n        for s in scales:\n            mask_h = random.randint(1, int(h * s))  # create random masks\n            mask_w = random.randint(1, int(w * s))\n\n            # box\n            xmin = max(0, random.randint(0, w) - mask_w // 2)\n            ymin = max(0, random.randint(0, h) - mask_h // 
2)\n            xmax = min(w, xmin + mask_w)\n            ymax = min(h, ymin + mask_h)\n\n            # apply random color mask\n            im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]\n\n            # return unobscured labels\n            if len(labels) and s > 0.03:\n                box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)\n                ioa = bbox_ioa(box, labels[:, 1:5])  # intersection over area\n                labels = labels[ioa < 0.60]  # remove >60% obscured labels\n\n    return labels\n\n\ndef mixup(im, labels, im2, labels2):\n    # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf\n    r = np.random.beta(32.0, 32.0)  # mixup ratio, alpha=beta=32.0\n    im = (im * r + im2 * (1 - r)).astype(np.uint8)\n    labels = np.concatenate((labels, labels2), 0)\n    return im, labels\n\n\ndef box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16):  # box1(4,n), box2(4,n)\n    # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio\n    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]\n    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]\n    ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps))  # aspect ratio\n    return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr)  # candidates\n"
  },
  {
    "path": "utils/autoanchor.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nAuto-anchor utils\n\"\"\"\n\nimport random\n\nimport numpy as np\nimport torch\nimport yaml\nfrom tqdm import tqdm\n\nfrom utils.general import LOGGER, colorstr, emojis\n\nPREFIX = colorstr('AutoAnchor: ')\n\n\ndef check_anchor_order(m):\n    # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary\n    a = m.anchors.prod(-1).view(-1)  # anchor area\n    da = a[-1] - a[0]  # delta a\n    ds = m.stride[-1] - m.stride[0]  # delta s\n    if da.sign() != ds.sign():  # same order\n        LOGGER.info(f'{PREFIX}Reversing anchor order')\n        m.anchors[:] = m.anchors.flip(0)\n\n\ndef check_anchors(dataset, model, thr=4.0, imgsz=640):\n    # Check anchor fit to data, recompute if necessary\n    m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1]  # Detect()\n    shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)\n    scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1))  # augment scale\n    wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float()  # wh\n\n    def metric(k):  # compute metric\n        r = wh[:, None] / k[None]\n        x = torch.min(r, 1 / r).min(2)[0]  # ratio metric\n        best = x.max(1)[0]  # best_x\n        aat = (x > 1 / thr).float().sum(1).mean()  # anchors above threshold\n        bpr = (best > 1 / thr).float().mean()  # best possible recall\n        return bpr, aat\n\n    anchors = m.anchors.clone() * m.stride.to(m.anchors.device).view(-1, 1, 1)  # current anchors\n    bpr, aat = metric(anchors.cpu().view(-1, 2))\n    s = f'\\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). '\n    if bpr > 0.98:  # threshold to recompute\n        LOGGER.info(emojis(f'{s}Current anchors are a good fit to dataset ✅'))\n    else:\n        LOGGER.info(emojis(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...'))\n        na = m.anchors.numel() // 2  # number of anchors\n        try:\n            anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)\n        except Exception as e:\n            LOGGER.info(f'{PREFIX}ERROR: {e}')\n        new_bpr = metric(anchors)[0]\n        if new_bpr > bpr:  # replace anchors\n            anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)\n            m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1)  # loss\n            check_anchor_order(m)\n            LOGGER.info(f'{PREFIX}New anchors saved to model. Update model *.yaml to use these anchors in the future.')\n        else:\n            LOGGER.info(f'{PREFIX}Original anchors better than new anchors. 
Proceeding with original anchors.')\n\n\ndef kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):\n    \"\"\" Creates kmeans-evolved anchors from training dataset\n\n        Arguments:\n            dataset: path to data.yaml, or a loaded dataset\n            n: number of anchors\n            img_size: image size used for training\n            thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0\n            gen: generations to evolve anchors using genetic algorithm\n            verbose: print all results\n\n        Return:\n            k: kmeans evolved anchors\n\n        Usage:\n            from utils.autoanchor import *; _ = kmean_anchors()\n    \"\"\"\n    from scipy.cluster.vq import kmeans\n\n    thr = 1 / thr\n\n    def metric(k, wh):  # compute metrics\n        r = wh[:, None] / k[None]\n        x = torch.min(r, 1 / r).min(2)[0]  # ratio metric\n        # x = wh_iou(wh, torch.tensor(k))  # iou metric\n        return x, x.max(1)[0]  # x, best_x\n\n    def anchor_fitness(k):  # mutation fitness\n        _, best = metric(torch.tensor(k, dtype=torch.float32), wh)\n        return (best * (best > thr).float()).mean()  # fitness\n\n    def print_results(k, verbose=True):\n        k = k[np.argsort(k.prod(1))]  # sort small to large\n        x, best = metric(k, wh0)\n        bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n  # best possible recall, anch > thr\n        s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\\n' \\\n            f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \\\n            f'past_thr={x[x > thr].mean():.3f}-mean: '\n        for i, x in enumerate(k):\n            s += '%i,%i, ' % (round(x[0]), round(x[1]))\n        if verbose:\n            LOGGER.info(s[:-2])\n        return k\n\n    if isinstance(dataset, str):  # *.yaml file\n        with open(dataset, errors='ignore') as f:\n            data_dict = yaml.safe_load(f)  # model dict\n        from utils.datasets import LoadImagesAndLabels\n        dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)\n\n    # Get label wh\n    shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)\n    wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)])  # wh\n\n    # Filter\n    i = (wh0 < 3.0).any(1).sum()\n    if i:\n        LOGGER.info(f'{PREFIX}WARNING: Extremely small objects found. 
{i} of {len(wh0)} labels are < 3 pixels in size.')\n    wh = wh0[(wh0 >= 2.0).any(1)]  # filter > 2 pixels\n    # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1)  # multiply by random scale 0-1\n\n    # Kmeans calculation\n    LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...')\n    s = wh.std(0)  # sigmas for whitening\n    k, dist = kmeans(wh / s, n, iter=30)  # points, mean distance\n    assert len(k) == n, f'{PREFIX}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'\n    k *= s\n    wh = torch.tensor(wh, dtype=torch.float32)  # filtered\n    wh0 = torch.tensor(wh0, dtype=torch.float32)  # unfiltered\n    k = print_results(k, verbose=False)\n\n    # Plot\n    # k, d = [None] * 20, [None] * 20\n    # for i in tqdm(range(1, 21)):\n    #     k[i-1], d[i-1] = kmeans(wh / s, i)  # points, mean distance\n    # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)\n    # ax = ax.ravel()\n    # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')\n    # fig, ax = plt.subplots(1, 2, figsize=(14, 7))  # plot wh\n    # ax[0].hist(wh[wh[:, 0]<100, 0],400)\n    # ax[1].hist(wh[wh[:, 1]<100, 1],400)\n    # fig.savefig('wh.png', dpi=200)\n\n    # Evolve\n    npr = np.random\n    f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, generations, mutation prob, sigma\n    pbar = tqdm(range(gen), desc=f'{PREFIX}Evolving anchors with Genetic Algorithm:')  # progress bar\n    for _ in pbar:\n        v = np.ones(sh)\n        while (v == 1).all():  # mutate until a change occurs (prevent duplicates)\n            v = ((npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)\n        kg = (k.copy() * v).clip(min=2.0)\n        fg = anchor_fitness(kg)\n        if fg > f:\n            f, k = fg, kg.copy()\n            pbar.desc = f'{PREFIX}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'\n            if verbose:\n                print_results(k, verbose)\n\n    return print_results(k)\n"
  },
  {
    "path": "utils/autobatch.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nAuto-batch utils\n\"\"\"\n\nfrom copy import deepcopy\n\nimport numpy as np\nimport torch\nfrom torch.cuda import amp\n\nfrom utils.general import LOGGER, colorstr\nfrom utils.torch_utils import profile\n\n\ndef check_train_batch_size(model, imgsz=640):\n    # Check YOLOv5 training batch size\n    with amp.autocast():\n        return autobatch(deepcopy(model).train(), imgsz)  # compute optimal batch size\n\n\ndef autobatch(model, imgsz=640, fraction=0.9, batch_size=16):\n    # Automatically estimate best batch size to use `fraction` of available CUDA memory\n    # Usage:\n    #     import torch\n    #     from utils.autobatch import autobatch\n    #     model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)\n    #     print(autobatch(model))\n\n    prefix = colorstr('AutoBatch: ')\n    LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}')\n    device = next(model.parameters()).device  # get model device\n    if device.type == 'cpu':\n        LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}')\n        return batch_size\n\n    d = str(device).upper()  # 'CUDA:0'\n    properties = torch.cuda.get_device_properties(device)  # device properties\n    t = properties.total_memory / 1024 ** 3  # (GiB)\n    r = torch.cuda.memory_reserved(device) / 1024 ** 3  # (GiB)\n    a = torch.cuda.memory_allocated(device) / 1024 ** 3  # (GiB)\n    f = t - (r + a)  # free inside reserved\n    LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free')\n\n    batch_sizes = [1, 2, 4, 8, 16]\n    try:\n        img = [torch.zeros(b, 3, imgsz, imgsz) for b in batch_sizes]\n        y = profile(img, model, n=3, device=device)\n    except Exception as e:\n        LOGGER.warning(f'{prefix}{e}')\n\n    y = [x[2] for x in y if x]  # memory [2]\n    batch_sizes = batch_sizes[:len(y)]\n    p = np.polyfit(batch_sizes, y, deg=1)  # first degree polynomial fit\n    b = int((f * fraction - p[1]) / p[0])  # y intercept (optimal batch size)\n    LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%)')\n    return b\n"
  },
  {
    "path": "utils/aws/__init__.py",
    "content": ""
  },
  {
    "path": "utils/aws/mime.sh",
    "content": "# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/\n# This script will run on every instance restart, not only on first start\n# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---\n\nContent-Type: multipart/mixed; boundary=\"//\"\nMIME-Version: 1.0\n\n--//\nContent-Type: text/cloud-config; charset=\"us-ascii\"\nMIME-Version: 1.0\nContent-Transfer-Encoding: 7bit\nContent-Disposition: attachment; filename=\"cloud-config.txt\"\n\n#cloud-config\ncloud_final_modules:\n- [scripts-user, always]\n\n--//\nContent-Type: text/x-shellscript; charset=\"us-ascii\"\nMIME-Version: 1.0\nContent-Transfer-Encoding: 7bit\nContent-Disposition: attachment; filename=\"userdata.txt\"\n\n#!/bin/bash\n# --- paste contents of userdata.sh here ---\n--//\n"
  },
  {
    "path": "utils/aws/resume.py",
    "content": "# Resume all interrupted trainings in yolov5/ dir including DDP trainings\n# Usage: $ python utils/aws/resume.py\n\nimport os\nimport sys\nfrom pathlib import Path\n\nimport torch\nimport yaml\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[2]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\n\nport = 0  # --master_port\npath = Path('').resolve()\nfor last in path.rglob('*/**/last.pt'):\n    ckpt = torch.load(last)\n    if ckpt['optimizer'] is None:\n        continue\n\n    # Load opt.yaml\n    with open(last.parent.parent / 'opt.yaml', errors='ignore') as f:\n        opt = yaml.safe_load(f)\n\n    # Get device count\n    d = opt['device'].split(',')  # devices\n    nd = len(d)  # number of devices\n    ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1)  # distributed data parallel\n\n    if ddp:  # multi-GPU\n        port += 1\n        cmd = f'python -m torch.distributed.run --nproc_per_node {nd} --master_port {port} train.py --resume {last}'\n    else:  # single-GPU\n        cmd = f'python train.py --resume {last}'\n\n    cmd += ' > /dev/null 2>&1 &'  # redirect output to dev/null and run in daemon thread\n    print(cmd)\n    os.system(cmd)\n"
  },
  {
    "path": "utils/aws/userdata.sh",
    "content": "#!/bin/bash\n# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html\n# This script will run only once on first instance start (for a re-start script see mime.sh)\n# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir\n# Use >300 GB SSD\n\ncd home/ubuntu\nif [ ! -d yolov5 ]; then\n  echo \"Running first-time script.\" # install dependencies, download COCO, pull Docker\n  git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5\n  cd yolov5\n  bash data/scripts/get_coco.sh && echo \"COCO done.\" &\n  sudo docker pull ultralytics/yolov5:latest && echo \"Docker done.\" &\n  python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo \"Requirements done.\" &\n  wait && echo \"All tasks done.\" # finish background tasks\nelse\n  echo \"Running re-start script.\" # resume interrupted runs\n  i=0\n  list=$(sudo docker ps -qa) # container list i.e. $'one\\ntwo\\nthree\\nfour'\n  while IFS= read -r id; do\n    ((i++))\n    echo \"restarting container $i: $id\"\n    sudo docker start $id\n    # sudo docker exec -it $id python train.py --resume # single-GPU\n    sudo docker exec -d $id python utils/aws/resume.py # multi-scenario\n  done <<<\"$list\"\nfi\n"
  },
  {
    "path": "utils/callbacks.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nCallback utils\n\"\"\"\n\n\nclass Callbacks:\n    \"\"\"\"\n    Handles all registered callbacks for YOLOv5 Hooks\n    \"\"\"\n\n    def __init__(self):\n        # Define the available callbacks\n        self._callbacks = {\n            'on_pretrain_routine_start': [],\n            'on_pretrain_routine_end': [],\n\n            'on_train_start': [],\n            'on_train_epoch_start': [],\n            'on_train_batch_start': [],\n            'optimizer_step': [],\n            'on_before_zero_grad': [],\n            'on_train_batch_end': [],\n            'on_train_epoch_end': [],\n\n            'on_val_start': [],\n            'on_val_batch_start': [],\n            'on_val_image_end': [],\n            'on_val_batch_end': [],\n            'on_val_end': [],\n\n            'on_fit_epoch_end': [],  # fit = train + val\n            'on_model_save': [],\n            'on_train_end': [],\n\n            'teardown': [],\n        }\n\n    def register_action(self, hook, name='', callback=None):\n        \"\"\"\n        Register a new action to a callback hook\n\n        Args:\n            hook        The callback hook name to register the action to\n            name        The name of the action for later reference\n            callback    The callback to fire\n        \"\"\"\n        assert hook in self._callbacks, f\"hook '{hook}' not found in callbacks {self._callbacks}\"\n        assert callable(callback), f\"callback '{callback}' is not callable\"\n        self._callbacks[hook].append({'name': name, 'callback': callback})\n\n    def get_registered_actions(self, hook=None):\n        \"\"\"\"\n        Returns all the registered actions by callback hook\n\n        Args:\n            hook The name of the hook to check, defaults to all\n        \"\"\"\n        if hook:\n            return self._callbacks[hook]\n        else:\n            return self._callbacks\n\n    def run(self, hook, *args, **kwargs):\n        \"\"\"\n        Loop through the registered actions and fire all callbacks\n\n        Args:\n            hook The name of the hook to check, defaults to all\n            args Arguments to receive from YOLOv5\n            kwargs Keyword Arguments to receive from YOLOv5\n        \"\"\"\n\n        assert hook in self._callbacks, f\"hook '{hook}' not found in callbacks {self._callbacks}\"\n\n        for logger in self._callbacks[hook]:\n            logger['callback'](*args, **kwargs)\n"
  },
  {
    "path": "utils/datasets.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nDataloaders and dataset utils\n\"\"\"\n\nimport glob\nimport hashlib\nimport json\nimport os\nimport random\nimport shutil\nimport time\nfrom itertools import repeat\nfrom multiprocessing.pool import Pool, ThreadPool\nfrom pathlib import Path\nfrom threading import Thread\nfrom zipfile import ZipFile\n\nimport cv2\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nimport yaml\nfrom PIL import ExifTags, Image, ImageOps\nfrom torch.utils.data import DataLoader, Dataset, dataloader, distributed\nfrom tqdm import tqdm\n\nfrom utils.augmentations import Albumentations, augment_hsv, copy_paste, letterbox, mixup, random_perspective\nfrom utils.general import (LOGGER, NUM_THREADS, check_dataset, check_requirements, check_yaml, clean_str,\n                           segments2boxes, xyn2xy, xywh2xyxy, xywhn2xyxy, xyxy2xywhn)\nfrom utils.torch_utils import torch_distributed_zero_first\n\n# Parameters\nHELP_URL = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'\nIMG_FORMATS = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo']  # acceptable image suffixes\nVID_FORMATS = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv']  # acceptable video suffixes\nWORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))  # DPP\n\n# Get orientation exif tag\nfor orientation in ExifTags.TAGS.keys():\n    if ExifTags.TAGS[orientation] == 'Orientation':\n        break\n\n\ndef get_hash(paths):\n    # Returns a single hash value of a list of paths (files or dirs)\n    size = sum(os.path.getsize(p) for p in paths if os.path.exists(p))  # sizes\n    h = hashlib.md5(str(size).encode())  # hash sizes\n    h.update(''.join(paths).encode())  # hash paths\n    return h.hexdigest()  # return hash\n\n\ndef exif_size(img):\n    # Returns exif-corrected PIL size\n    s = img.size  # (width, height)\n    try:\n        rotation = dict(img._getexif().items())[orientation]\n        if rotation == 6:  # rotation 270\n            s = (s[1], s[0])\n        elif rotation == 8:  # rotation 90\n            s = (s[1], s[0])\n    except:\n        pass\n\n    return s\n\n\ndef exif_transpose(image):\n    \"\"\"\n    Transpose a PIL image accordingly if it has an EXIF Orientation tag.\n    Inplace version of https://github.com/python-pillow/Pillow/blob/master/src/PIL/ImageOps.py exif_transpose()\n\n    :param image: The image to transpose.\n    :return: An image.\n    \"\"\"\n    exif = image.getexif()\n    orientation = exif.get(0x0112, 1)  # default 1\n    if orientation > 1:\n        method = {2: Image.FLIP_LEFT_RIGHT,\n                  3: Image.ROTATE_180,\n                  4: Image.FLIP_TOP_BOTTOM,\n                  5: Image.TRANSPOSE,\n                  6: Image.ROTATE_270,\n                  7: Image.TRANSVERSE,\n                  8: Image.ROTATE_90,\n                  }.get(orientation)\n        if method is not None:\n            image = image.transpose(method)\n            del exif[0x0112]\n            image.info[\"exif\"] = exif.tobytes()\n    return image\n\n\ndef create_dataloader(path, imgsz, batch_size, stride, single_cls=False, hyp=None, augment=False, cache=False, pad=0.0,\n                      rect=False, rank=-1, workers=8, image_weights=False, quad=False, prefix='', shuffle=False):\n    if rect and shuffle:\n        LOGGER.warning('WARNING: --rect is incompatible with DataLoader shuffle, setting shuffle=False')\n        shuffle = False\n    with torch_distributed_zero_first(rank):  # init dataset *.cache only once 
if DDP\n        dataset = LoadImagesAndLabels(path, imgsz, batch_size,\n                                      augment=augment,  # augmentation\n                                      hyp=hyp,  # hyperparameters\n                                      rect=rect,  # rectangular batches\n                                      cache_images=cache,\n                                      single_cls=single_cls,\n                                      stride=int(stride),\n                                      pad=pad,\n                                      image_weights=image_weights,\n                                      prefix=prefix)\n\n    batch_size = min(batch_size, len(dataset))\n    nw = min([os.cpu_count() // WORLD_SIZE, batch_size if batch_size > 1 else 0, workers])  # number of workers\n    sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle)\n    loader = DataLoader if image_weights else InfiniteDataLoader  # only DataLoader allows for attribute updates\n    return loader(dataset,\n                  batch_size=batch_size,\n                  shuffle=shuffle and sampler is None,\n                  num_workers=nw,\n                  sampler=sampler,\n                  pin_memory=True,\n                  collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn), dataset\n\n\nclass InfiniteDataLoader(dataloader.DataLoader):\n    \"\"\" Dataloader that reuses workers\n\n    Uses same syntax as vanilla DataLoader\n    \"\"\"\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))\n        self.iterator = super().__iter__()\n\n    def __len__(self):\n        return len(self.batch_sampler.sampler)\n\n    def __iter__(self):\n        for i in range(len(self)):\n            yield next(self.iterator)\n\n\nclass _RepeatSampler:\n    \"\"\" Sampler that repeats forever\n\n    Args:\n        sampler (Sampler)\n    \"\"\"\n\n    def __init__(self, sampler):\n        self.sampler = sampler\n\n    def __iter__(self):\n        while True:\n            yield from iter(self.sampler)\n\n\nclass LoadImages:\n    # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4`\n    def __init__(self, path, img_size=640, stride=32, auto=True):\n        p = str(Path(path).resolve())  # os-agnostic absolute path\n        if '*' in p:\n            files = sorted(glob.glob(p, recursive=True))  # glob\n        elif os.path.isdir(p):\n            files = sorted(glob.glob(os.path.join(p, '*.*')))  # dir\n        elif os.path.isfile(p):\n            files = [p]  # files\n        else:\n            raise Exception(f'ERROR: {p} does not exist')\n\n        images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS]\n        videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS]\n        ni, nv = len(images), len(videos)\n\n        self.img_size = img_size\n        self.stride = stride\n        self.files = images + videos\n        self.nf = ni + nv  # number of files\n        self.video_flag = [False] * ni + [True] * nv\n        self.mode = 'image'\n        self.auto = auto\n        if any(videos):\n            self.new_video(videos[0])  # new video\n        else:\n            self.cap = None\n        assert self.nf > 0, f'No images or videos found in {p}. 
' \\\n                            f'Supported formats are:\\nimages: {IMG_FORMATS}\\nvideos: {VID_FORMATS}'\n\n    def __iter__(self):\n        self.count = 0\n        return self\n\n    def __next__(self):\n        if self.count == self.nf:\n            raise StopIteration\n        path = self.files[self.count]\n\n        if self.video_flag[self.count]:\n            # Read video\n            self.mode = 'video'\n            ret_val, img0 = self.cap.read()\n            while not ret_val:\n                self.count += 1\n                self.cap.release()\n                if self.count == self.nf:  # last video\n                    raise StopIteration\n                else:\n                    path = self.files[self.count]\n                    self.new_video(path)\n                    ret_val, img0 = self.cap.read()\n\n            self.frame += 1\n            s = f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: '\n\n        else:\n            # Read image\n            self.count += 1\n            img0 = cv2.imread(path)  # BGR\n            assert img0 is not None, f'Image Not Found {path}'\n            s = f'image {self.count}/{self.nf} {path}: '\n\n        # Padded resize\n        img = letterbox(img0, self.img_size, stride=self.stride, auto=self.auto)[0]\n\n        # Convert\n        img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB\n        img = np.ascontiguousarray(img)\n\n        return path, img, img0, self.cap, s\n\n    def new_video(self, path):\n        self.frame = 0\n        self.cap = cv2.VideoCapture(path)\n        self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))\n\n    def __len__(self):\n        return self.nf  # number of files\n\n\nclass LoadWebcam:  # for inference\n    # YOLOv5 local webcam dataloader, i.e. `python detect.py --source 0`\n    def __init__(self, pipe='0', img_size=640, stride=32):\n        self.img_size = img_size\n        self.stride = stride\n        self.pipe = eval(pipe) if pipe.isnumeric() else pipe\n        self.cap = cv2.VideoCapture(self.pipe)  # video capture object\n        self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3)  # set buffer size\n\n    def __iter__(self):\n        self.count = -1\n        return self\n\n    def __next__(self):\n        self.count += 1\n        if cv2.waitKey(1) == ord('q'):  # q to quit\n            self.cap.release()\n            cv2.destroyAllWindows()\n            raise StopIteration\n\n        # Read frame\n        ret_val, img0 = self.cap.read()\n        img0 = cv2.flip(img0, 1)  # flip left-right\n\n        # Print\n        assert ret_val, f'Camera Error {self.pipe}'\n        img_path = 'webcam.jpg'\n        s = f'webcam {self.count}: '\n\n        # Padded resize\n        img = letterbox(img0, self.img_size, stride=self.stride)[0]\n\n        # Convert\n        img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB\n        img = np.ascontiguousarray(img)\n\n        return img_path, img, img0, None, s\n\n    def __len__(self):\n        return 0\n\n\nclass LoadStreams:\n    # YOLOv5 streamloader, i.e. 
`python detect.py --source 'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP streams`\n    def __init__(self, sources='streams.txt', img_size=640, stride=32, auto=True):\n        self.mode = 'stream'\n        self.img_size = img_size\n        self.stride = stride\n\n        if os.path.isfile(sources):\n            with open(sources) as f:\n                sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())]\n        else:\n            sources = [sources]\n\n        n = len(sources)\n        self.imgs, self.fps, self.frames, self.threads = [None] * n, [0] * n, [0] * n, [None] * n\n        self.sources = [clean_str(x) for x in sources]  # clean source names for later\n        self.auto = auto\n        for i, s in enumerate(sources):  # index, source\n            # Start thread to read frames from video stream\n            st = f'{i + 1}/{n}: {s}... '\n            if 'youtube.com/' in s or 'youtu.be/' in s:  # if source is YouTube video\n                check_requirements(('pafy', 'youtube_dl'))\n                import pafy\n                s = pafy.new(s).getbest(preftype=\"mp4\").url  # YouTube URL\n            s = eval(s) if s.isnumeric() else s  # i.e. s = '0' local webcam\n            cap = cv2.VideoCapture(s)\n            assert cap.isOpened(), f'{st}Failed to open {s}'\n            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n            self.fps[i] = max(cap.get(cv2.CAP_PROP_FPS) % 100, 0) or 30.0  # 30 FPS fallback\n            self.frames[i] = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 0) or float('inf')  # infinite stream fallback\n\n            _, self.imgs[i] = cap.read()  # guarantee first frame\n            self.threads[i] = Thread(target=self.update, args=([i, cap, s]), daemon=True)\n            LOGGER.info(f\"{st} Success ({self.frames[i]} frames {w}x{h} at {self.fps[i]:.2f} FPS)\")\n            self.threads[i].start()\n        LOGGER.info('')  # newline\n\n        # check for common shapes\n        s = np.stack([letterbox(x, self.img_size, stride=self.stride, auto=self.auto)[0].shape for x in self.imgs])\n        self.rect = np.unique(s, axis=0).shape[0] == 1  # rect inference if all shapes equal\n        if not self.rect:\n            LOGGER.warning('WARNING: Stream shapes differ. 
For optimal performance supply similarly-shaped streams.')\n\n    def update(self, i, cap, stream):\n        # Read stream `i` frames in daemon thread\n        n, f, read = 0, self.frames[i], 1  # frame number, frame array, inference every 'read' frame\n        while cap.isOpened() and n < f:\n            n += 1\n            # _, self.imgs[index] = cap.read()\n            cap.grab()\n            if n % read == 0:\n                success, im = cap.retrieve()\n                if success:\n                    self.imgs[i] = im\n                else:\n                    LOGGER.warning('WARNING: Video stream unresponsive, please check your IP camera connection.')\n                    self.imgs[i] = np.zeros_like(self.imgs[i])\n                    cap.open(stream)  # re-open stream if signal was lost\n            time.sleep(1 / self.fps[i])  # wait time\n\n    def __iter__(self):\n        self.count = -1\n        return self\n\n    def __next__(self):\n        self.count += 1\n        if not all(x.is_alive() for x in self.threads) or cv2.waitKey(1) == ord('q'):  # q to quit\n            cv2.destroyAllWindows()\n            raise StopIteration\n\n        # Letterbox\n        img0 = self.imgs.copy()\n        img = [letterbox(x, self.img_size, stride=self.stride, auto=self.rect and self.auto)[0] for x in img0]\n\n        # Stack\n        img = np.stack(img, 0)\n\n        # Convert\n        img = img[..., ::-1].transpose((0, 3, 1, 2))  # BGR to RGB, BHWC to BCHW\n        img = np.ascontiguousarray(img)\n\n        return self.sources, img, img0, None, ''\n\n    def __len__(self):\n        return len(self.sources)  # 1E12 frames = 32 streams at 30 FPS for 30 years\n\n\ndef img2label_paths(img_paths):\n    # Define label paths as a function of image paths\n    sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep  # /images/, /labels/ substrings\n    return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths]\n\n\nclass LoadImagesAndLabels(Dataset):\n    # YOLOv5 train_loader/val_loader, loads images and labels for training and validation\n    cache_version = 0.6  # dataset labels *.cache version\n\n    def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,\n                 cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''):\n        self.img_size = img_size\n        self.augment = augment\n        self.hyp = hyp\n        self.image_weights = image_weights\n        self.rect = False if image_weights else rect\n        self.mosaic = self.augment and not self.rect  # load 4 images at a time into a mosaic (only during training)\n        self.mosaic_border = [-img_size // 2, -img_size // 2]\n        self.stride = stride\n        self.path = path\n        self.albumentations = Albumentations() if augment else None\n\n        try:\n            f = []  # image files\n            for p in path if isinstance(path, list) else [path]:\n                p = Path(p)  # os-agnostic\n                if p.is_dir():  # dir\n                    f += glob.glob(str(p / '**' / '*.*'), recursive=True)\n                    # f = list(p.rglob('*.*'))  # pathlib\n                elif p.is_file():  # file\n                    with open(p) as t:\n                        t = t.read().strip().splitlines()\n                        parent = str(p.parent) + os.sep\n                        f += [x.replace('./', parent) if x.startswith('./') else x for x in t]  # local to global path\n                        # f += [p.parent / 
x.lstrip(os.sep) for x in t]  # local to global path (pathlib)\n                else:\n                    raise Exception(f'{prefix}{p} does not exist')\n            self.img_files = sorted(x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in IMG_FORMATS)\n            # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in IMG_FORMATS])  # pathlib\n            assert self.img_files, f'{prefix}No images found'\n        except Exception as e:\n            raise Exception(f'{prefix}Error loading data from {path}: {e}\\nSee {HELP_URL}')\n\n        # Check cache\n        self.label_files = img2label_paths(self.img_files)  # labels\n        cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache')\n        try:\n            cache, exists = np.load(cache_path, allow_pickle=True).item(), True  # load dict\n            assert cache['version'] == self.cache_version  # same version\n            assert cache['hash'] == get_hash(self.label_files + self.img_files)  # same hash\n        except:\n            cache, exists = self.cache_labels(cache_path, prefix), False  # cache\n\n        # Display cache\n        nf, nm, ne, nc, n = cache.pop('results')  # found, missing, empty, corrupted, total\n        if exists:\n            d = f\"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted\"\n            tqdm(None, desc=prefix + d, total=n, initial=n)  # display cache results\n            if cache['msgs']:\n                LOGGER.info('\\n'.join(cache['msgs']))  # display warnings\n        assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {HELP_URL}'\n\n        # Read cache\n        [cache.pop(k) for k in ('hash', 'version', 'msgs')]  # remove items\n        labels, shapes, self.segments = zip(*cache.values())\n        self.labels = list(labels)\n        self.shapes = np.array(shapes, dtype=np.float64)\n        self.img_files = list(cache.keys())  # update\n        self.label_files = img2label_paths(cache.keys())  # update\n        n = len(shapes)  # number of images\n        bi = np.floor(np.arange(n) / batch_size).astype(np.int)  # batch index\n        nb = bi[-1] + 1  # number of batches\n        self.batch = bi  # batch index of image\n        self.n = n\n        self.indices = range(n)\n\n        # Update labels\n        include_class = []  # filter labels to include only these classes (optional)\n        include_class_array = np.array(include_class).reshape(1, -1)\n        for i, (label, segment) in enumerate(zip(self.labels, self.segments)):\n            if include_class:\n                j = (label[:, 0:1] == include_class_array).any(1)\n                self.labels[i] = label[j]\n                if segment:\n                    self.segments[i] = segment[j]\n            if single_cls:  # single-class training, merge all classes into 0\n                self.labels[i][:, 0] = 0\n                if segment:\n                    self.segments[i][:, 0] = 0\n\n        # Rectangular Training\n        if self.rect:\n            # Sort by aspect ratio\n            s = self.shapes  # wh\n            ar = s[:, 1] / s[:, 0]  # aspect ratio\n            irect = ar.argsort()\n            self.img_files = [self.img_files[i] for i in irect]\n            self.label_files = [self.label_files[i] for i in irect]\n            self.labels = [self.labels[i] for i in irect]\n            self.shapes = s[irect]  # wh\n            ar = ar[irect]\n\n            # Set training 
image shapes\n            shapes = [[1, 1]] * nb\n            for i in range(nb):\n                ari = ar[bi == i]\n                mini, maxi = ari.min(), ari.max()\n                if maxi < 1:\n                    shapes[i] = [maxi, 1]\n                elif mini > 1:\n                    shapes[i] = [1, 1 / mini]\n\n            self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride\n\n        # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)\n        self.imgs, self.img_npy = [None] * n, [None] * n\n        if cache_images:\n            if cache_images == 'disk':\n                self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy')\n                self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files]\n                self.im_cache_dir.mkdir(parents=True, exist_ok=True)\n            gb = 0  # Gigabytes of cached images\n            self.img_hw0, self.img_hw = [None] * n, [None] * n\n            results = ThreadPool(NUM_THREADS).imap(lambda x: load_image(*x), zip(repeat(self), range(n)))\n            pbar = tqdm(enumerate(results), total=n)\n            for i, x in pbar:\n                if cache_images == 'disk':\n                    if not self.img_npy[i].exists():\n                        np.save(self.img_npy[i].as_posix(), x[0])\n                    gb += self.img_npy[i].stat().st_size\n                else:\n                    self.imgs[i], self.img_hw0[i], self.img_hw[i] = x  # im, hw_orig, hw_resized = load_image(self, i)\n                    gb += self.imgs[i].nbytes\n                pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB {cache_images})'\n            pbar.close()\n\n    def cache_labels(self, path=Path('./labels.cache'), prefix=''):\n        # Cache dataset labels, check images and read shapes\n        x = {}  # dict\n        nm, nf, ne, nc, msgs = 0, 0, 0, 0, []  # number missing, found, empty, corrupt, messages\n        desc = f\"{prefix}Scanning '{path.parent / path.stem}' images and labels...\"\n        with Pool(NUM_THREADS) as pool:\n            pbar = tqdm(pool.imap(verify_image_label, zip(self.img_files, self.label_files, repeat(prefix))),\n                        desc=desc, total=len(self.img_files))\n            for im_file, l, shape, segments, nm_f, nf_f, ne_f, nc_f, msg in pbar:\n                nm += nm_f\n                nf += nf_f\n                ne += ne_f\n                nc += nc_f\n                if im_file:\n                    x[im_file] = [l, shape, segments]\n                if msg:\n                    msgs.append(msg)\n                pbar.desc = f\"{desc}{nf} found, {nm} missing, {ne} empty, {nc} corrupted\"\n\n        pbar.close()\n        if msgs:\n            LOGGER.info('\\n'.join(msgs))\n        if nf == 0:\n            LOGGER.warning(f'{prefix}WARNING: No labels found in {path}. 
See {HELP_URL}')\n        x['hash'] = get_hash(self.label_files + self.img_files)\n        x['results'] = nf, nm, ne, nc, len(self.img_files)\n        x['msgs'] = msgs  # warnings\n        x['version'] = self.cache_version  # cache version\n        try:\n            np.save(path, x)  # save cache for next time\n            path.with_suffix('.cache.npy').rename(path)  # remove .npy suffix\n            LOGGER.info(f'{prefix}New cache created: {path}')\n        except Exception as e:\n            LOGGER.warning(f'{prefix}WARNING: Cache directory {path.parent} is not writeable: {e}')  # not writeable\n        return x\n\n    def __len__(self):\n        return len(self.img_files)\n\n    # def __iter__(self):\n    #     self.count = -1\n    #     print('ran dataset iter')\n    #     #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)\n    #     return self\n\n    def __getitem__(self, index):\n        index = self.indices[index]  # linear, shuffled, or image_weights\n\n        hyp = self.hyp\n        mosaic = self.mosaic and random.random() < hyp['mosaic']\n        if mosaic:\n            # Load mosaic\n            img, labels = load_mosaic(self, index)\n            shapes = None\n\n            # MixUp augmentation\n            if random.random() < hyp['mixup']:\n                img, labels = mixup(img, labels, *load_mosaic(self, random.randint(0, self.n - 1)))\n\n        else:\n            # Load image\n            img, (h0, w0), (h, w) = load_image(self, index)\n\n            # Letterbox\n            shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size  # final letterboxed shape\n            img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)\n            shapes = (h0, w0), ((h / h0, w / w0), pad)  # for COCO mAP rescaling\n\n            labels = self.labels[index].copy()\n            if labels.size:  # normalized xywh to pixel xyxy format\n                labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1])\n\n            if self.augment:\n                img, labels = random_perspective(img, labels,\n                                                 degrees=hyp['degrees'],\n                                                 translate=hyp['translate'],\n                                                 scale=hyp['scale'],\n                                                 shear=hyp['shear'],\n                                                 perspective=hyp['perspective'])\n\n        nl = len(labels)  # number of labels\n        if nl:\n            labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, eps=1E-3)\n\n        if self.augment:\n            # Albumentations\n            img, labels = self.albumentations(img, labels)\n            nl = len(labels)  # update after albumentations\n\n            # HSV color-space\n            augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])\n\n            # Flip up-down\n            if random.random() < hyp['flipud']:\n                img = np.flipud(img)\n                if nl:\n                    labels[:, 2] = 1 - labels[:, 2]\n\n            # Flip left-right\n            if random.random() < hyp['fliplr']:\n                img = np.fliplr(img)\n                if nl:\n                    labels[:, 1] = 1 - labels[:, 1]\n\n            # Cutouts\n            # labels = cutout(img, labels, p=0.5)\n            # nl = len(labels)  # update after cutout\n\n        
labels_out = torch.zeros((nl, 6))\n        if nl:\n            labels_out[:, 1:] = torch.from_numpy(labels)\n\n        # Convert\n        img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB\n        img = np.ascontiguousarray(img)\n\n        return torch.from_numpy(img), labels_out, self.img_files[index], shapes\n\n    @staticmethod\n    def collate_fn(batch):\n        img, label, path, shapes = zip(*batch)  # transposed\n        for i, l in enumerate(label):\n            l[:, 0] = i  # add target image index for build_targets()\n        return torch.stack(img, 0), torch.cat(label, 0), path, shapes\n\n    @staticmethod\n    def collate_fn4(batch):\n        img, label, path, shapes = zip(*batch)  # transposed\n        n = len(shapes) // 4\n        img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n]\n\n        ho = torch.tensor([[0.0, 0, 0, 1, 0, 0]])\n        wo = torch.tensor([[0.0, 0, 1, 0, 0, 0]])\n        s = torch.tensor([[1, 1, 0.5, 0.5, 0.5, 0.5]])  # scale\n        for i in range(n):  # zidane torch.zeros(16,3,720,1280)  # BCHW\n            i *= 4\n            if random.random() < 0.5:\n                im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2.0, mode='bilinear', align_corners=False)[\n                    0].type(img[i].type())\n                l = label[i]\n            else:\n                im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2)\n                l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s\n            img4.append(im)\n            label4.append(l)\n\n        for i, l in enumerate(label4):\n            l[:, 0] = i  # add target image index for build_targets()\n\n        return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4\n\n\n# Ancillary functions --------------------------------------------------------------------------------------------------\ndef load_image(self, i):\n    # loads 1 image from dataset index 'i', returns im, original hw, resized hw\n    im = self.imgs[i]\n    if im is None:  # not cached in ram\n        npy = self.img_npy[i]\n        if npy and npy.exists():  # load npy\n            im = np.load(npy)\n        else:  # read image\n            path = self.img_files[i]\n            im = cv2.imread(path)  # BGR\n            assert im is not None, f'Image Not Found {path}'\n        h0, w0 = im.shape[:2]  # orig hw\n        r = self.img_size / max(h0, w0)  # ratio\n        if r != 1:  # if sizes are not equal\n            im = cv2.resize(im, (int(w0 * r), int(h0 * r)),\n                            interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR)\n        return im, (h0, w0), im.shape[:2]  # im, hw_original, hw_resized\n    else:\n        return self.imgs[i], self.img_hw0[i], self.img_hw[i]  # im, hw_original, hw_resized\n\n\ndef load_mosaic(self, index):\n    # YOLOv5 4-mosaic loader. 
Loads 1 image + 3 random images into a 4-image mosaic\n    labels4, segments4 = [], []\n    s = self.img_size\n    yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border)  # mosaic center x, y\n    indices = [index] + random.choices(self.indices, k=3)  # 3 additional image indices\n    random.shuffle(indices)\n    for i, index in enumerate(indices):\n        # Load image\n        img, _, (h, w) = load_image(self, index)\n\n        # place img in img4\n        if i == 0:  # top left\n            img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8)  # base image with 4 tiles\n            x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc  # xmin, ymin, xmax, ymax (large image)\n            x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h  # xmin, ymin, xmax, ymax (small image)\n        elif i == 1:  # top right\n            x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc\n            x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h\n        elif i == 2:  # bottom left\n            x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)\n            x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)\n        elif i == 3:  # bottom right\n            x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)\n            x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)\n\n        img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b]  # img4[ymin:ymax, xmin:xmax]\n        padw = x1a - x1b\n        padh = y1a - y1b\n\n        # Labels\n        labels, segments = self.labels[index].copy(), self.segments[index].copy()\n        if labels.size:\n            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh)  # normalized xywh to pixel xyxy format\n            segments = [xyn2xy(x, w, h, padw, padh) for x in segments]\n        labels4.append(labels)\n        segments4.extend(segments)\n\n    # Concat/clip labels\n    labels4 = np.concatenate(labels4, 0)\n    for x in (labels4[:, 1:], *segments4):\n        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()\n    # img4, labels4 = replicate(img4, labels4)  # replicate\n\n    # Augment\n    img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp['copy_paste'])\n    img4, labels4 = random_perspective(img4, labels4, segments4,\n                                       degrees=self.hyp['degrees'],\n                                       translate=self.hyp['translate'],\n                                       scale=self.hyp['scale'],\n                                       shear=self.hyp['shear'],\n                                       perspective=self.hyp['perspective'],\n                                       border=self.mosaic_border)  # border to remove\n\n    return img4, labels4\n\n\ndef load_mosaic9(self, index):\n    # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic\n    labels9, segments9 = [], []\n    s = self.img_size\n    indices = [index] + random.choices(self.indices, k=8)  # 8 additional image indices\n    random.shuffle(indices)\n    for i, index in enumerate(indices):\n        # Load image\n        img, _, (h, w) = load_image(self, index)\n\n        # place img in img9\n        if i == 0:  # center\n            img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8)  # base image with 4 tiles\n            h0, w0 = h, w\n            c = s, s, s + w, s + h  # xmin, ymin, xmax, ymax (base) coordinates\n        elif i == 1:  # top\n            c = s, s - h, s + w, s\n        elif i == 2:  # top right\n            c = s + wp, s - h, s + wp + w, s\n        elif i == 3:  # right\n            c = s + w0, s, s + w0 + w, s + h\n        elif i == 4:  # bottom right\n            c = s + w0, s + hp, s + w0 + w, s + hp + h\n        elif i == 5:  # bottom\n            c = s + w0 - w, s + h0, s + w0, s + h0 + h\n        elif i == 6:  # bottom left\n            c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h\n        elif i == 7:  # left\n            c = s - w, s + h0 - h, s, s + h0\n        elif i == 8:  # top left\n            c = s - w, s + h0 - hp - h, s, s + h0 - hp\n\n        padx, pady = c[:2]\n        x1, y1, x2, y2 = (max(x, 0) for x in c)  # allocate coords\n\n        # Labels\n        labels, segments = self.labels[index].copy(), self.segments[index].copy()\n        if labels.size:\n            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady)  # normalized xywh to pixel xyxy format\n            segments = [xyn2xy(x, w, h, padx, pady) for x in segments]\n        labels9.append(labels)\n        segments9.extend(segments)\n\n        # Image\n        img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:]  # img9[ymin:ymax, xmin:xmax]\n        hp, wp = h, w  # height, width previous\n\n    # Offset\n    yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border)  # mosaic center x, y\n    img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s]\n\n    # Concat/clip labels\n    labels9 = np.concatenate(labels9, 0)\n    labels9[:, [1, 3]] -= xc\n    labels9[:, [2, 4]] -= yc\n    c = np.array([xc, yc])  # centers\n    segments9 = [x - c for x in segments9]\n\n    for x in (labels9[:, 1:], *segments9):\n        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()\n    # img9, labels9 = replicate(img9, labels9)  # replicate\n\n    # Augment\n    img9, labels9 = random_perspective(img9, labels9, segments9,\n                                       degrees=self.hyp['degrees'],\n                                       translate=self.hyp['translate'],\n                                       scale=self.hyp['scale'],\n                                       shear=self.hyp['shear'],\n                                       perspective=self.hyp['perspective'],\n                                       border=self.mosaic_border)  # border to remove\n\n    return img9, labels9\n\n\ndef create_folder(path='./new'):\n    # Create folder\n    if os.path.exists(path):\n        shutil.rmtree(path)  # delete output folder\n    os.makedirs(path)  # make new output folder\n\n\ndef flatten_recursive(path='../datasets/coco128'):\n    # Flatten a recursive directory by bringing all files to top level\n    new_path = Path(path + '_flat')\n    create_folder(new_path)\n    for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)):\n        shutil.copyfile(file, new_path / 
Path(file).name)\n\n\ndef extract_boxes(path='../datasets/coco128'):  # from utils.datasets import *; extract_boxes()\n    # Convert detection dataset into classification dataset, with one directory per class\n    path = Path(path)  # images dir\n    shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None  # remove existing\n    files = list(path.rglob('*.*'))\n    n = len(files)  # number of files\n    for im_file in tqdm(files, total=n):\n        if im_file.suffix[1:] in IMG_FORMATS:\n            # image\n            im = cv2.imread(str(im_file))[..., ::-1]  # BGR to RGB\n            h, w = im.shape[:2]\n\n            # labels\n            lb_file = Path(img2label_paths([str(im_file)])[0])\n            if Path(lb_file).exists():\n                with open(lb_file) as f:\n                    lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32)  # labels\n\n                for j, x in enumerate(lb):\n                    c = int(x[0])  # class\n                    f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg'  # new filename\n                    if not f.parent.is_dir():\n                        f.parent.mkdir(parents=True)\n\n                    b = x[1:] * [w, h, w, h]  # box\n                    # b[2:] = b[2:].max()  # rectangle to square\n                    b[2:] = b[2:] * 1.2 + 3  # pad\n                    b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int)\n\n                    b[[0, 2]] = np.clip(b[[0, 2]], 0, w)  # clip boxes outside of image\n                    b[[1, 3]] = np.clip(b[[1, 3]], 0, h)\n                    assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}'\n\n\ndef autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False):\n    \"\"\" Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files\n    Usage: from utils.datasets import *; autosplit()\n    Arguments\n        path:            Path to images directory\n        weights:         Train, val, test weights (list, tuple)\n        annotated_only:  Only use images with an annotated txt file\n    \"\"\"\n    path = Path(path)  # images dir\n    files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS)  # image files only\n    n = len(files)  # number of files\n    random.seed(0)  # for reproducibility\n    indices = random.choices([0, 1, 2], weights=weights, k=n)  # assign each image to a split\n\n    txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt']  # 3 txt files\n    [(path.parent / x).unlink(missing_ok=True) for x in txt]  # remove existing\n\n    print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only)\n    for i, img in tqdm(zip(indices, files), total=n):\n        if not annotated_only or Path(img2label_paths([str(img)])[0]).exists():  # check label\n            with open(path.parent / txt[i], 'a') as f:\n                f.write('./' + img.relative_to(path.parent).as_posix() + '\\n')  # add image to txt file\n\n\ndef verify_image_label(args):\n    # Verify one image-label pair\n    im_file, lb_file, prefix = args\n    nm, nf, ne, nc, msg, segments = 0, 0, 0, 0, '', []  # number (missing, found, empty, corrupt), message, segments\n    try:\n        # verify images\n        im = Image.open(im_file)\n        im.verify()  # PIL verify\n        shape = exif_size(im)  # image size\n        assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 
pixels'\n        assert im.format.lower() in IMG_FORMATS, f'invalid image format {im.format}'\n        if im.format.lower() in ('jpg', 'jpeg'):\n            with open(im_file, 'rb') as f:\n                f.seek(-2, 2)\n                if f.read() != b'\\xff\\xd9':  # corrupt JPEG\n                    ImageOps.exif_transpose(Image.open(im_file)).save(im_file, 'JPEG', subsampling=0, quality=100)\n                    msg = f'{prefix}WARNING: {im_file}: corrupt JPEG restored and saved'\n\n        # verify labels\n        if os.path.isfile(lb_file):\n            nf = 1  # label found\n            with open(lb_file) as f:\n                l = [x.split() for x in f.read().strip().splitlines() if len(x)]\n                if any([len(x) > 8 for x in l]):  # is segment\n                    classes = np.array([x[0] for x in l], dtype=np.float32)\n                    segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l]  # (cls, xy1...)\n                    l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1)  # (cls, xywh)\n                l = np.array(l, dtype=np.float32)\n            nl = len(l)\n            if nl:\n                assert l.shape[1] == 5, f'labels require 5 columns, {l.shape[1]} columns detected'\n                assert (l >= 0).all(), f'negative label values {l[l < 0]}'\n                assert (l[:, 1:] <= 1).all(), f'non-normalized or out of bounds coordinates {l[:, 1:][l[:, 1:] > 1]}'\n                _, i = np.unique(l, axis=0, return_index=True)\n                if len(i) < nl:  # duplicate row check\n                    l = l[i]  # remove duplicates\n                    if segments:\n                        segments = segments[i]\n                    msg = f'{prefix}WARNING: {im_file}: {nl - len(i)} duplicate labels removed'\n            else:\n                ne = 1  # label empty\n                l = np.zeros((0, 5), dtype=np.float32)\n        else:\n            nm = 1  # label missing\n            l = np.zeros((0, 5), dtype=np.float32)\n        return im_file, l, shape, segments, nm, nf, ne, nc, msg\n    except Exception as e:\n        nc = 1\n        msg = f'{prefix}WARNING: {im_file}: ignoring corrupt image/label: {e}'\n        return [None, None, None, None, nm, nf, ne, nc, msg]\n\n\ndef dataset_stats(path='coco128.yaml', autodownload=False, verbose=False, profile=False, hub=False):\n    \"\"\" Return dataset statistics dictionary with images and instances counts per split per class\n    To run in parent directory: export PYTHONPATH=\"$PWD/yolov5\"\n    Usage1: from utils.datasets import *; dataset_stats('coco128.yaml', autodownload=True)\n    Usage2: from utils.datasets import *; dataset_stats('../datasets/coco128_with_yaml.zip')\n    Arguments\n        path:           Path to data.yaml or data.zip (with data.yaml inside data.zip)\n        autodownload:   Attempt to download dataset if not found locally\n        verbose:        Print stats dictionary\n    \"\"\"\n\n    def round_labels(labels):\n        # Update labels to integer class and 6 decimal place floats\n        return [[int(c), *(round(x, 4) for x in points)] for c, *points in labels]\n\n    def unzip(path):\n        # Unzip data.zip TODO: CONSTRAINT: path/to/abc.zip MUST unzip to 'path/to/abc/'\n        if str(path).endswith('.zip'):  # path is data.zip\n            assert Path(path).is_file(), f'Error unzipping {path}, file not found'\n            ZipFile(path).extractall(path=path.parent)  # unzip\n            dir = path.with_suffix('')  # dataset directory == zip 
name\n            return True, str(dir), next(dir.rglob('*.yaml'))  # zipped, data_dir, yaml_path\n        else:  # path is data.yaml\n            return False, None, path\n\n    def hub_ops(f, max_dim=1920):\n        # HUB ops for 1 image 'f': resize and save at reduced quality in /dataset-hub for web/app viewing\n        f_new = im_dir / Path(f).name  # dataset-hub image filename\n        try:  # use PIL\n            im = Image.open(f)\n            r = max_dim / max(im.height, im.width)  # ratio\n            if r < 1.0:  # image too large\n                im = im.resize((int(im.width * r), int(im.height * r)))\n            im.save(f_new, 'JPEG', quality=75, optimize=True)  # save\n        except Exception as e:  # use OpenCV\n            print(f'WARNING: HUB ops PIL failure {f}: {e}')\n            im = cv2.imread(f)\n            im_height, im_width = im.shape[:2]\n            r = max_dim / max(im_height, im_width)  # ratio\n            if r < 1.0:  # image too large\n                im = cv2.resize(im, (int(im_width * r), int(im_height * r)), interpolation=cv2.INTER_AREA)\n            cv2.imwrite(str(f_new), im)\n\n    zipped, data_dir, yaml_path = unzip(Path(path))\n    with open(check_yaml(yaml_path), errors='ignore') as f:\n        data = yaml.safe_load(f)  # data dict\n        if zipped:\n            data['path'] = data_dir  # TODO: should this be dir.resolve()?\n    check_dataset(data, autodownload)  # download dataset if missing\n    hub_dir = Path(data['path'] + ('-hub' if hub else ''))\n    stats = {'nc': data['nc'], 'names': data['names']}  # statistics dictionary\n    for split in 'train', 'val', 'test':\n        if data.get(split) is None:\n            stats[split] = None  # i.e. no test set\n            continue\n        x = []\n        dataset = LoadImagesAndLabels(data[split])  # load dataset\n        for label in tqdm(dataset.labels, total=dataset.n, desc='Statistics'):\n            x.append(np.bincount(label[:, 0].astype(int), minlength=data['nc']))\n        x = np.array(x)  # shape(128x80)\n        stats[split] = {'instance_stats': {'total': int(x.sum()), 'per_class': x.sum(0).tolist()},\n                        'image_stats': {'total': dataset.n, 'unlabelled': int(np.all(x == 0, 1).sum()),\n                                        'per_class': (x > 0).sum(0).tolist()},\n                        'labels': [{str(Path(k).name): round_labels(v.tolist())} for k, v in\n                                   zip(dataset.img_files, dataset.labels)]}\n\n        if hub:\n            im_dir = hub_dir / 'images'\n            im_dir.mkdir(parents=True, exist_ok=True)\n            for _ in tqdm(ThreadPool(NUM_THREADS).imap(hub_ops, dataset.img_files), total=dataset.n, desc='HUB Ops'):\n                pass\n\n    # Profile\n    stats_path = hub_dir / 'stats.json'\n    if profile:\n        for _ in range(1):\n            file = stats_path.with_suffix('.npy')\n            t1 = time.time()\n            np.save(file, stats)\n            t2 = time.time()\n            x = np.load(file, allow_pickle=True)\n            print(f'stats.npy times: {time.time() - t2:.3f}s read, {t2 - t1:.3f}s write')\n\n            file = stats_path.with_suffix('.json')\n            t1 = time.time()\n            with open(file, 'w') as f:\n                json.dump(stats, f)  # save stats *.json\n            t2 = time.time()\n            with open(file) as f:\n                x = json.load(f)  # load hyps dict\n            print(f'stats.json times: {time.time() - t2:.3f}s read, {t2 - t1:.3f}s write')\n\n    # Save, 
print and return\n    if hub:\n        print(f'Saving {stats_path.resolve()}...')\n        with open(stats_path, 'w') as f:\n            json.dump(stats, f)  # save stats.json\n    if verbose:\n        print(json.dumps(stats, indent=2, sort_keys=False))\n    return stats\n"
  },
  {
    "path": "utils/downloads.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nDownload utils\n\"\"\"\n\nimport os\nimport platform\nimport subprocess\nimport time\nimport urllib\nfrom pathlib import Path\nfrom zipfile import ZipFile\n\nimport requests\nimport torch\n\n\ndef gsutil_getsize(url=''):\n    # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du\n    s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')\n    return eval(s.split(' ')[0]) if len(s) else 0  # bytes\n\n\ndef safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):\n    # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes\n    file = Path(file)\n    assert_msg = f\"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}\"\n    try:  # url1\n        print(f'Downloading {url} to {file}...')\n        torch.hub.download_url_to_file(url, str(file))\n        assert file.exists() and file.stat().st_size > min_bytes, assert_msg  # check\n    except Exception as e:  # url2\n        file.unlink(missing_ok=True)  # remove partial downloads\n        print(f'ERROR: {e}\\nRe-attempting {url2 or url} to {file}...')\n        os.system(f\"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -\")  # curl download, retry and resume on fail\n    finally:\n        if not file.exists() or file.stat().st_size < min_bytes:  # check\n            file.unlink(missing_ok=True)  # remove partial downloads\n            print(f\"ERROR: {assert_msg}\\n{error_msg}\")\n        print('')\n\n\ndef attempt_download(file, repo='ultralytics/yolov5'):  # from utils.downloads import *; attempt_download()\n    # Attempt file download if does not exist\n    file = Path(str(file).strip().replace(\"'\", ''))\n\n    if not file.exists():\n        # URL specified\n        name = Path(urllib.parse.unquote(str(file))).name  # decode '%2F' to '/' etc.\n        if str(file).startswith(('http:/', 'https:/')):  # download\n            url = str(file).replace(':/', '://')  # Pathlib turns :// -> :/\n            name = name.split('?')[0]  # parse authentication https://url.com/file.txt?auth...\n            safe_download(file=name, url=url, min_bytes=1E5)\n            return name\n\n        # GitHub assets\n        file.parent.mkdir(parents=True, exist_ok=True)  # make parent dir (if required)\n        try:\n            response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json()  # github api\n            assets = [x['name'] for x in response['assets']]  # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...]\n            tag = response['tag_name']  # i.e. 
'v1.0'\n        except:  # fallback plan\n            assets = ['yolov5n.pt', 'yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt',\n                      'yolov5n6.pt', 'yolov5s6.pt', 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt']\n            try:\n                tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1]\n            except:\n                tag = 'v6.0'  # current release\n\n        if name in assets:\n            safe_download(file,\n                          url=f'https://github.com/{repo}/releases/download/{tag}/{name}',\n                          # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}',  # backup url (optional)\n                          min_bytes=1E5,\n                          error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/')\n\n    return str(file)\n\n\ndef gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'):\n    # Downloads a file from Google Drive. from yolov5.utils.downloads import *; gdrive_download()\n    t = time.time()\n    file = Path(file)\n    cookie = Path('cookie')  # gdrive cookie\n    print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')\n    file.unlink(missing_ok=True)  # remove existing file\n    cookie.unlink(missing_ok=True)  # remove existing cookie\n\n    # Attempt file download\n    out = \"NUL\" if platform.system() == \"Windows\" else \"/dev/null\"\n    os.system(f'curl -c ./cookie -s -L \"drive.google.com/uc?export=download&id={id}\" > {out}')\n    if os.path.exists('cookie'):  # large file\n        s = f'curl -Lb ./cookie \"drive.google.com/uc?export=download&confirm={get_token()}&id={id}\" -o {file}'\n    else:  # small file\n        s = f'curl -s -L -o {file} \"drive.google.com/uc?export=download&id={id}\"'\n    r = os.system(s)  # execute, capture return\n    cookie.unlink(missing_ok=True)  # remove existing cookie\n\n    # Error check\n    if r != 0:\n        file.unlink(missing_ok=True)  # remove partial\n        print('Download error ')  # raise Exception('Download error')\n        return r\n\n    # Unzip if archive\n    if file.suffix == '.zip':\n        print('unzipping... 
', end='')\n        ZipFile(file).extractall(path=file.parent)  # unzip\n        file.unlink()  # remove zip\n\n    print(f'Done ({time.time() - t:.1f}s)')\n    return r\n\n\ndef get_token(cookie=\"./cookie\"):\n    with open(cookie) as f:\n        for line in f:\n            if \"download\" in line:\n                return line.split()[-1]\n    return \"\"\n\n# Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------\n#\n#\n# def upload_blob(bucket_name, source_file_name, destination_blob_name):\n#     # Uploads a file to a bucket\n#     # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python\n#\n#     storage_client = storage.Client()\n#     bucket = storage_client.get_bucket(bucket_name)\n#     blob = bucket.blob(destination_blob_name)\n#\n#     blob.upload_from_filename(source_file_name)\n#\n#     print('File {} uploaded to {}.'.format(\n#         source_file_name,\n#         destination_blob_name))\n#\n#\n# def download_blob(bucket_name, source_blob_name, destination_file_name):\n#     # Uploads a blob from a bucket\n#     storage_client = storage.Client()\n#     bucket = storage_client.get_bucket(bucket_name)\n#     blob = bucket.blob(source_blob_name)\n#\n#     blob.download_to_filename(destination_file_name)\n#\n#     print('Blob {} downloaded to {}.'.format(\n#         source_blob_name,\n#         destination_file_name))\n"
  },
  {
    "path": "utils/flask_rest_api/README.md",
    "content": "# Flask REST API\n\n[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/wiki/API)s are\ncommonly used to expose Machine Learning (ML)  models to other services. This folder contains an example REST API\ncreated using Flask to expose the YOLOv5s model from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/).\n\n## Requirements\n\n[Flask](https://palletsprojects.com/p/flask/) is required. Install with:\n\n```shell\n$ pip install Flask\n```\n\n## Run\n\nAfter Flask installation run:\n\n```shell\n$ python3 restapi.py --port 5000\n```\n\nThen use [curl](https://curl.se/) to perform a request:\n\n```shell\n$ curl -X POST -F image=@zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s'\n```\n\nThe model inference results are returned as a JSON response:\n\n```json\n[\n  {\n    \"class\": 0,\n    \"confidence\": 0.8900438547,\n    \"height\": 0.9318675399,\n    \"name\": \"person\",\n    \"width\": 0.3264600933,\n    \"xcenter\": 0.7438579798,\n    \"ycenter\": 0.5207948685\n  },\n  {\n    \"class\": 0,\n    \"confidence\": 0.8440024257,\n    \"height\": 0.7155083418,\n    \"name\": \"person\",\n    \"width\": 0.6546785235,\n    \"xcenter\": 0.427829951,\n    \"ycenter\": 0.6334488392\n  },\n  {\n    \"class\": 27,\n    \"confidence\": 0.3771208823,\n    \"height\": 0.3902671337,\n    \"name\": \"tie\",\n    \"width\": 0.0696444362,\n    \"xcenter\": 0.3675483763,\n    \"ycenter\": 0.7991207838\n  },\n  {\n    \"class\": 27,\n    \"confidence\": 0.3527112305,\n    \"height\": 0.1540903747,\n    \"name\": \"tie\",\n    \"width\": 0.0336618312,\n    \"xcenter\": 0.7814827561,\n    \"ycenter\": 0.5065554976\n  }\n]\n```\n\nAn example python script to perform inference using [requests](https://docs.python-requests.org/en/master/) is given\nin `example_request.py`\n"
  },
  {
    "path": "utils/flask_rest_api/example_request.py",
    "content": "\"\"\"Perform test request\"\"\"\nimport pprint\n\nimport requests\n\nDETECTION_URL = \"http://localhost:5000/v1/object-detection/yolov5s\"\nTEST_IMAGE = \"zidane.jpg\"\n\nimage_data = open(TEST_IMAGE, \"rb\").read()\n\nresponse = requests.post(DETECTION_URL, files={\"image\": image_data}).json()\n\npprint.pprint(response)\n"
  },
  {
    "path": "utils/flask_rest_api/restapi.py",
    "content": "\"\"\"\nRun a rest API exposing the yolov5s object detection model\n\"\"\"\nimport argparse\nimport io\n\nimport torch\nfrom flask import Flask, request\nfrom PIL import Image\n\napp = Flask(__name__)\n\nDETECTION_URL = \"/v1/object-detection/yolov5s\"\n\n\n@app.route(DETECTION_URL, methods=[\"POST\"])\ndef predict():\n    if not request.method == \"POST\":\n        return\n\n    if request.files.get(\"image\"):\n        image_file = request.files[\"image\"]\n        image_bytes = image_file.read()\n\n        img = Image.open(io.BytesIO(image_bytes))\n\n        results = model(img, size=640)  # reduce size=320 for faster inference\n        return results.pandas().xyxy[0].to_json(orient=\"records\")\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description=\"Flask API exposing YOLOv5 model\")\n    parser.add_argument(\"--port\", default=5000, type=int, help=\"port number\")\n    args = parser.parse_args()\n\n    model = torch.hub.load(\"ultralytics/yolov5\", \"yolov5s\", force_reload=True)  # force_reload to recache\n    app.run(host=\"0.0.0.0\", port=args.port)  # debug=True causes Restarting with stat\n"
  },
  {
    "path": "utils/general.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nGeneral utils\n\"\"\"\n\nimport contextlib\nimport glob\nimport logging\nimport math\nimport os\nimport platform\nimport random\nimport re\nimport shutil\nimport signal\nimport time\nimport urllib\nfrom itertools import repeat\nfrom multiprocessing.pool import ThreadPool\nfrom pathlib import Path\nfrom subprocess import check_output\nfrom zipfile import ZipFile\n\nimport cv2\nimport numpy as np\nimport pandas as pd\nimport pkg_resources as pkg\nimport torch\nimport torchvision\nimport yaml\n\nfrom utils.downloads import gsutil_getsize\nfrom utils.metrics import box_iou, fitness\n\n# Settings\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[1]  # YOLOv5 root directory\nNUM_THREADS = min(8, max(1, os.cpu_count() - 1))  # number of YOLOv5 multiprocessing threads\n\ntorch.set_printoptions(linewidth=320, precision=5, profile='long')\nnp.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format})  # format short g, %precision=5\npd.options.display.max_columns = 10\ncv2.setNumThreads(0)  # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)\nos.environ['NUMEXPR_MAX_THREADS'] = str(NUM_THREADS)  # NumExpr max threads\n\n\ndef set_logging(name=None, verbose=True):\n    # Sets level and returns logger\n    rank = int(os.getenv('RANK', -1))  # rank in world for Multi-GPU trainings\n    logging.basicConfig(format=\"%(message)s\", level=logging.INFO if (verbose and rank in (-1, 0)) else logging.WARNING)\n    return logging.getLogger(name)\n\n\nLOGGER = set_logging(__name__)  # define globally (used in train.py, val.py, detect.py, etc.)\n\n\nclass Profile(contextlib.ContextDecorator):\n    # Usage: @Profile() decorator or 'with Profile():' context manager\n    def __enter__(self):\n        self.start = time.time()\n\n    def __exit__(self, type, value, traceback):\n        print(f'Profile results: {time.time() - self.start:.5f}s')\n\n\nclass Timeout(contextlib.ContextDecorator):\n    # Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager\n    def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors=True):\n        self.seconds = int(seconds)\n        self.timeout_message = timeout_msg\n        self.suppress = bool(suppress_timeout_errors)\n\n    def _timeout_handler(self, signum, frame):\n        raise TimeoutError(self.timeout_message)\n\n    def __enter__(self):\n        signal.signal(signal.SIGALRM, self._timeout_handler)  # Set handler for SIGALRM\n        signal.alarm(self.seconds)  # start countdown for SIGALRM to be raised\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        signal.alarm(0)  # Cancel SIGALRM if it's scheduled\n        if self.suppress and exc_type is TimeoutError:  # Suppress TimeoutError\n            return True\n\n\nclass WorkingDirectory(contextlib.ContextDecorator):\n    # Usage: @WorkingDirectory(dir) decorator or 'with WorkingDirectory(dir):' context manager\n    def __init__(self, new_dir):\n        self.dir = new_dir  # new dir\n        self.cwd = Path.cwd().resolve()  # current dir\n\n    def __enter__(self):\n        os.chdir(self.dir)\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        os.chdir(self.cwd)\n\n\ndef try_except(func):\n    # try-except function. 
Usage: @try_except decorator\n    def handler(*args, **kwargs):\n        try:\n            func(*args, **kwargs)\n        except Exception as e:\n            print(e)\n\n    return handler\n\n\ndef methods(instance):\n    # Get class/instance methods\n    return [f for f in dir(instance) if callable(getattr(instance, f)) and not f.startswith(\"__\")]\n\n\ndef print_args(name, opt):\n    # Print argparser arguments\n    LOGGER.info(colorstr(f'{name}: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))\n\n\ndef init_seeds(seed=0):\n    # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html\n    # cudnn seed 0 settings are slower and more reproducible, else faster and less reproducible\n    import torch.backends.cudnn as cudnn\n    random.seed(seed)\n    np.random.seed(seed)\n    torch.manual_seed(seed)\n    cudnn.benchmark, cudnn.deterministic = (False, True) if seed == 0 else (True, False)\n\n\ndef intersect_dicts(da, db, exclude=()):\n    # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values\n    return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}\n\n\ndef get_latest_run(search_dir='.'):\n    # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)\n    last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)\n    return max(last_list, key=os.path.getctime) if last_list else ''\n\n\ndef user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'):\n    # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required.\n    env = os.getenv(env_var)\n    if env:\n        path = Path(env)  # use environment variable\n    else:\n        cfg = {'Windows': 'AppData/Roaming', 'Linux': '.config', 'Darwin': 'Library/Application Support'}  # 3 OS dirs\n        path = Path.home() / cfg.get(platform.system(), '')  # OS-specific config dir\n        path = (path if is_writeable(path) else Path('/tmp')) / dir  # GCP and AWS lambda fix, only /tmp is writeable\n    path.mkdir(exist_ok=True)  # make if required\n    return path\n\n\ndef is_writeable(dir, test=False):\n    # Return True if directory has write permissions, test opening a file with write permissions if test=True\n    if test:  # method 1\n        file = Path(dir) / 'tmp.txt'\n        try:\n            with open(file, 'w'):  # open file with write permissions\n                pass\n            file.unlink()  # remove file\n            return True\n        except OSError:\n            return False\n    else:  # method 2\n        return os.access(dir, os.R_OK)  # possible issues on Windows\n\n\ndef is_docker():\n    # Is environment a Docker container?\n    return Path('/workspace').exists()  # or Path('/.dockerenv').exists()\n\n\ndef is_colab():\n    # Is environment a Google Colab instance?\n    try:\n        import google.colab\n        return True\n    except ImportError:\n        return False\n\n\ndef is_pip():\n    # Is file in a pip package?\n    return 'site-packages' in Path(__file__).resolve().parts\n\n\ndef is_ascii(s=''):\n    # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7)\n    s = str(s)  # convert list, tuple, None, etc. 
to str\n    return len(s.encode().decode('ascii', 'ignore')) == len(s)\n\n\ndef is_chinese(s='人工智能'):\n    # Is string composed of any Chinese characters?\n    return re.search('[\\u4e00-\\u9fff]', s)\n\n\ndef emojis(str=''):\n    # Return platform-dependent emoji-safe version of string\n    return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str\n\n\ndef file_size(path):\n    # Return file/dir size (MB)\n    path = Path(path)\n    if path.is_file():\n        return path.stat().st_size / 1E6\n    elif path.is_dir():\n        return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / 1E6\n    else:\n        return 0.0\n\n\ndef check_online():\n    # Check internet connectivity\n    import socket\n    try:\n        socket.create_connection((\"1.1.1.1\", 443), 5)  # check host accessibility\n        return True\n    except OSError:\n        return False\n\n\n@try_except\n@WorkingDirectory(ROOT)\ndef check_git_status():\n    # Recommend 'git pull' if code is out of date\n    msg = ', for updates see https://github.com/ultralytics/yolov5'\n    print(colorstr('github: '), end='')\n    assert Path('.git').exists(), 'skipping check (not a git repository)' + msg\n    assert not is_docker(), 'skipping check (Docker image)' + msg\n    assert check_online(), 'skipping check (offline)' + msg\n\n    cmd = 'git fetch && git config --get remote.origin.url'\n    url = check_output(cmd, shell=True, timeout=5).decode().strip().rstrip('.git')  # git fetch\n    branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip()  # checked out\n    n = int(check_output(f'git rev-list {branch}..origin/master --count', shell=True))  # commits behind\n    if n > 0:\n        s = f\"⚠️ YOLOv5 is out of date by {n} commit{'s' * (n > 1)}. Use `git pull` or `git clone {url}` to update.\"\n    else:\n        s = f'up to date with {url} ✅'\n    print(emojis(s))  # emoji-safe\n\n\ndef check_python(minimum='3.6.2'):\n    # Check current python version vs. required python version\n    check_version(platform.python_version(), minimum, name='Python ', hard=True)\n\n\ndef check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False):\n    # Check version vs. 
required version\n    current, minimum = (pkg.parse_version(x) for x in (current, minimum))\n    result = (current == minimum) if pinned else (current >= minimum)  # bool\n    if hard:  # assert min requirements met\n        assert result, f'{name}{minimum} required by YOLOv5, but {name}{current} is currently installed'\n    else:\n        return result\n\n\n@try_except\ndef check_requirements(requirements=ROOT / 'requirements.txt', exclude=(), install=True):\n    # Check installed dependencies meet requirements (pass *.txt file or list of packages)\n    prefix = colorstr('red', 'bold', 'requirements:')\n    check_python()  # check python version\n    if isinstance(requirements, (str, Path)):  # requirements.txt file\n        file = Path(requirements)\n        assert file.exists(), f\"{prefix} {file.resolve()} not found, check failed.\"\n        with file.open() as f:\n            requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(f) if x.name not in exclude]\n    else:  # list or tuple of packages\n        requirements = [x for x in requirements if x not in exclude]\n\n    n = 0  # number of packages updates\n    for r in requirements:\n        try:\n            pkg.require(r)\n        except Exception as e:  # DistributionNotFound or VersionConflict if requirements not met\n            s = f\"{prefix} {r} not found and is required by YOLOv5\"\n            if install:\n                print(f\"{s}, attempting auto-update...\")\n                try:\n                    assert check_online(), f\"'pip install {r}' skipped (offline)\"\n                    print(check_output(f\"pip install '{r}'\", shell=True).decode())\n                    n += 1\n                except Exception as e:\n                    print(f'{prefix} {e}')\n            else:\n                print(f'{s}. Please install and rerun your command.')\n\n    if n:  # if packages updated\n        source = file.resolve() if 'file' in locals() else requirements\n        s = f\"{prefix} {n} package{'s' * (n > 1)} updated per {source}\\n\" \\\n            f\"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\\n\"\n        print(emojis(s))\n\n\ndef check_img_size(imgsz, s=32, floor=0):\n    # Verify image size is a multiple of stride s in each dimension\n    if isinstance(imgsz, int):  # integer i.e. img_size=640\n        new_size = max(make_divisible(imgsz, int(s)), floor)\n    else:  # list i.e. 
img_size=[640, 480]\n        new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz]\n    if new_size != imgsz:\n        print(f'WARNING: --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}')\n    return new_size\n\n\ndef check_imshow():\n    # Check if environment supports image displays\n    try:\n        assert not is_docker(), 'cv2.imshow() is disabled in Docker environments'\n        assert not is_colab(), 'cv2.imshow() is disabled in Google Colab environments'\n        cv2.imshow('test', np.zeros((1, 1, 3)))\n        cv2.waitKey(1)\n        cv2.destroyAllWindows()\n        cv2.waitKey(1)\n        return True\n    except Exception as e:\n        print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\\n{e}')\n        return False\n\n\ndef check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''):\n    # Check file(s) for acceptable suffix\n    if file and suffix:\n        if isinstance(suffix, str):\n            suffix = [suffix]\n        for f in file if isinstance(file, (list, tuple)) else [file]:\n            s = Path(f).suffix.lower()  # file suffix\n            if len(s):\n                assert s in suffix, f\"{msg}{f} acceptable suffix is {suffix}\"\n\n\ndef check_yaml(file, suffix=('.yaml', '.yml')):\n    # Search/download YAML file (if necessary) and return path, checking suffix\n    return check_file(file, suffix)\n\n\ndef check_file(file, suffix=''):\n    # Search/download file (if necessary) and return path\n    check_suffix(file, suffix)  # optional\n    file = str(file)  # convert to str()\n    if Path(file).is_file() or file == '':  # exists\n        return file\n    elif file.startswith(('http:/', 'https:/')):  # download\n        url = str(Path(file)).replace(':/', '://')  # Pathlib turns :// -> :/\n        file = Path(urllib.parse.unquote(file).split('?')[0]).name  # '%2F' to '/', split https://url.com/file.txt?auth\n        if Path(file).is_file():\n            print(f'Found {url} locally at {file}')  # file already exists\n        else:\n            print(f'Downloading {url} to {file}...')\n            torch.hub.download_url_to_file(url, file)\n            assert Path(file).exists() and Path(file).stat().st_size > 0, f'File download failed: {url}'  # check\n        return file\n    else:  # search\n        files = []\n        for d in 'data', 'models', 'utils':  # search directories\n            files.extend(glob.glob(str(ROOT / d / '**' / file), recursive=True))  # find file\n        assert len(files), f'File not found: {file}'  # assert file was found\n        assert len(files) == 1, f\"Multiple files match '{file}', specify exact path: {files}\"  # assert unique\n        return files[0]  # return file\n\n\ndef check_dataset(data, autodownload=True):\n    # Download and/or unzip dataset if not found locally\n    # Usage: https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128_with_yaml.zip\n\n    # Download (optional)\n    extract_dir = ''\n    if isinstance(data, (str, Path)) and str(data).endswith('.zip'):  # i.e. 
gs://bucket/dir/coco128.zip\n        download(data, dir='../datasets', unzip=True, delete=False, curl=False, threads=1)\n        data = next((Path('../datasets') / Path(data).stem).rglob('*.yaml'))\n        extract_dir, autodownload = data.parent, False\n\n    # Read yaml (optional)\n    if isinstance(data, (str, Path)):\n        with open(data, errors='ignore') as f:\n            data = yaml.safe_load(f)  # dictionary\n\n    # Parse yaml\n    path = extract_dir or Path(data.get('path') or '')  # optional 'path' default to '.'\n    for k in 'train', 'val', 'test':\n        if data.get(k):  # prepend path\n            data[k] = str(path / data[k]) if isinstance(data[k], str) else [str(path / x) for x in data[k]]\n\n    assert 'nc' in data, \"Dataset 'nc' key missing.\"\n    if 'names' not in data:\n        data['names'] = [f'class{i}' for i in range(data['nc'])]  # assign class names if missing\n    train, val, test, s = (data.get(x) for x in ('train', 'val', 'test', 'download'))\n    if val:\n        val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])]  # val path\n        if not all(x.exists() for x in val):\n            print('\\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])\n            if s and autodownload:  # download script\n                root = path.parent if 'path' in data else '..'  # unzip directory i.e. '../'\n                if s.startswith('http') and s.endswith('.zip'):  # URL\n                    f = Path(s).name  # filename\n                    print(f'Downloading {s} to {f}...')\n                    torch.hub.download_url_to_file(s, f)\n                    Path(root).mkdir(parents=True, exist_ok=True)  # create root\n                    ZipFile(f).extractall(path=root)  # unzip\n                    Path(f).unlink()  # remove zip\n                    r = None  # success\n                elif s.startswith('bash '):  # bash script\n                    print(f'Running {s} ...')\n                    r = os.system(s)\n                else:  # python script\n                    r = exec(s, {'yaml': data})  # return None\n                print(f\"Dataset autodownload {f'success, saved to {root}' if r in (0, None) else 'failure'}\\n\")\n            else:\n                raise Exception('Dataset not found.')\n\n    return data  # dictionary\n\n\ndef url2file(url):\n    # Convert URL to filename, i.e. 
https://url.com/file.txt?auth -> file.txt\n    url = str(Path(url)).replace(':/', '://')  # Pathlib turns :// -> :/\n    file = Path(urllib.parse.unquote(url)).name.split('?')[0]  # '%2F' to '/', split https://url.com/file.txt?auth\n    return file\n\n\ndef download(url, dir='.', unzip=True, delete=True, curl=False, threads=1):\n    # Multi-threaded file download and unzip function, used in data.yaml for autodownload\n    def download_one(url, dir):\n        # Download 1 file\n        f = dir / Path(url).name  # filename\n        if Path(url).is_file():  # exists in current path\n            Path(url).rename(f)  # move to dir\n        elif not f.exists():\n            print(f'Downloading {url} to {f}...')\n            if curl:\n                os.system(f\"curl -L '{url}' -o '{f}' --retry 9 -C -\")  # curl download, retry and resume on fail\n            else:\n                torch.hub.download_url_to_file(url, f, progress=True)  # torch download\n        if unzip and f.suffix in ('.zip', '.gz'):\n            print(f'Unzipping {f}...')\n            if f.suffix == '.zip':\n                ZipFile(f).extractall(path=dir)  # unzip\n            elif f.suffix == '.gz':\n                os.system(f'tar xfz {f} --directory {f.parent}')  # unzip\n            if delete:\n                f.unlink()  # remove zip\n\n    dir = Path(dir)\n    dir.mkdir(parents=True, exist_ok=True)  # make directory\n    if threads > 1:\n        pool = ThreadPool(threads)\n        pool.imap(lambda x: download_one(*x), zip(url, repeat(dir)))  # multi-threaded\n        pool.close()\n        pool.join()\n    else:\n        for u in [url] if isinstance(url, (str, Path)) else url:\n            download_one(u, dir)\n\n\ndef make_divisible(x, divisor):\n    # Returns nearest x divisible by divisor\n    if isinstance(divisor, torch.Tensor):\n        divisor = int(divisor.max())  # to int\n    return math.ceil(x / divisor) * divisor\n\n\ndef clean_str(s):\n    # Cleans a string by replacing special characters with underscore _\n    return re.sub(pattern=\"[|@#!¡·$€%&()=?¿^*;:,¨´><+]\", repl=\"_\", string=s)\n\n\ndef one_cycle(y1=0.0, y2=1.0, steps=100):\n    # lambda function for sinusoidal ramp from y1 to y2 https://arxiv.org/pdf/1812.01187.pdf\n    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1\n\n\ndef colorstr(*input):\n    # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e.  
colorstr('blue', 'hello world')\n    *args, string = input if len(input) > 1 else ('blue', 'bold', input[0])  # color arguments, string\n    colors = {'black': '\\033[30m',  # basic colors\n              'red': '\\033[31m',\n              'green': '\\033[32m',\n              'yellow': '\\033[33m',\n              'blue': '\\033[34m',\n              'magenta': '\\033[35m',\n              'cyan': '\\033[36m',\n              'white': '\\033[37m',\n              'bright_black': '\\033[90m',  # bright colors\n              'bright_red': '\\033[91m',\n              'bright_green': '\\033[92m',\n              'bright_yellow': '\\033[93m',\n              'bright_blue': '\\033[94m',\n              'bright_magenta': '\\033[95m',\n              'bright_cyan': '\\033[96m',\n              'bright_white': '\\033[97m',\n              'end': '\\033[0m',  # misc\n              'bold': '\\033[1m',\n              'underline': '\\033[4m'}\n    return ''.join(colors[x] for x in args) + f'{string}' + colors['end']\n\n\ndef labels_to_class_weights(labels, nc=80):\n    # Get class weights (inverse frequency) from training labels\n    if labels[0] is None:  # no labels loaded\n        return torch.Tensor()\n\n    labels = np.concatenate(labels, 0)  # labels.shape = (866643, 5) for COCO\n    classes = labels[:, 0].astype(np.int)  # labels = [class xywh]\n    weights = np.bincount(classes, minlength=nc)  # occurrences per class\n\n    # Prepend gridpoint count (for uCE training)\n    # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum()  # gridpoints per image\n    # weights = np.hstack([gpi * len(labels)  - weights.sum() * 9, weights * 9]) ** 0.5  # prepend gridpoints to start\n\n    weights[weights == 0] = 1  # replace empty bins with 1\n    weights = 1 / weights  # number of targets per class\n    weights /= weights.sum()  # normalize\n    return torch.from_numpy(weights)\n\n\ndef labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):\n    # Produces image weights based on class_weights and image contents\n    class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels])\n    image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)\n    # index = random.choices(range(n), weights=image_weights, k=1)  # weight image sample\n    return image_weights\n\n\ndef coco80_to_coco91_class():  # converts 80-index (val2014) to 91-index (paper)\n    # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/\n    # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\\n')\n    # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\\n')\n    # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)]  # darknet to coco\n    # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)]  # coco to darknet\n    x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,\n         35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,\n         64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]\n    return x\n\n\ndef xyxy2xywh(x):\n    # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = (x[:, 0] + x[:, 2]) / 2  # x center\n    y[:, 1] = (x[:, 1] + x[:, 3]) / 2  # y center\n    y[:, 2] = x[:, 2] - x[:, 0]  # width\n    y[:, 3] = x[:, 3] - 
x[:, 1]  # height\n    return y\n\n\ndef xywh2xyxy(x):\n    # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x\n    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y\n    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x\n    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y\n    return y\n\n\ndef xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):\n    # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw  # top left x\n    y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh  # top left y\n    y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw  # bottom right x\n    y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh  # bottom right y\n    return y\n\n\ndef xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0):\n    # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right\n    if clip:\n        clip_coords(x, (h - eps, w - eps))  # warning: inplace clip\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = ((x[:, 0] + x[:, 2]) / 2) / w  # x center\n    y[:, 1] = ((x[:, 1] + x[:, 3]) / 2) / h  # y center\n    y[:, 2] = (x[:, 2] - x[:, 0]) / w  # width\n    y[:, 3] = (x[:, 3] - x[:, 1]) / h  # height\n    return y\n\n\ndef xyn2xy(x, w=640, h=640, padw=0, padh=0):\n    # Convert normalized segments into pixel segments, shape (n,2)\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = w * x[:, 0] + padw  # top left x\n    y[:, 1] = h * x[:, 1] + padh  # top left y\n    return y\n\n\ndef segment2box(segment, width=640, height=640):\n    # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)\n    x, y = segment.T  # segment xy\n    inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)\n    x, y, = x[inside], y[inside]\n    return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4))  # xyxy\n\n\ndef segments2boxes(segments):\n    # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) 
to (cls, xywh)\n    boxes = []\n    for s in segments:\n        x, y = s.T  # segment xy\n        boxes.append([x.min(), y.min(), x.max(), y.max()])  # cls, xyxy\n    return xyxy2xywh(np.array(boxes))  # cls, xywh\n\n\ndef resample_segments(segments, n=1000):\n    # Up-sample an (n,2) segment\n    for i, s in enumerate(segments):\n        x = np.linspace(0, len(s) - 1, n)\n        xp = np.arange(len(s))\n        segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T  # segment xy\n    return segments\n\n\ndef scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):\n    # Rescale coords (xyxy) from img1_shape to img0_shape\n    if ratio_pad is None:  # calculate from img0_shape\n        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain  = old / new\n        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding\n    else:\n        gain = ratio_pad[0][0]\n        pad = ratio_pad[1]\n\n    coords[:, [0, 2]] -= pad[0]  # x padding\n    coords[:, [1, 3]] -= pad[1]  # y padding\n    coords[:, :4] /= gain\n    clip_coords(coords, img0_shape)\n    return coords\n\n\ndef clip_coords(boxes, shape):\n    # Clip bounding xyxy bounding boxes to image shape (height, width)\n    if isinstance(boxes, torch.Tensor):  # faster individually\n        boxes[:, 0].clamp_(0, shape[1])  # x1\n        boxes[:, 1].clamp_(0, shape[0])  # y1\n        boxes[:, 2].clamp_(0, shape[1])  # x2\n        boxes[:, 3].clamp_(0, shape[0])  # y2\n    else:  # np.array (faster grouped)\n        boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1])  # x1, x2\n        boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0])  # y1, y2\n\n\ndef non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,\n                        labels=(), max_det=300):\n    \"\"\"Runs Non-Maximum Suppression (NMS) on inference results\n\n    Returns:\n         list of detections, on (n,6) tensor per image [xyxy, conf, cls]\n    \"\"\"\n\n    nc = prediction.shape[2] - 5  # number of classes\n    xc = prediction[..., 4] > conf_thres  # candidates\n\n    # Checks\n    assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'\n    assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'\n\n    # Settings\n    min_wh, max_wh = 2, 4096  # (pixels) minimum and maximum box width and height\n    max_nms = 30000  # maximum number of boxes into torchvision.ops.nms()\n    time_limit = 10.0  # seconds to quit after\n    redundant = True  # require redundant detections\n    multi_label &= nc > 1  # multiple labels per box (adds 0.5ms/img)\n    merge = False  # use merge-NMS\n\n    t = time.time()\n    output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]\n    for xi, x in enumerate(prediction):  # image index, image inference\n        # Apply constraints\n        # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0  # width-height\n        x = x[xc[xi]]  # confidence\n\n        # Cat apriori labels if autolabelling\n        if labels and len(labels[xi]):\n            l = labels[xi]\n            v = torch.zeros((len(l), nc + 5), device=x.device)\n            v[:, :4] = l[:, 1:5]  # box\n            v[:, 4] = 1.0  # conf\n            v[range(len(l)), l[:, 0].long() + 5] = 1.0  # cls\n            x = torch.cat((x, v), 0)\n\n        # If none remain process next 
image\n        if not x.shape[0]:\n            continue\n\n        # Compute conf\n        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf\n\n        # Box (center x, center y, width, height) to (x1, y1, x2, y2)\n        box = xywh2xyxy(x[:, :4])\n\n        # Detections matrix nx6 (xyxy, conf, cls)\n        if multi_label:\n            i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T\n            x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)\n        else:  # best class only\n            conf, j = x[:, 5:].max(1, keepdim=True)\n            x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]\n\n        # Filter by class\n        if classes is not None:\n            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]\n\n        # Apply finite constraint\n        # if not torch.isfinite(x).all():\n        #     x = x[torch.isfinite(x).all(1)]\n\n        # Check shape\n        n = x.shape[0]  # number of boxes\n        if not n:  # no boxes\n            continue\n        elif n > max_nms:  # excess boxes\n            x = x[x[:, 4].argsort(descending=True)[:max_nms]]  # sort by confidence\n\n        # Batched NMS\n        c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes\n        boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores\n        i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS\n        if i.shape[0] > max_det:  # limit detections\n            i = i[:max_det]\n        if merge and (1 < n < 3E3):  # Merge NMS (boxes merged using weighted mean)\n            # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)\n            iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix\n            weights = iou * scores[None]  # box weights\n            x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes\n            if redundant:\n                i = i[iou.sum(1) > 1]  # require redundancy\n\n        output[xi] = x[i]\n        if (time.time() - t) > time_limit:\n            print(f'WARNING: NMS time limit {time_limit}s exceeded')\n            break  # time limit exceeded\n\n    return output\n\n\ndef strip_optimizer(f='best.pt', s=''):  # from utils.general import *; strip_optimizer()\n    # Strip optimizer from 'f' to finalize training, optionally save as 's'\n    x = torch.load(f, map_location=torch.device('cpu'))\n    if x.get('ema'):\n        x['model'] = x['ema']  # replace model with ema\n    for k in 'optimizer', 'best_fitness', 'wandb_id', 'ema', 'updates':  # keys\n        x[k] = None\n    x['epoch'] = -1\n    x['model'].half()  # to FP16\n    for p in x['model'].parameters():\n        p.requires_grad = False\n    torch.save(x, s or f)\n    mb = os.path.getsize(s or f) / 1E6  # filesize\n    print(f\"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB\")\n\n\ndef print_mutation(results, hyp, save_dir, bucket):\n    evolve_csv, results_csv, evolve_yaml = save_dir / 'evolve.csv', save_dir / 'results.csv', save_dir / 'hyp_evolve.yaml'\n    keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',\n            'val/box_loss', 'val/obj_loss', 'val/cls_loss') + tuple(hyp.keys())  # [results + hyps]\n    keys = tuple(x.strip() for x in keys)\n    vals = results + tuple(hyp.values())\n    n = len(keys)\n\n    # Download (optional)\n    if bucket:\n        url = f'gs://{bucket}/evolve.csv'\n        if gsutil_getsize(url) > (os.path.getsize(evolve_csv) if os.path.exists(evolve_csv) else 0):\n     
       os.system(f'gsutil cp {url} {save_dir}')  # download evolve.csv if larger than local\n\n    # Log to evolve.csv\n    s = '' if evolve_csv.exists() else (('%20s,' * n % keys).rstrip(',') + '\\n')  # add header\n    with open(evolve_csv, 'a') as f:\n        f.write(s + ('%20.5g,' * n % vals).rstrip(',') + '\\n')\n\n    # Print to screen\n    print(colorstr('evolve: ') + ', '.join(f'{x.strip():>20s}' for x in keys))\n    print(colorstr('evolve: ') + ', '.join(f'{x:20.5g}' for x in vals), end='\\n\\n\\n')\n\n    # Save yaml\n    with open(evolve_yaml, 'w') as f:\n        data = pd.read_csv(evolve_csv)\n        data = data.rename(columns=lambda x: x.strip())  # strip keys\n        i = np.argmax(fitness(data.values[:, :7]))  #\n        f.write('# YOLOv5 Hyperparameter Evolution Results\\n' +\n                f'# Best generation: {i}\\n' +\n                f'# Last generation: {len(data) - 1}\\n' +\n                '# ' + ', '.join(f'{x.strip():>20s}' for x in keys[:7]) + '\\n' +\n                '# ' + ', '.join(f'{x:>20.5g}' for x in data.values[i, :7]) + '\\n\\n')\n        yaml.safe_dump(hyp, f, sort_keys=False)\n\n    if bucket:\n        os.system(f'gsutil cp {evolve_csv} {evolve_yaml} gs://{bucket}')  # upload\n\n\ndef apply_classifier(x, model, img, im0):\n    # Apply a second stage classifier to YOLO outputs\n    # Example model = torchvision.models.__dict__['efficientnet_b0'](pretrained=True).to(device).eval()\n    im0 = [im0] if isinstance(im0, np.ndarray) else im0\n    for i, d in enumerate(x):  # per image\n        if d is not None and len(d):\n            d = d.clone()\n\n            # Reshape and pad cutouts\n            b = xyxy2xywh(d[:, :4])  # boxes\n            b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1)  # rectangle to square\n            b[:, 2:] = b[:, 2:] * 1.3 + 30  # pad\n            d[:, :4] = xywh2xyxy(b).long()\n\n            # Rescale boxes from img_size to im0 size\n            scale_coords(img.shape[2:], d[:, :4], im0[i].shape)\n\n            # Classes\n            pred_cls1 = d[:, 5].long()\n            ims = []\n            for j, a in enumerate(d):  # per item\n                cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]\n                im = cv2.resize(cutout, (224, 224))  # BGR\n                # cv2.imwrite('example%i.jpg' % j, cutout)\n\n                im = im[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416\n                im = np.ascontiguousarray(im, dtype=np.float32)  # uint8 to float32\n                im /= 255  # 0 - 255 to 0.0 - 1.0\n                ims.append(im)\n\n            pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1)  # classifier prediction\n            x[i] = x[i][pred_cls1 == pred_cls2]  # retain matching class detections\n\n    return x\n\n\ndef increment_path(path, exist_ok=False, sep='', mkdir=False):\n    # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... 
etc.\n    path = Path(path)  # os-agnostic\n    if path.exists() and not exist_ok:\n        path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')\n        dirs = glob.glob(f\"{path}{sep}*\")  # similar paths\n        matches = [re.search(rf\"%s{sep}(\\d+)\" % path.stem, d) for d in dirs]\n        i = [int(m.groups()[0]) for m in matches if m]  # indices\n        n = max(i) + 1 if i else 2  # increment number\n        path = Path(f\"{path}{sep}{n}{suffix}\")  # increment path\n    if mkdir:\n        path.mkdir(parents=True, exist_ok=True)  # make directory\n    return path\n\n\n# Variables\nNCOLS = 0 if is_docker() else shutil.get_terminal_size().columns  # terminal window size for tqdm\n"
  },
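The box-geometry helpers in utils/general.py above (xyxy2xywh, xywh2xyxy, make_divisible, check_img_size) are easiest to follow with a concrete round trip. A minimal sketch, assuming the YOLOv5 requirements are installed and the repo root is on PYTHONPATH so utils.general imports:

```python
import numpy as np

from utils.general import make_divisible, xywh2xyxy, xyxy2xywh  # assumes repo root on PYTHONPATH

# One corner-format box: x1, y1, x2, y2
boxes_xyxy = np.array([[10.0, 20.0, 110.0, 220.0]])

# Corner -> center format: x center, y center, width, height
boxes_xywh = xyxy2xywh(boxes_xyxy)  # [[60., 120., 100., 200.]]

# Converting back is lossless
assert np.allclose(xywh2xyxy(boxes_xywh), boxes_xyxy)

# check_img_size rounds image sizes up to a multiple of the model stride via make_divisible
print(make_divisible(641, 32))  # 672
```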
  {
    "path": "utils/google_app_engine/Dockerfile",
    "content": "FROM gcr.io/google-appengine/python\n\n# Create a virtualenv for dependencies. This isolates these packages from\n# system-level packages.\n# Use -p python3 or -p python3.7 to select python version. Default is version 2.\nRUN virtualenv /env -p python3\n\n# Setting these environment variables are the same as running\n# source /env/bin/activate.\nENV VIRTUAL_ENV /env\nENV PATH /env/bin:$PATH\n\nRUN apt-get update && apt-get install -y python-opencv\n\n# Copy the application's requirements.txt and run pip to install all\n# dependencies into the virtualenv.\nADD requirements.txt /app/requirements.txt\nRUN pip install -r /app/requirements.txt\n\n# Add the application source code.\nADD . /app\n\n# Run a WSGI server to serve the application. gunicorn must be declared as\n# a dependency in requirements.txt.\nCMD gunicorn -b :$PORT main:app\n"
  },
  {
    "path": "utils/google_app_engine/additional_requirements.txt",
    "content": "# add these requirements in your app on top of the existing ones\npip==21.1\nFlask==1.0.2\ngunicorn==19.9.0\n"
  },
  {
    "path": "utils/google_app_engine/app.yaml",
    "content": "runtime: custom\nenv: flex\n\nservice: yolov5app\n\nliveness_check:\n  initial_delay_sec: 600\n\nmanual_scaling:\n  instances: 1\nresources:\n  cpu: 1\n  memory_gb: 4\n  disk_size_gb: 20\n"
  },
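The Dockerfile's final `CMD gunicorn -b :$PORT main:app` expects a module named main that exposes a WSGI app object, which is not included in this excerpt. A hypothetical minimal main.py, shown only to illustrate what gunicorn imports (the route and message are placeholders, and Flask/gunicorn come from additional_requirements.txt):

```python
# main.py -- hypothetical minimal WSGI app for the App Engine service above; not part of the repo excerpt.
from flask import Flask

app = Flask(__name__)  # gunicorn imports this module-level `app` object via `main:app`


@app.route('/')
def index():
    # Trivial handler so the flex-environment health checks receive a 200 response.
    return 'YOLOv5 app is running'


if __name__ == '__main__':
    # Local debugging only; on App Engine the container runs gunicorn instead.
    app.run(host='127.0.0.1', port=8080, debug=True)
```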
  {
    "path": "utils/loggers/__init__.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nLogging utils\n\"\"\"\n\nimport os\nimport warnings\nfrom threading import Thread\n\nimport pkg_resources as pkg\nimport torch\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom utils.general import colorstr, emojis\nfrom utils.loggers.wandb.wandb_utils import WandbLogger\nfrom utils.plots import plot_images, plot_results\nfrom utils.torch_utils import de_parallel\n\nLOGGERS = ('csv', 'tb', 'wandb')  # text-file, TensorBoard, Weights & Biases\nRANK = int(os.getenv('RANK', -1))\n\ntry:\n    import wandb\n\n    assert hasattr(wandb, '__version__')  # verify package import not local dir\n    if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in [0, -1]:\n        try:\n            wandb_login_success = wandb.login(timeout=30)\n        except wandb.errors.UsageError:  # known non-TTY terminal issue\n            wandb_login_success = False\n        if not wandb_login_success:\n            wandb = None\nexcept (ImportError, AssertionError):\n    wandb = None\n\n\nclass Loggers():\n    # YOLOv5 Loggers class\n    def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS):\n        self.save_dir = save_dir\n        self.weights = weights\n        self.opt = opt\n        self.hyp = hyp\n        self.logger = logger  # for printing results to console\n        self.include = include\n        self.keys = ['train/box_loss', 'train/obj_loss', 'train/cls_loss',  # train loss\n                     'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',  # metrics\n                     'val/box_loss', 'val/obj_loss', 'val/cls_loss',  # val loss\n                     'x/lr0', 'x/lr1', 'x/lr2']  # params\n        for k in LOGGERS:\n            setattr(self, k, None)  # init empty logger dictionary\n        self.csv = True  # always log to csv\n\n        # Message\n        if not wandb:\n            prefix = colorstr('Weights & Biases: ')\n            s = f\"{prefix}run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs (RECOMMENDED)\"\n            print(emojis(s))\n\n        # TensorBoard\n        s = self.save_dir\n        if 'tb' in self.include and not self.opt.evolve:\n            prefix = colorstr('TensorBoard: ')\n            self.logger.info(f\"{prefix}Start with 'tensorboard --logdir {s.parent}', view at http://localhost:6006/\")\n            self.tb = SummaryWriter(str(s))\n\n        # W&B\n        if wandb and 'wandb' in self.include:\n            wandb_artifact_resume = isinstance(self.opt.resume, str) and self.opt.resume.startswith('wandb-artifact://')\n            run_id = torch.load(self.weights).get('wandb_id') if self.opt.resume and not wandb_artifact_resume else None\n            self.opt.hyp = self.hyp  # add hyperparameters\n            self.wandb = WandbLogger(self.opt, run_id)\n        else:\n            self.wandb = None\n\n    def on_pretrain_routine_end(self):\n        # Callback runs on pre-train routine end\n        paths = self.save_dir.glob('*labels*.jpg')  # training labels\n        if self.wandb:\n            self.wandb.log({\"Labels\": [wandb.Image(str(x), caption=x.name) for x in paths]})\n\n    def on_train_batch_end(self, ni, model, imgs, targets, paths, plots, sync_bn):\n        # Callback runs on train batch end\n        if plots:\n            if ni == 0:\n                if not sync_bn:  # tb.add_graph() --sync known issue https://github.com/ultralytics/yolov5/issues/3754\n              
      with warnings.catch_warnings():\n                        warnings.simplefilter('ignore')  # suppress jit trace warning\n                        self.tb.add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), [])\n            if ni < 3:\n                f = self.save_dir / f'train_batch{ni}.jpg'  # filename\n                Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()\n            if self.wandb and ni == 10:\n                files = sorted(self.save_dir.glob('train*.jpg'))\n                self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]})\n\n    def on_train_epoch_end(self, epoch):\n        # Callback runs on train epoch end\n        if self.wandb:\n            self.wandb.current_epoch = epoch + 1\n\n    def on_val_image_end(self, pred, predn, path, names, im):\n        # Callback runs on val image end\n        if self.wandb:\n            self.wandb.val_one_image(pred, predn, path, names, im)\n\n    def on_val_end(self):\n        # Callback runs on val end\n        if self.wandb:\n            files = sorted(self.save_dir.glob('val*.jpg'))\n            self.wandb.log({\"Validation\": [wandb.Image(str(f), caption=f.name) for f in files]})\n\n    def on_fit_epoch_end(self, vals, epoch, best_fitness, fi):\n        # Callback runs at the end of each fit (train+val) epoch\n        x = {k: v for k, v in zip(self.keys, vals)}  # dict\n        if self.csv:\n            file = self.save_dir / 'results.csv'\n            n = len(x) + 1  # number of cols\n            s = '' if file.exists() else (('%20s,' * n % tuple(['epoch'] + self.keys)).rstrip(',') + '\\n')  # add header\n            with open(file, 'a') as f:\n                f.write(s + ('%20.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\\n')\n\n        if self.tb:\n            for k, v in x.items():\n                self.tb.add_scalar(k, v, epoch)\n\n        if self.wandb:\n            self.wandb.log(x)\n            self.wandb.end_epoch(best_result=best_fitness == fi)\n\n    def on_model_save(self, last, epoch, final_epoch, best_fitness, fi):\n        # Callback runs on model save event\n        if self.wandb:\n            if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1:\n                self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi)\n\n    def on_train_end(self, last, best, plots, epoch, results):\n        # Callback runs on training end\n        if plots:\n            plot_results(file=self.save_dir / 'results.csv')  # save results.png\n        files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))]\n        files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()]  # filter\n\n        if self.tb:\n            import cv2\n            for f in files:\n                self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC')\n\n        if self.wandb:\n            self.wandb.log({\"Results\": [wandb.Image(str(f), caption=f.name) for f in files]})\n            # Calling wandb.log. 
TODO: Refactor this into WandbLogger.log_model\n            if not self.opt.evolve:\n                wandb.log_artifact(str(best if best.exists() else last), type='model',\n                                   name='run_' + self.wandb.wandb_run.id + '_model',\n                                   aliases=['latest', 'best', 'stripped'])\n                self.wandb.finish_run()\n            else:\n                self.wandb.finish_run()\n                self.wandb = WandbLogger(self.opt)\n"
  },
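A sketch of how a training loop might drive the Loggers callbacks above, restricted to the CSV backend so it runs without TensorBoard or W&B configured. The Namespace fields and metric values are illustrative placeholders; the YOLOv5 requirements are assumed to be installed with the repo root on PYTHONPATH:

```python
import logging
from argparse import Namespace
from pathlib import Path

from utils.loggers import Loggers  # assumes repo root on PYTHONPATH and requirements installed

save_dir = Path('runs/train/exp')
save_dir.mkdir(parents=True, exist_ok=True)
opt = Namespace(evolve=False, resume=False, save_period=-1)  # only the fields this sketch touches

loggers = Loggers(save_dir=save_dir, weights='yolov5s.pt', opt=opt, hyp={},
                  logger=logging.getLogger(__name__), include=('csv',))

for epoch in range(3):
    loggers.on_train_epoch_end(epoch)  # no-op here because W&B is not active
    vals = [0.05, 0.03, 0.01,          # train losses (box, obj, cls)
            0.60, 0.50, 0.55, 0.35,    # precision, recall, mAP@0.5, mAP@0.5:0.95
            0.04, 0.02, 0.01,          # val losses
            0.01, 0.01, 0.01]          # learning rates -- 13 values matching self.keys
    loggers.on_fit_epoch_end(vals, epoch, best_fitness=0.35, fi=0.35)  # appends one row to results.csv
```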
  {
    "path": "utils/loggers/wandb/README.md",
    "content": "📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.\n* [About Weights & Biases](#about-weights-&-biases)\n* [First-Time Setup](#first-time-setup)\n* [Viewing runs](#viewing-runs)\n* [Disabling wandb](#disabling-wandb)\n* [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage)\n* [Reports: Share your work with the world!](#reports)\n\n## About Weights & Biases\nThink of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models — architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.\n\nUsed by top researchers including teams at OpenAI, Lyft, Github, and MILA, W&B is part of the new standard of best practices for machine learning. How W&B can help you optimize your machine learning workflows:\n\n * [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time\n * [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically\n * [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization\n * [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators\n * [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently\n * [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models\n\n## First-Time Setup\n<details open>\n <summary> Toggle Details </summary>\nWhen you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device.\n\nW&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run will be provided a unique run **name** within that project as project/name. You can also manually set your project and run name as:\n\n ```shell\n $ python train.py --project ... --name ...\n ```\n\nYOLOv5 notebook example: <a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a> <a href=\"https://www.kaggle.com/ultralytics/yolov5\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n<img width=\"960\" alt=\"Screen Shot 2021-09-29 at 10 23 13 PM\" src=\"https://user-images.githubusercontent.com/26833433/135392431-1ab7920a-c49d-450a-b0b0-0c86ec86100e.png\">\n\n\n </details>\n\n## Viewing Runs\n<details open>\n  <summary> Toggle Details </summary>\nRun information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in <b>realtime</b> . 
All important information is logged:\n\n * Training & Validation losses\n * Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95\n * Learning Rate over time\n * A bounding box debugging panel, showing the training progress over time\n * GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage**\n * System: Disk I/O, CPU utilization, RAM memory usage\n * Your trained model as W&B Artifact\n * Environment: OS and Python types, Git repository and state, **training command**\n\n<p align=\"center\"><img width=\"900\" alt=\"Weights & Biases dashboard\" src=\"https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png\"></p>\n</details>\n\n ## Disabling wandb\n* Training after running `wandb disabled` inside that directory creates no wandb run\n![Screenshot (84)](https://user-images.githubusercontent.com/15766192/143441777-c780bdd7-7cb4-4404-9559-b4316030a985.png)\n\n* To enable wandb again, run `wandb online`\n![Screenshot (85)](https://user-images.githubusercontent.com/15766192/143441866-7191b2cb-22f0-4e0f-ae64-2dc47dc13078.png)\n\n## Advanced Usage\nYou can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.\n<details open>\n <h3> 1: Train and Log Evaluation simultaneously </h3>\n   This is an extension of the previous section, but it will also start training after uploading the dataset. <b>This also logs an Evaluation Table.</b>\n   The Evaluation Table compares your predictions and ground truths across the validation set for each epoch. It uses the references to the already uploaded datasets,\n   so no images will be uploaded from your system more than once.\n <details open>\n  <summary> <b>Usage</b> </summary>\n   <b>Code</b> <code> $ python train.py --upload_data val</code>\n\n![Screenshot from 2021-11-21 17-40-06](https://user-images.githubusercontent.com/15766192/142761183-c1696d8c-3f38-45ab-991a-bb0dfd98ae7d.png)\n </details>\n\n <h3>2. Visualize and Version Datasets</h3>\n Log, visualize, dynamically query, and understand your data with <a href='https://docs.wandb.ai/guides/data-vis/tables'>W&B Tables</a>. You can use the following command to log your dataset as a W&B Table. This will generate a <code>{dataset}_wandb.yaml</code> file which can be used to train from the dataset artifact.\n <details>\n  <summary> <b>Usage</b> </summary>\n   <b>Code</b> <code> $ python utils/loggers/wandb/log_dataset.py --project ... --name ... --data .. </code>\n\n ![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png)\n </details>\n\n <h3> 3: Train using dataset artifact </h3>\n   When you upload a dataset as described in the first section, you get a new config file with an added `_wandb` to its name. This file contains the information that\n   can be used to train a model directly from the dataset artifact. <b>This also logs evaluation.</b>\n <details>\n  <summary> <b>Usage</b> </summary>\n   <b>Code</b> <code> $ python train.py --data {data}_wandb.yaml </code>\n\n![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)\n </details>\n\n   <h3> 4: Save model checkpoints as artifacts </h3>\n  To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` represents the checkpoint interval.\n  You can also log both the dataset and model checkpoints simultaneously. 
If not passed, only the final model will be logged.\n\n <details>\n  <summary> <b>Usage</b> </summary>\n   <b>Code</b> <code> $ python train.py --save_period 1 </code>\n\n![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png)\n </details>\n\n</details>\n\n <h3> 5: Resume runs from checkpoint artifacts. </h3>\nAny run can be resumed using artifacts if the <code>--resume</code> argument starts with the <code>wandb-artifact://</code> prefix followed by the run path, i.e. <code>wandb-artifact://username/project/runid</code>. This doesn't require the model checkpoint to be present on the local system.\n\n <details>\n  <summary> <b>Usage</b> </summary>\n   <b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>\n\n![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)\n </details>\n\n  <h3> 6: Resume runs from dataset artifact & checkpoint artifacts. </h3>\n <b> Local dataset or model checkpoints are not required. This can be used to resume runs directly on a different device. </b>\n The syntax is the same as in the previous section, but you'll need to log both the dataset and model checkpoints as artifacts, i.e. set <code>--upload_dataset</code> or\n train from a <code>_wandb.yaml</code> file, and set <code>--save_period</code>.\n\n <details>\n  <summary> <b>Usage</b> </summary>\n   <b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>\n\n![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)\n </details>\n\n</details>\n\n <h3> Reports </h3>\nW&B Reports can be created from your saved runs for sharing online. Once a report is created you will receive a link you can use to publicly share your results. Here is an example report created from the COCO128 tutorial trainings of all four YOLOv5 models ([link](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY)).\n\n<img width=\"900\" alt=\"Weights & Biases Reports\" src=\"https://user-images.githubusercontent.com/26833433/135394029-a17eaf86-c6c1-4b1d-bb80-b90e83aaffa7.png\">\n\n\n## Environments\n\nYOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n\n- **Google Colab and Kaggle** notebooks with free GPU: <a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a> <a href=\"https://www.kaggle.com/ultralytics/yolov5\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open In Kaggle\"></a>\n- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)\n- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)\n- **Docker Image**. 
See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href=\"https://hub.docker.com/r/ultralytics/yolov5\"><img src=\"https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker\" alt=\"Docker Pulls\"></a>\n\n\n## Status\n\n![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)\n\nIf this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), validation ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\n"
  },
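Section 5 of the README above resumes training from a `wandb-artifact://entity/project/run_id` path; get_run_info in wandb_utils.py (further below) is what decomposes that path. A small sketch with placeholder entity, project, and run id values, assuming the YOLOv5 requirements are installed and the repo root is on PYTHONPATH:

```python
from utils.loggers.wandb.wandb_utils import get_run_info  # assumes repo root on PYTHONPATH

# Placeholder run path of the form wandb-artifact://entity/project/run_id
entity, project, run_id, model_artifact_name = get_run_info('wandb-artifact://username/YOLOv5/3abc12de')
print(entity, project, run_id, model_artifact_name)  # username YOLOv5 3abc12de run_3abc12de_model
```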
  {
    "path": "utils/loggers/wandb/__init__.py",
    "content": ""
  },
  {
    "path": "utils/loggers/wandb/log_dataset.py",
    "content": "import argparse\n\nfrom wandb_utils import WandbLogger\n\nfrom utils.general import LOGGER\n\nWANDB_ARTIFACT_PREFIX = 'wandb-artifact://'\n\n\ndef create_dataset_artifact(opt):\n    logger = WandbLogger(opt, None, job_type='Dataset Creation')  # TODO: return value unused\n    if not logger.wandb:\n        LOGGER.info(\"install wandb using `pip install wandb` to log the dataset\")\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')\n    parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')\n    parser.add_argument('--project', type=str, default='YOLOv5', help='name of W&B Project')\n    parser.add_argument('--entity', default=None, help='W&B entity')\n    parser.add_argument('--name', type=str, default='log dataset', help='name of W&B run')\n\n    opt = parser.parse_args()\n    opt.resume = False  # Explicitly disallow resume check for dataset upload job\n\n    create_dataset_artifact(opt)\n"
  },
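log_dataset.py is normally run from the command line, e.g. `python utils/loggers/wandb/log_dataset.py --data data/coco128.yaml`. A rough programmatic equivalent with placeholder arguments, assuming wandb is installed, the command is run from the repo root, and the script's own sys.path behavior is reproduced:

```python
import sys
from argparse import Namespace

sys.path.append('utils/loggers/wandb')  # mirrors running the script directly, so `from wandb_utils import ...` resolves
from log_dataset import create_dataset_artifact

opt = Namespace(data='data/coco128.yaml', single_cls=False, project='YOLOv5',
                entity=None, name='log dataset', resume=False)  # mirrors the argparse defaults above
create_dataset_artifact(opt)  # uploads the dataset as a W&B artifact, or logs a hint if wandb is missing
```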
  {
    "path": "utils/loggers/wandb/sweep.py",
    "content": "import sys\nfrom pathlib import Path\n\nimport wandb\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[3]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\n\nfrom train import parse_opt, train\nfrom utils.callbacks import Callbacks\nfrom utils.general import increment_path\nfrom utils.torch_utils import select_device\n\n\ndef sweep():\n    wandb.init()\n    # Get hyp dict from sweep agent\n    hyp_dict = vars(wandb.config).get(\"_items\")\n\n    # Workaround: get necessary opt args\n    opt = parse_opt(known=True)\n    opt.batch_size = hyp_dict.get(\"batch_size\")\n    opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve))\n    opt.epochs = hyp_dict.get(\"epochs\")\n    opt.nosave = True\n    opt.data = hyp_dict.get(\"data\")\n    opt.weights = str(opt.weights)\n    opt.cfg = str(opt.cfg)\n    opt.data = str(opt.data)\n    opt.hyp = str(opt.hyp)\n    opt.project = str(opt.project)\n    device = select_device(opt.device, batch_size=opt.batch_size)\n\n    # train\n    train(hyp_dict, opt, device, callbacks=Callbacks())\n\n\nif __name__ == \"__main__\":\n    sweep()\n"
  },
  {
    "path": "utils/loggers/wandb/sweep.yaml",
    "content": "# Hyperparameters for training\n# To set range-\n# Provide min and max values as:\n#      parameter:\n#\n#         min: scalar\n#         max: scalar\n# OR\n#\n# Set a specific list of search space-\n#     parameter:\n#         values: [scalar1, scalar2, scalar3...]\n#\n# You can use grid, bayesian and hyperopt search strategy\n# For more info on configuring sweeps visit - https://docs.wandb.ai/guides/sweeps/configuration\n\nprogram: utils/loggers/wandb/sweep.py\nmethod: random\nmetric:\n  name: metrics/mAP_0.5\n  goal: maximize\n\nparameters:\n  # hyperparameters: set either min, max range or values list\n  data:\n    value: \"data/coco128.yaml\"\n  batch_size:\n    values: [64]\n  epochs:\n    values: [10]\n\n  lr0:\n    distribution: uniform\n    min: 1e-5\n    max: 1e-1\n  lrf:\n    distribution: uniform\n    min: 0.01\n    max: 1.0\n  momentum:\n    distribution: uniform\n    min: 0.6\n    max: 0.98\n  weight_decay:\n    distribution: uniform\n    min: 0.0\n    max: 0.001\n  warmup_epochs:\n    distribution: uniform\n    min: 0.0\n    max: 5.0\n  warmup_momentum:\n    distribution: uniform\n    min: 0.0\n    max: 0.95\n  warmup_bias_lr:\n    distribution: uniform\n    min: 0.0\n    max: 0.2\n  box:\n    distribution: uniform\n    min: 0.02\n    max: 0.2\n  cls:\n    distribution: uniform\n    min: 0.2\n    max: 4.0\n  cls_pw:\n    distribution: uniform\n    min: 0.5\n    max: 2.0\n  obj:\n    distribution: uniform\n    min: 0.2\n    max: 4.0\n  obj_pw:\n    distribution: uniform\n    min: 0.5\n    max: 2.0\n  iou_t:\n    distribution: uniform\n    min: 0.1\n    max: 0.7\n  anchor_t:\n    distribution: uniform\n    min: 2.0\n    max: 8.0\n  fl_gamma:\n    distribution: uniform\n    min: 0.0\n    max: 0.1\n  hsv_h:\n    distribution: uniform\n    min: 0.0\n    max: 0.1\n  hsv_s:\n    distribution: uniform\n    min: 0.0\n    max: 0.9\n  hsv_v:\n    distribution: uniform\n    min: 0.0\n    max: 0.9\n  degrees:\n    distribution: uniform\n    min: 0.0\n    max: 45.0\n  translate:\n    distribution: uniform\n    min: 0.0\n    max: 0.9\n  scale:\n    distribution: uniform\n    min: 0.0\n    max: 0.9\n  shear:\n    distribution: uniform\n    min: 0.0\n    max: 10.0\n  perspective:\n    distribution: uniform\n    min: 0.0\n    max: 0.001\n  flipud:\n    distribution: uniform\n    min: 0.0\n    max: 1.0\n  fliplr:\n    distribution: uniform\n    min: 0.0\n    max: 1.0\n  mosaic:\n    distribution: uniform\n    min: 0.0\n    max: 1.0\n  mixup:\n    distribution: uniform\n    min: 0.0\n    max: 1.0\n  copy_paste:\n    distribution: uniform\n    min: 0.0\n    max: 1.0\n"
  },
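sweep.yaml only defines the search space and the program each trial runs (sweep.py); launching the sweep is typically done with `wandb sweep utils/loggers/wandb/sweep.yaml` followed by `wandb agent <sweep_id>`. A minimal programmatic sketch of the registration step, assuming wandb is installed and you are logged in:

```python
import yaml
import wandb

# Load the sweep definition above and register it server-side; wandb.sweep returns the sweep id.
with open('utils/loggers/wandb/sweep.yaml', errors='ignore') as f:
    sweep_config = yaml.safe_load(f)

sweep_id = wandb.sweep(sweep_config, project='YOLOv5')
print(f'Start one agent per GPU/machine with: wandb agent {sweep_id}')  # each agent runs sweep.py per trial
```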
  {
    "path": "utils/loggers/wandb/wandb_utils.py",
    "content": "\"\"\"Utilities and tools for tracking runs with Weights & Biases.\"\"\"\n\nimport logging\nimport os\nimport sys\nfrom contextlib import contextmanager\nfrom pathlib import Path\nfrom typing import Dict\n\nimport yaml\nfrom tqdm import tqdm\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[3]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\n\nfrom utils.datasets import LoadImagesAndLabels, img2label_paths\nfrom utils.general import LOGGER, check_dataset, check_file\n\ntry:\n    import wandb\n\n    assert hasattr(wandb, '__version__')  # verify package import not local dir\nexcept (ImportError, AssertionError):\n    wandb = None\n\nRANK = int(os.getenv('RANK', -1))\nWANDB_ARTIFACT_PREFIX = 'wandb-artifact://'\n\n\ndef remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):\n    return from_string[len(prefix):]\n\n\ndef check_wandb_config_file(data_config_file):\n    wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1))  # updated data.yaml path\n    if Path(wandb_config).is_file():\n        return wandb_config\n    return data_config_file\n\n\ndef check_wandb_dataset(data_file):\n    is_trainset_wandb_artifact = False\n    is_valset_wandb_artifact = False\n    if check_file(data_file) and data_file.endswith('.yaml'):\n        with open(data_file, errors='ignore') as f:\n            data_dict = yaml.safe_load(f)\n        is_trainset_wandb_artifact = (isinstance(data_dict['train'], str) and\n                                      data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX))\n        is_valset_wandb_artifact = (isinstance(data_dict['val'], str) and\n                                    data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX))\n    if is_trainset_wandb_artifact or is_valset_wandb_artifact:\n        return data_dict\n    else:\n        return check_dataset(data_file)\n\n\ndef get_run_info(run_path):\n    run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX))\n    run_id = run_path.stem\n    project = run_path.parent.stem\n    entity = run_path.parent.parent.stem\n    model_artifact_name = 'run_' + run_id + '_model'\n    return entity, project, run_id, model_artifact_name\n\n\ndef check_wandb_resume(opt):\n    process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None\n    if isinstance(opt.resume, str):\n        if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):\n            if RANK not in [-1, 0]:  # For resuming DDP runs\n                entity, project, run_id, model_artifact_name = get_run_info(opt.resume)\n                api = wandb.Api()\n                artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest')\n                modeldir = artifact.download()\n                opt.weights = str(Path(modeldir) / \"last.pt\")\n            return True\n    return None\n\n\ndef process_wandb_config_ddp_mode(opt):\n    with open(check_file(opt.data), errors='ignore') as f:\n        data_dict = yaml.safe_load(f)  # data dict\n    train_dir, val_dir = None, None\n    if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX):\n        api = wandb.Api()\n        train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias)\n        train_dir = train_artifact.download()\n        train_path = Path(train_dir) / 'data/images/'\n        data_dict['train'] = str(train_path)\n\n    if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX):\n        
api = wandb.Api()\n        val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias)\n        val_dir = val_artifact.download()\n        val_path = Path(val_dir) / 'data/images/'\n        data_dict['val'] = str(val_path)\n    if train_dir or val_dir:\n        ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml')\n        with open(ddp_data_path, 'w') as f:\n            yaml.safe_dump(data_dict, f)\n        opt.data = ddp_data_path\n\n\nclass WandbLogger():\n    \"\"\"Log training runs, datasets, models, and predictions to Weights & Biases.\n\n    This logger sends information to W&B at wandb.ai. By default, this information\n    includes hyperparameters, system configuration and metrics, model metrics,\n    and basic data metrics and analyses.\n\n    By providing additional command line arguments to train.py, datasets,\n    models and predictions can also be logged.\n\n    For more on how this logger is used, see the Weights & Biases documentation:\n    https://docs.wandb.com/guides/integrations/yolov5\n    \"\"\"\n\n    def __init__(self, opt, run_id=None, job_type='Training'):\n        \"\"\"\n        - Initialize WandbLogger instance\n        - Upload dataset if opt.upload_dataset is True\n        - Setup trainig processes if job_type is 'Training'\n\n        arguments:\n        opt (namespace) -- Commandline arguments for this run\n        run_id (str) -- Run ID of W&B run to be resumed\n        job_type (str) -- To set the job_type for this run\n\n       \"\"\"\n        # Pre-training routine --\n        self.job_type = job_type\n        self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run\n        self.val_artifact, self.train_artifact = None, None\n        self.train_artifact_path, self.val_artifact_path = None, None\n        self.result_artifact = None\n        self.val_table, self.result_table = None, None\n        self.bbox_media_panel_images = []\n        self.val_table_path_map = None\n        self.max_imgs_to_log = 16\n        self.wandb_artifact_data_dict = None\n        self.data_dict = None\n        # It's more elegant to stick to 1 wandb.init call,\n        #  but useful config data is overwritten in the WandbLogger's wandb.init call\n        if isinstance(opt.resume, str):  # checks resume from artifact\n            if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):\n                entity, project, run_id, model_artifact_name = get_run_info(opt.resume)\n                model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name\n                assert wandb, 'install wandb to resume wandb runs'\n                # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config\n                self.wandb_run = wandb.init(id=run_id,\n                                            project=project,\n                                            entity=entity,\n                                            resume='allow',\n                                            allow_val_change=True)\n                opt.resume = model_artifact_name\n        elif self.wandb:\n            self.wandb_run = wandb.init(config=opt,\n                                        resume=\"allow\",\n                                        project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem,\n                                        entity=opt.entity,\n                                        name=opt.name if opt.name != 'exp' else None,\n                                        job_type=job_type,\n                   
                     id=run_id,\n                                        allow_val_change=True) if not wandb.run else wandb.run\n        if self.wandb_run:\n            if self.job_type == 'Training':\n                if opt.upload_dataset:\n                    if not opt.resume:\n                        self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt)\n\n                if opt.resume:\n                    # resume from artifact\n                    if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX):\n                        self.data_dict = dict(self.wandb_run.config.data_dict)\n                    else:  # local resume\n                        self.data_dict = check_wandb_dataset(opt.data)\n                else:\n                    self.data_dict = check_wandb_dataset(opt.data)\n                    self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict\n\n                    # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming.\n                    self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict},\n                                                 allow_val_change=True)\n                self.setup_training(opt)\n\n            if self.job_type == 'Dataset Creation':\n                self.wandb_run.config.update({\"upload_dataset\": True})\n                self.data_dict = self.check_and_upload_dataset(opt)\n\n    def check_and_upload_dataset(self, opt):\n        \"\"\"\n        Check if the dataset format is compatible and upload it as a W&B artifact\n\n        arguments:\n        opt (namespace) -- Commandline arguments for current run\n\n        returns:\n        Updated dataset info dictionary where local dataset paths are replaced by WANDB_ARTIFACT_PREFIX links.\n        \"\"\"\n        assert wandb, 'Install wandb to upload dataset'\n        config_path = self.log_dataset_artifact(opt.data,\n                                                opt.single_cls,\n                                                'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem)\n        with open(config_path, errors='ignore') as f:\n            wandb_data_dict = yaml.safe_load(f)\n        return wandb_data_dict\n\n    def setup_training(self, opt):\n        \"\"\"\n        Setup the necessary processes for training YOLO models:\n          - Attempt to download model checkpoint and dataset artifacts if opt.resume starts with WANDB_ARTIFACT_PREFIX\n          - Update data_dict to contain info of the previous run if resumed, and the paths of the dataset artifact if downloaded\n          - Setup log_dict, initialize bbox_interval\n\n        arguments:\n        opt (namespace) -- commandline arguments for this run\n\n        \"\"\"\n        self.log_dict, self.current_epoch = {}, 0\n        self.bbox_interval = opt.bbox_interval\n        if isinstance(opt.resume, str):\n            modeldir, _ = self.download_model_artifact(opt)\n            if modeldir:\n                self.weights = Path(modeldir) / \"last.pt\"\n                config = self.wandb_run.config\n                opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp = str(\n                    self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs, \\\n                                                                                                       config.hyp\n        data_dict = self.data_dict\n        if self.val_artifact is None:  # If 
--upload_dataset is set, use the existing artifact, don't download\n            self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'),\n                                                                                           opt.artifact_alias)\n            self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(data_dict.get('val'),\n                                                                                       opt.artifact_alias)\n\n        if self.train_artifact_path is not None:\n            train_path = Path(self.train_artifact_path) / 'data/images/'\n            data_dict['train'] = str(train_path)\n        if self.val_artifact_path is not None:\n            val_path = Path(self.val_artifact_path) / 'data/images/'\n            data_dict['val'] = str(val_path)\n\n        if self.val_artifact is not None:\n            self.result_artifact = wandb.Artifact(\"run_\" + wandb.run.id + \"_progress\", \"evaluation\")\n            columns = [\"epoch\", \"id\", \"ground truth\", \"prediction\"]\n            columns.extend(self.data_dict['names'])\n            self.result_table = wandb.Table(columns)\n            self.val_table = self.val_artifact.get(\"val\")\n            if self.val_table_path_map is None:\n                self.map_val_table_path()\n        if opt.bbox_interval == -1:\n            self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1\n        train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None\n        # Update the data_dict to point to local artifacts dir\n        if train_from_artifact:\n            self.data_dict = data_dict\n\n    def download_dataset_artifact(self, path, alias):\n        \"\"\"\n        download the dataset artifact if the path starts with WANDB_ARTIFACT_PREFIX\n\n        arguments:\n        path -- path of the dataset to be used for training\n        alias (str) -- alias of the artifact to be downloaded/used for training\n\n        returns:\n        (str, wandb.Artifact) -- path of the downloaded dataset and its corresponding artifact object if the dataset\n        is found, otherwise returns (None, None)\n        \"\"\"\n        if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):\n            artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + \":\" + alias)\n            dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace(\"\\\\\", \"/\"))\n            assert dataset_artifact is not None, \"'Error: W&B dataset artifact doesn\\'t exist'\"\n            datadir = dataset_artifact.download()\n            return datadir, dataset_artifact\n        return None, None\n\n    def download_model_artifact(self, opt):\n        \"\"\"\n        download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX\n\n        arguments:\n        opt (namespace) -- Commandline arguments for this run\n        \"\"\"\n        if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):\n            model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + \":latest\")\n            assert model_artifact is not None, 'Error: W&B model artifact doesn\\'t exist'\n            modeldir = model_artifact.download()\n            epochs_trained = model_artifact.metadata.get('epochs_trained')\n            total_epochs = model_artifact.metadata.get('total_epochs')\n            is_finished = total_epochs is None\n            assert not is_finished, 'training is finished, can only resume incomplete runs.'\n            return modeldir, model_artifact\n        return None, None\n\n    def log_model(self, path, opt, epoch, fitness_score, best_model=False):\n        \"\"\"\n        Log the model checkpoint as a W&B artifact\n\n        arguments:\n        path (Path)   -- Path of directory containing the checkpoints\n        opt (namespace) -- Command line arguments for this run\n        epoch (int)  -- Current epoch number\n        fitness_score (float) -- fitness score for current epoch\n        best_model (boolean) -- Boolean representing if the current checkpoint is the best yet.\n        \"\"\"\n        model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', type='model', metadata={\n            'original_url': str(path),\n            'epochs_trained': epoch + 1,\n            'save period': opt.save_period,\n            'project': opt.project,\n            'total_epochs': opt.epochs,\n            'fitness_score': fitness_score\n        })\n        model_artifact.add_file(str(path / 'last.pt'), name='last.pt')\n        wandb.log_artifact(model_artifact,\n                           aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])\n        LOGGER.info(f\"Saving model artifact on epoch {epoch + 1}\")\n\n    def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):\n        \"\"\"\n        Log the dataset as a W&B artifact and return the new data file with W&B links\n\n        arguments:\n        data_file (str) -- the .yaml file with information about the dataset (path, classes, etc.)\n        single_cls (boolean) -- train multi-class data as single-class\n        project (str) -- project name. Used to construct the artifact path\n        overwrite_config (boolean) -- overwrites the data.yaml file if set to True, otherwise creates a new\n        file with the _wandb postfix, e.g. data_wandb.yaml\n\n        returns:\n        the new .yaml file with artifact links. 
it can be used to start training directly from artifacts\n        \"\"\"\n        upload_dataset = self.wandb_run.config.upload_dataset\n        log_val_only = isinstance(upload_dataset, str) and upload_dataset == 'val'\n        self.data_dict = check_dataset(data_file)  # parse and check\n        data = dict(self.data_dict)\n        nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])\n        names = {k: v for k, v in enumerate(names)}  # to index dictionary\n\n        # log train set\n        if not log_val_only:\n            self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(\n                data['train'], rect=True, batch_size=1), names, name='train') if data.get('train') else None\n            if data.get('train'):\n                data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')\n\n        self.val_artifact = self.create_dataset_table(LoadImagesAndLabels(\n            data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None\n        if data.get('val'):\n            data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')\n\n        path = Path(data_file)\n        # create a _wandb.yaml file with artifacts links if both train and test set are logged\n        if not log_val_only:\n            path = (path.stem if overwrite_config else path.stem + '_wandb') + '.yaml'  # updated data.yaml path\n            path = Path('data') / path\n            data.pop('download', None)\n            data.pop('path', None)\n            with open(path, 'w') as f:\n                yaml.safe_dump(data, f)\n                LOGGER.info(f\"Created dataset config file {path}\")\n\n        if self.job_type == 'Training':  # builds correct artifact pipeline graph\n            if not log_val_only:\n                self.wandb_run.log_artifact(\n                    self.train_artifact)  # calling use_artifact downloads the dataset. 
NOT NEEDED!\n            self.wandb_run.use_artifact(self.val_artifact)\n            self.val_artifact.wait()\n            self.val_table = self.val_artifact.get('val')\n            self.map_val_table_path()\n        else:\n            self.wandb_run.log_artifact(self.train_artifact)\n            self.wandb_run.log_artifact(self.val_artifact)\n        return path\n\n    def map_val_table_path(self):\n        \"\"\"\n        Map the validation dataset Table entries, i.e. file name -> its id in the W&B Table.\n        Useful for referencing artifacts for evaluation.\n        \"\"\"\n        self.val_table_path_map = {}\n        LOGGER.info(\"Mapping dataset\")\n        for i, data in enumerate(tqdm(self.val_table.data)):\n            self.val_table_path_map[data[3]] = data[0]\n\n    def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'):\n        \"\"\"\n        Create and return a W&B artifact containing a W&B Table of the dataset.\n\n        arguments:\n        dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table\n        class_to_id -- hash map that maps class ids to labels\n        name -- name of the artifact\n\n        returns:\n        dataset artifact to be logged or used\n        \"\"\"\n        # TODO: Explore multiprocessing to split this loop and run it in parallel | This is essential for speeding up the logging\n        artifact = wandb.Artifact(name=name, type=\"dataset\")\n        img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None\n        img_files = tqdm(dataset.img_files) if not img_files else img_files\n        for img_file in img_files:\n            if Path(img_file).is_dir():\n                artifact.add_dir(img_file, name='data/images')\n                labels_path = 'labels'.join(dataset.path.rsplit('images', 1))\n                artifact.add_dir(labels_path, name='data/labels')\n            else:\n                artifact.add_file(img_file, name='data/images/' + Path(img_file).name)\n                label_file = Path(img2label_paths([img_file])[0])\n                artifact.add_file(str(label_file),\n                                  name='data/labels/' + label_file.name) if label_file.exists() else None\n        table = wandb.Table(columns=[\"id\", \"train_image\", \"Classes\", \"name\"])\n        class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])\n        for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):\n            box_data, img_classes = [], {}\n            for cls, *xywh in labels[:, 1:].tolist():\n                cls = int(cls)\n                box_data.append({\"position\": {\"middle\": [xywh[0], xywh[1]], \"width\": xywh[2], \"height\": xywh[3]},\n                                 \"class_id\": cls,\n                                 \"box_caption\": \"%s\" % (class_to_id[cls])})\n                img_classes[cls] = class_to_id[cls]\n            boxes = {\"ground_truth\": {\"box_data\": box_data, \"class_labels\": class_to_id}}  # inference-space\n            table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()),\n                           Path(paths).name)\n        artifact.add(table, name)\n        return artifact\n\n    def log_training_progress(self, predn, path, names):\n        \"\"\"\n        Build evaluation Table. 
Uses reference from validation dataset table.\n\n        arguments:\n        predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class]\n        path (str): local path of the current evaluation image\n        names (dict(int, str)): hash map that maps class ids to labels\n        \"\"\"\n        class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])\n        box_data = []\n        avg_conf_per_class = [0] * len(self.data_dict['names'])\n        pred_class_count = {}\n        for *xyxy, conf, cls in predn.tolist():\n            if conf >= 0.25:\n                cls = int(cls)\n                box_data.append(\n                    {\"position\": {\"minX\": xyxy[0], \"minY\": xyxy[1], \"maxX\": xyxy[2], \"maxY\": xyxy[3]},\n                     \"class_id\": cls,\n                     \"box_caption\": f\"{names[cls]} {conf:.3f}\",\n                     \"scores\": {\"class_score\": conf},\n                     \"domain\": \"pixel\"})\n                avg_conf_per_class[cls] += conf\n\n                if cls in pred_class_count:\n                    pred_class_count[cls] += 1\n                else:\n                    pred_class_count[cls] = 1\n\n        for pred_class in pred_class_count.keys():\n            avg_conf_per_class[pred_class] = avg_conf_per_class[pred_class] / pred_class_count[pred_class]\n\n        boxes = {\"predictions\": {\"box_data\": box_data, \"class_labels\": names}}  # inference-space\n        id = self.val_table_path_map[Path(path).name]\n        self.result_table.add_data(self.current_epoch,\n                                   id,\n                                   self.val_table.data[id][1],\n                                   wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),\n                                   *avg_conf_per_class\n                                   )\n\n    def val_one_image(self, pred, predn, path, names, im):\n        \"\"\"\n        Log validation data for one image. 
updates the result Table if validation dataset is uploaded and log bbox media panel\n\n        arguments:\n        pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class]\n        predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class]\n        path (str): local path of the current evaluation image\n        \"\"\"\n        if self.val_table and self.result_table:  # Log Table if Val dataset is uploaded as artifact\n            self.log_training_progress(predn, path, names)\n\n        if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0:\n            if self.current_epoch % self.bbox_interval == 0:\n                box_data = [{\"position\": {\"minX\": xyxy[0], \"minY\": xyxy[1], \"maxX\": xyxy[2], \"maxY\": xyxy[3]},\n                             \"class_id\": int(cls),\n                             \"box_caption\": f\"{names[cls]} {conf:.3f}\",\n                             \"scores\": {\"class_score\": conf},\n                             \"domain\": \"pixel\"} for *xyxy, conf, cls in pred.tolist()]\n                boxes = {\"predictions\": {\"box_data\": box_data, \"class_labels\": names}}  # inference-space\n                self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name))\n\n    def log(self, log_dict):\n        \"\"\"\n        save the metrics to the logging dictionary\n\n        arguments:\n        log_dict (Dict) -- metrics/media to be logged in current step\n        \"\"\"\n        if self.wandb_run:\n            for key, value in log_dict.items():\n                self.log_dict[key] = value\n\n    def end_epoch(self, best_result=False):\n        \"\"\"\n        commit the log_dict, model artifacts and Tables to W&B and flush the log_dict.\n\n        arguments:\n        best_result (boolean): Boolean representing if the result of this evaluation is best or not\n        \"\"\"\n        if self.wandb_run:\n            with all_logging_disabled():\n                if self.bbox_media_panel_images:\n                    self.log_dict[\"BoundingBoxDebugger\"] = self.bbox_media_panel_images\n                try:\n                    wandb.log(self.log_dict)\n                except BaseException as e:\n                    LOGGER.info(\n                        f\"An error occurred in wandb logger. The training will proceed without interruption. 
More info\\n{e}\")\n                    self.wandb_run.finish()\n                    self.wandb_run = None\n\n                self.log_dict = {}\n                self.bbox_media_panel_images = []\n            if self.result_artifact:\n                self.result_artifact.add(self.result_table, 'result')\n                wandb.log_artifact(self.result_artifact, aliases=['latest', 'last', 'epoch ' + str(self.current_epoch),\n                                                                  ('best' if best_result else '')])\n\n                wandb.log({\"evaluation\": self.result_table})\n                columns = [\"epoch\", \"id\", \"ground truth\", \"prediction\"]\n                columns.extend(self.data_dict['names'])\n                self.result_table = wandb.Table(columns)\n                self.result_artifact = wandb.Artifact(\"run_\" + wandb.run.id + \"_progress\", \"evaluation\")\n\n    def finish_run(self):\n        \"\"\"\n        Log metrics if any and finish the current W&B run\n        \"\"\"\n        if self.wandb_run:\n            if self.log_dict:\n                with all_logging_disabled():\n                    wandb.log(self.log_dict)\n            wandb.run.finish()\n\n\n@contextmanager\ndef all_logging_disabled(highest_level=logging.CRITICAL):\n    \"\"\" source - https://gist.github.com/simon-weber/7853144\n    A context manager that will prevent any logging messages triggered during the body from being processed.\n    :param highest_level: the maximum logging level in use.\n      This would only need to be changed if a custom level greater than CRITICAL is defined.\n    \"\"\"\n    previous_level = logging.root.manager.disable\n    logging.disable(highest_level)\n    try:\n        yield\n    finally:\n        logging.disable(previous_level)\n"
  },
  {
    "path": "utils/loss.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nLoss functions\n\"\"\"\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.metrics import bbox_iou\nfrom utils.torch_utils import is_parallel\n\n\ndef smooth_BCE(eps=0.1):  # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441\n    # return positive, negative label smoothing BCE targets\n    return 1.0 - 0.5 * eps, 0.5 * eps\n\n\nclass BCEBlurWithLogitsLoss(nn.Module):\n    # BCEwithLogitLoss() with reduced missing label effects.\n    def __init__(self, alpha=0.05):\n        super().__init__()\n        self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none')  # must be nn.BCEWithLogitsLoss()\n        self.alpha = alpha\n\n    def forward(self, pred, true):\n        loss = self.loss_fcn(pred, true)\n        pred = torch.sigmoid(pred)  # prob from logits\n        dx = pred - true  # reduce only missing label effects\n        # dx = (pred - true).abs()  # reduce missing label and false label effects\n        alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))\n        loss *= alpha_factor\n        return loss.mean()\n\n\nclass FocalLoss(nn.Module):\n    # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)\n    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):\n        super().__init__()\n        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()\n        self.gamma = gamma\n        self.alpha = alpha\n        self.reduction = loss_fcn.reduction\n        self.loss_fcn.reduction = 'none'  # required to apply FL to each element\n\n    def forward(self, pred, true):\n        loss = self.loss_fcn(pred, true)\n        # p_t = torch.exp(-loss)\n        # loss *= self.alpha * (1.000001 - p_t) ** self.gamma  # non-zero power for gradient stability\n\n        # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py\n        pred_prob = torch.sigmoid(pred)  # prob from logits\n        p_t = true * pred_prob + (1 - true) * (1 - pred_prob)\n        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)\n        modulating_factor = (1.0 - p_t) ** self.gamma\n        loss *= alpha_factor * modulating_factor\n\n        if self.reduction == 'mean':\n            return loss.mean()\n        elif self.reduction == 'sum':\n            return loss.sum()\n        else:  # 'none'\n            return loss\n\n\nclass QFocalLoss(nn.Module):\n    # Wraps Quality focal loss around existing loss_fcn(), i.e. 
criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)\n    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):\n        super().__init__()\n        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()\n        self.gamma = gamma\n        self.alpha = alpha\n        self.reduction = loss_fcn.reduction\n        self.loss_fcn.reduction = 'none'  # required to apply FL to each element\n\n    def forward(self, pred, true):\n        loss = self.loss_fcn(pred, true)\n\n        pred_prob = torch.sigmoid(pred)  # prob from logits\n        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)\n        modulating_factor = torch.abs(true - pred_prob) ** self.gamma\n        loss *= alpha_factor * modulating_factor\n\n        if self.reduction == 'mean':\n            return loss.mean()\n        elif self.reduction == 'sum':\n            return loss.sum()\n        else:  # 'none'\n            return loss\n\n\nclass ComputeLoss:\n    # Compute losses\n    def __init__(self, model, autobalance=False):\n        self.sort_obj_iou = False\n        device = next(model.parameters()).device  # get model device\n        h = model.hyp  # hyperparameters\n\n        # Define criteria\n        BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))\n        BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))\n\n        # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3\n        self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0))  # positive, negative BCE targets\n\n        # Focal loss\n        g = h['fl_gamma']  # focal loss gamma\n        if g > 0:\n            BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)\n\n        det = model.module.model[-1] if is_parallel(model) else model.model[-1]  # Detect() module\n        self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, 0.02])  # P3-P7\n        self.ssi = list(det.stride).index(16) if autobalance else 0  # stride 16 index\n        self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, 1.0, h, autobalance\n        for k in 'na', 'nc', 'nl', 'anchors':\n            setattr(self, k, getattr(det, k))\n\n    def __call__(self, p, targets):  # predictions, targets, model\n        device = targets.device\n        lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)\n        tcls, tbox, indices, anchors = self.build_targets(p, targets)  # targets\n\n        # Losses\n        for i, pi in enumerate(p):  # layer index, layer predictions\n            b, a, gj, gi = indices[i]  # image, anchor, gridy, gridx\n            tobj = torch.zeros_like(pi[..., 0], device=device)  # target obj\n\n            n = b.shape[0]  # number of targets\n            if n:\n                ps = pi[b, a, gj, gi]  # prediction subset corresponding to targets\n\n                # Regression\n                pxy = ps[:, :2].sigmoid() * 2 - 0.5\n                pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]\n                pbox = torch.cat((pxy, pwh), 1)  # predicted box\n                iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True)  # iou(prediction, target)\n                lbox += (1.0 - iou).mean()  # iou loss\n\n                # Objectness\n                score_iou = iou.detach().clamp(0).type(tobj.dtype)\n                if self.sort_obj_iou:\n                    sort_id = torch.argsort(score_iou)\n                    b, a, gj, gi, score_iou = b[sort_id], 
a[sort_id], gj[sort_id], gi[sort_id], score_iou[sort_id]\n                tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * score_iou  # iou ratio\n\n                # Classification\n                if self.nc > 1:  # cls loss (only if multiple classes)\n                    t = torch.full_like(ps[:, 5:], self.cn, device=device)  # targets\n                    t[range(n), tcls[i]] = self.cp\n                    lcls += self.BCEcls(ps[:, 5:], t)  # BCE\n\n                # Append targets to text file\n                # with open('targets.txt', 'a') as file:\n                #     [file.write('%11.5g ' * 4 % tuple(x) + '\\n') for x in torch.cat((txy[i], twh[i]), 1)]\n\n            obji = self.BCEobj(pi[..., 4], tobj)\n            lobj += obji * self.balance[i]  # obj loss\n            if self.autobalance:\n                self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()\n\n        if self.autobalance:\n            self.balance = [x / self.balance[self.ssi] for x in self.balance]\n        lbox *= self.hyp['box']\n        lobj *= self.hyp['obj']\n        lcls *= self.hyp['cls']\n        bs = tobj.shape[0]  # batch size\n\n        return (lbox + lobj + lcls) * bs, torch.cat((lbox, lobj, lcls)).detach()\n\n    def build_targets(self, p, targets):\n        # Build targets for compute_loss(), input targets(image,class,x,y,w,h)\n        na, nt = self.na, targets.shape[0]  # number of anchors, targets\n        tcls, tbox, indices, anch = [], [], [], []\n        gain = torch.ones(7, device=targets.device)  # normalized to gridspace gain\n        ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt)  # same as .repeat_interleave(nt)\n        targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2)  # append anchor indices\n\n        g = 0.5  # bias\n        off = torch.tensor([[0, 0],\n                            [1, 0], [0, 1], [-1, 0], [0, -1],  # j,k,l,m\n                            # [1, 1], [1, -1], [-1, 1], [-1, -1],  # jk,jm,lk,lm\n                            ], device=targets.device).float() * g  # offsets\n\n        for i in range(self.nl):\n            anchors = self.anchors[i]\n            gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]]  # xyxy gain\n\n            # Match targets to anchors\n            t = targets * gain\n            if nt:\n                # Matches\n                r = t[:, :, 4:6] / anchors[:, None]  # wh ratio\n                j = torch.max(r, 1 / r).max(2)[0] < self.hyp['anchor_t']  # compare\n                # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t']  # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))\n                t = t[j]  # filter\n\n                # Offsets\n                gxy = t[:, 2:4]  # grid xy\n                gxi = gain[[2, 3]] - gxy  # inverse\n                j, k = ((gxy % 1 < g) & (gxy > 1)).T\n                l, m = ((gxi % 1 < g) & (gxi > 1)).T\n                j = torch.stack((torch.ones_like(j), j, k, l, m))\n                t = t.repeat((5, 1, 1))[j]\n                offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]\n            else:\n                t = targets[0]\n                offsets = 0\n\n            # Define\n            b, c = t[:, :2].long().T  # image, class\n            gxy = t[:, 2:4]  # grid xy\n            gwh = t[:, 4:6]  # grid wh\n            gij = (gxy - offsets).long()\n            gi, gj = gij.T  # grid xy indices\n\n            # Append\n            a = t[:, 6].long()  # anchor indices\n            indices.append((b, a, gj.clamp_(0, gain[3] - 1), 
gi.clamp_(0, gain[2] - 1)))  # image, anchor, grid indices\n            tbox.append(torch.cat((gxy - gij, gwh), 1))  # box\n            anch.append(anchors[a])  # anchors\n            tcls.append(c)  # class\n\n        return tcls, tbox, indices, anch\n"
  },
  {
    "path": "utils/metrics.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nModel validation metrics\n\"\"\"\n\nimport math\nimport warnings\nfrom pathlib import Path\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\n\n\ndef fitness(x):\n    # Model fitness as a weighted combination of metrics\n    w = [0.0, 0.0, 0.1, 0.9]  # weights for [P, R, mAP@0.5, mAP@0.5:0.95]\n    return (x[:, :4] * w).sum(1)\n\n\ndef ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16):\n    \"\"\" Compute the average precision, given the recall and precision curves.\n    Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.\n    # Arguments\n        tp:  True positives (nparray, nx1 or nx10).\n        conf:  Objectness value from 0-1 (nparray).\n        pred_cls:  Predicted object classes (nparray).\n        target_cls:  True object classes (nparray).\n        plot:  Plot precision-recall curve at mAP@0.5\n        save_dir:  Plot save directory\n    # Returns\n        The average precision as computed in py-faster-rcnn.\n    \"\"\"\n\n    # Sort by objectness\n    i = np.argsort(-conf)\n    tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]\n\n    # Find unique classes\n    unique_classes, nt = np.unique(target_cls, return_counts=True)\n    nc = unique_classes.shape[0]  # number of classes, number of detections\n\n    # Create Precision-Recall curve and compute AP for each class\n    px, py = np.linspace(0, 1, 1000), []  # for plotting\n    ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))\n    for ci, c in enumerate(unique_classes):\n        i = pred_cls == c\n        n_l = nt[ci]  # number of labels\n        n_p = i.sum()  # number of predictions\n\n        if n_p == 0 or n_l == 0:\n            continue\n        else:\n            # Accumulate FPs and TPs\n            fpc = (1 - tp[i]).cumsum(0)\n            tpc = tp[i].cumsum(0)\n\n            # Recall\n            recall = tpc / (n_l + eps)  # recall curve\n            r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0)  # negative x, xp because xp decreases\n\n            # Precision\n            precision = tpc / (tpc + fpc)  # precision curve\n            p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1)  # p at pr_score\n\n            # AP from recall-precision curve\n            for j in range(tp.shape[1]):\n                ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])\n                if plot and j == 0:\n                    py.append(np.interp(px, mrec, mpre))  # precision at mAP@0.5\n\n    # Compute F1 (harmonic mean of precision and recall)\n    f1 = 2 * p * r / (p + r + eps)\n    names = [v for k, v in names.items() if k in unique_classes]  # list: only classes that have data\n    names = {i: v for i, v in enumerate(names)}  # to dict\n    if plot:\n        plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)\n        plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')\n        plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')\n        plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')\n\n    i = f1.mean(0).argmax()  # max F1 index\n    p, r, f1 = p[:, i], r[:, i], f1[:, i]\n    tp = (r * nt).round()  # true positives\n    fp = (tp / (p + eps) - tp).round()  # false positives\n    return tp, fp, p, r, f1, ap, unique_classes.astype('int32')\n\n\ndef compute_ap(recall, precision):\n    \"\"\" Compute the average precision, given the 
recall and precision curves\n    # Arguments\n        recall:    The recall curve (list)\n        precision: The precision curve (list)\n    # Returns\n        Average precision, precision curve, recall curve\n    \"\"\"\n\n    # Append sentinel values to beginning and end\n    mrec = np.concatenate(([0.0], recall, [1.0]))\n    mpre = np.concatenate(([1.0], precision, [0.0]))\n\n    # Compute the precision envelope\n    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))\n\n    # Integrate area under curve\n    method = 'interp'  # methods: 'continuous', 'interp'\n    if method == 'interp':\n        x = np.linspace(0, 1, 101)  # 101-point interp (COCO)\n        ap = np.trapz(np.interp(x, mrec, mpre), x)  # integrate\n    else:  # 'continuous'\n        i = np.where(mrec[1:] != mrec[:-1])[0]  # points where x axis (recall) changes\n        ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])  # area under curve\n\n    return ap, mpre, mrec\n\n\nclass ConfusionMatrix:\n    # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix\n    def __init__(self, nc, conf=0.25, iou_thres=0.45):\n        self.matrix = np.zeros((nc + 1, nc + 1))\n        self.nc = nc  # number of classes\n        self.conf = conf\n        self.iou_thres = iou_thres\n\n    def process_batch(self, detections, labels):\n        \"\"\"\n        Return intersection-over-union (Jaccard index) of boxes.\n        Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n        Arguments:\n            detections (Array[N, 6]), x1, y1, x2, y2, conf, class\n            labels (Array[M, 5]), class, x1, y1, x2, y2\n        Returns:\n            None, updates confusion matrix accordingly\n        \"\"\"\n        detections = detections[detections[:, 4] > self.conf]\n        gt_classes = labels[:, 0].int()\n        detection_classes = detections[:, 5].int()\n        iou = box_iou(labels[:, 1:], detections[:, :4])\n\n        x = torch.where(iou > self.iou_thres)\n        if x[0].shape[0]:\n            matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()\n            if x[0].shape[0] > 1:\n                matches = matches[matches[:, 2].argsort()[::-1]]\n                matches = matches[np.unique(matches[:, 1], return_index=True)[1]]\n                matches = matches[matches[:, 2].argsort()[::-1]]\n                matches = matches[np.unique(matches[:, 0], return_index=True)[1]]\n        else:\n            matches = np.zeros((0, 3))\n\n        n = matches.shape[0] > 0\n        m0, m1, _ = matches.transpose().astype(np.int16)\n        for i, gc in enumerate(gt_classes):\n            j = m0 == i\n            if n and sum(j) == 1:\n                self.matrix[detection_classes[m1[j]], gc] += 1  # correct\n            else:\n                self.matrix[self.nc, gc] += 1  # background FP\n\n        if n:\n            for i, dc in enumerate(detection_classes):\n                if not any(m1 == i):\n                    self.matrix[dc, self.nc] += 1  # background FN\n\n    def matrix(self):\n        return self.matrix\n\n    def tp_fp(self):\n        tp = self.matrix.diagonal()  # true positives\n        fp = self.matrix.sum(1) - tp  # false positives\n        # fn = self.matrix.sum(0) - tp  # false negatives (missed detections)\n        return tp[:-1], fp[:-1]  # remove background class\n\n    def plot(self, normalize=True, save_dir='', names=()):\n        try:\n            import seaborn as sn\n\n            array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-6) if 
normalize else 1)  # normalize columns\n            array[array < 0.005] = np.nan  # don't annotate (would appear as 0.00)\n\n            fig = plt.figure(figsize=(12, 9), tight_layout=True)\n            sn.set(font_scale=1.0 if self.nc < 50 else 0.8)  # for label size\n            labels = (0 < len(names) < 99) and len(names) == self.nc  # apply names to ticklabels\n            with warnings.catch_warnings():\n                warnings.simplefilter('ignore')  # suppress empty matrix RuntimeWarning: All-NaN slice encountered\n                sn.heatmap(array, annot=self.nc < 30, annot_kws={\"size\": 8}, cmap='Blues', fmt='.2f', square=True,\n                           xticklabels=names + ['background FP'] if labels else \"auto\",\n                           yticklabels=names + ['background FN'] if labels else \"auto\").set_facecolor((1, 1, 1))\n            fig.axes[0].set_xlabel('True')\n            fig.axes[0].set_ylabel('Predicted')\n            fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)\n            plt.close()\n        except Exception as e:\n            print(f'WARNING: ConfusionMatrix plot failure: {e}')\n\n    def print(self):\n        for i in range(self.nc + 1):\n            print(' '.join(map(str, self.matrix[i])))\n\n\ndef bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):\n    # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4\n    box2 = box2.T\n\n    # Get the coordinates of bounding boxes\n    if x1y1x2y2:  # x1, y1, x2, y2 = box1\n        b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]\n        b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]\n    else:  # transform from xywh to xyxy\n        b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2\n        b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2\n        b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2\n        b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2\n\n    # Intersection area\n    inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \\\n            (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)\n\n    # Union Area\n    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps\n    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps\n    union = w1 * h1 + w2 * h2 - inter + eps\n\n    iou = inter / union\n    if GIoU or DIoU or CIoU:\n        cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1)  # convex (smallest enclosing box) width\n        ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1)  # convex height\n        if CIoU or DIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1\n            c2 = cw ** 2 + ch ** 2 + eps  # convex diagonal squared\n            rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +\n                    (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4  # center distance squared\n            if DIoU:\n                return iou - rho2 / c2  # DIoU\n            elif CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47\n                v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)\n                with torch.no_grad():\n                    alpha = v / (v - iou + (1 + eps))\n                return iou - (rho2 / c2 + v * alpha)  # CIoU\n        else:  # GIoU https://arxiv.org/pdf/1902.09630.pdf\n            c_area = cw * ch + eps  # convex area\n            return iou - (c_area - union) / c_area  # GIoU\n    else:\n        return iou  # IoU\n\n\ndef box_iou(box1, 
box2):\n    # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py\n    \"\"\"\n    Return intersection-over-union (Jaccard index) of boxes.\n    Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n    Arguments:\n        box1 (Tensor[N, 4])\n        box2 (Tensor[M, 4])\n    Returns:\n        iou (Tensor[N, M]): the NxM matrix containing the pairwise\n            IoU values for every element in boxes1 and boxes2\n    \"\"\"\n\n    def box_area(box):\n        # box = 4xn\n        return (box[2] - box[0]) * (box[3] - box[1])\n\n    area1 = box_area(box1.T)\n    area2 = box_area(box2.T)\n\n    # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)\n    inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)\n    return inter / (area1[:, None] + area2 - inter)  # iou = inter / (area1 + area2 - inter)\n\n\ndef bbox_ioa(box1, box2, eps=1E-7):\n    \"\"\" Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2\n    box1:       np.array of shape(4)\n    box2:       np.array of shape(nx4)\n    returns:    np.array of shape(n)\n    \"\"\"\n\n    box2 = box2.transpose()\n\n    # Get the coordinates of bounding boxes\n    b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]\n    b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]\n\n    # Intersection area\n    inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \\\n                 (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)\n\n    # box2 area\n    box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps\n\n    # Intersection over box2 area\n    return inter_area / box2_area\n\n\ndef wh_iou(wh1, wh2):\n    # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2\n    wh1 = wh1[:, None]  # [N,1,2]\n    wh2 = wh2[None]  # [1,M,2]\n    inter = torch.min(wh1, wh2).prod(2)  # [N,M]\n    return inter / (wh1.prod(2) + wh2.prod(2) - inter)  # iou = inter / (area1 + area2 - inter)\n\n\n# Plots ----------------------------------------------------------------------------------------------------------------\n\ndef plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):\n    # Precision-recall curve\n    fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)\n    py = np.stack(py, axis=1)\n\n    if 0 < len(names) < 21:  # display per-class legend if < 21 classes\n        for i, y in enumerate(py.T):\n            ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}')  # plot(recall, precision)\n    else:\n        ax.plot(px, py, linewidth=1, color='grey')  # plot(recall, precision)\n\n    ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())\n    ax.set_xlabel('Recall')\n    ax.set_ylabel('Precision')\n    ax.set_xlim(0, 1)\n    ax.set_ylim(0, 1)\n    plt.legend(bbox_to_anchor=(1.04, 1), loc=\"upper left\")\n    fig.savefig(Path(save_dir), dpi=250)\n    plt.close()\n\n\ndef plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):\n    # Metric-confidence curve\n    fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)\n\n    if 0 < len(names) < 21:  # display per-class legend if < 21 classes\n        for i, y in enumerate(py):\n            ax.plot(px, y, linewidth=1, label=f'{names[i]}')  # plot(confidence, metric)\n    else:\n        ax.plot(px, py.T, linewidth=1, color='grey')  # plot(confidence, metric)\n\n    y = py.mean(0)\n    ax.plot(px, y, linewidth=3, color='blue', 
label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')\n    ax.set_xlabel(xlabel)\n    ax.set_ylabel(ylabel)\n    ax.set_xlim(0, 1)\n    ax.set_ylim(0, 1)\n    plt.legend(bbox_to_anchor=(1.04, 1), loc=\"upper left\")\n    fig.savefig(Path(save_dir), dpi=250)\n    plt.close()\n"
  },
  {
    "path": "utils/plots.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPlotting utils\n\"\"\"\n\nimport math\nimport os\nfrom copy import copy\nfrom pathlib import Path\n\nimport cv2\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sn\nimport torch\nfrom PIL import Image, ImageDraw, ImageFont\n\nfrom utils.general import (LOGGER, Timeout, check_requirements, clip_coords, increment_path, is_ascii, is_chinese,\n                           try_except, user_config_dir, xywh2xyxy, xyxy2xywh)\nfrom utils.metrics import fitness\n\n# Settings\nCONFIG_DIR = user_config_dir()  # Ultralytics settings dir\nRANK = int(os.getenv('RANK', -1))\nmatplotlib.rc('font', **{'size': 11})\nmatplotlib.use('Agg')  # for writing to files only\n\n\nclass Colors:\n    # Ultralytics color palette https://ultralytics.com/\n    def __init__(self):\n        # hex = matplotlib.colors.TABLEAU_COLORS.values()\n        hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',\n               '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')\n        self.palette = [self.hex2rgb('#' + c) for c in hex]\n        self.n = len(self.palette)\n\n    def __call__(self, i, bgr=False):\n        c = self.palette[int(i) % self.n]\n        return (c[2], c[1], c[0]) if bgr else c\n\n    @staticmethod\n    def hex2rgb(h):  # rgb order (PIL)\n        return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))\n\n\ncolors = Colors()  # create instance for 'from utils.plots import colors'\n\n\ndef check_font(font='Arial.ttf', size=10):\n    # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary\n    font = Path(font)\n    font = font if font.exists() else (CONFIG_DIR / font.name)\n    try:\n        return ImageFont.truetype(str(font) if font.exists() else font.name, size)\n    except Exception as e:  # download if missing\n        url = \"https://ultralytics.com/assets/\" + font.name\n        print(f'Downloading {url} to {font}...')\n        torch.hub.download_url_to_file(url, str(font), progress=False)\n        try:\n            return ImageFont.truetype(str(font), size)\n        except TypeError:\n            check_requirements('Pillow>=8.4.0')  # known issue https://github.com/ultralytics/yolov5/issues/5374\n\n\nclass Annotator:\n    if RANK in (-1, 0):\n        check_font()  # download TTF if necessary\n\n    # YOLOv5 Annotator for train/val mosaics and jpgs and detect/hub inference annotations\n    def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=False, example='abc'):\n        assert im.data.contiguous, 'Image not contiguous. 
Apply np.ascontiguousarray(im) to Annotator() input images.'\n        self.pil = pil or not is_ascii(example) or is_chinese(example)\n        if self.pil:  # use PIL\n            self.im = im if isinstance(im, Image.Image) else Image.fromarray(im)\n            self.draw = ImageDraw.Draw(self.im)\n            self.font = check_font(font='Arial.Unicode.ttf' if is_chinese(example) else font,\n                                   size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12))\n        else:  # use cv2\n            self.im = im\n        self.lw = line_width or max(round(sum(im.shape) / 2 * 0.003), 2)  # line width\n\n    def box_label(self, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)):\n        # Add one xyxy box to image with label\n        if self.pil or not is_ascii(label):\n            self.draw.rectangle(box, width=self.lw, outline=color)  # box\n            if label:\n                w, h = self.font.getsize(label)  # text width, height\n                outside = box[1] - h >= 0  # label fits outside box\n                self.draw.rectangle([box[0],\n                                     box[1] - h if outside else box[1],\n                                     box[0] + w + 1,\n                                     box[1] + 1 if outside else box[1] + h + 1], fill=color)\n                # self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls')  # for PIL>8.0\n                self.draw.text((box[0], box[1] - h if outside else box[1]), label, fill=txt_color, font=self.font)\n        else:  # cv2\n            p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3]))\n            cv2.rectangle(self.im, p1, p2, color, thickness=self.lw, lineType=cv2.LINE_AA)\n            if label:\n                tf = max(self.lw - 1, 1)  # font thickness\n                w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0]  # text width, height\n                outside = p1[1] - h - 3 >= 0  # label fits outside box\n                p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3\n                cv2.rectangle(self.im, p1, p2, color, -1, cv2.LINE_AA)  # filled\n                cv2.putText(self.im, label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), 0, self.lw / 3, txt_color,\n                            thickness=tf, lineType=cv2.LINE_AA)\n\n    def rectangle(self, xy, fill=None, outline=None, width=1):\n        # Add rectangle to image (PIL-only)\n        self.draw.rectangle(xy, fill, outline, width)\n\n    def text(self, xy, text, txt_color=(255, 255, 255)):\n        # Add text to image (PIL-only)\n        w, h = self.font.getsize(text)  # text width, height\n        self.draw.text((xy[0], xy[1] - h + 1), text, fill=txt_color, font=self.font)\n\n    def result(self):\n        # Return annotated image as array\n        return np.asarray(self.im)\n\n\ndef feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detect/exp')):\n    \"\"\"\n    x:              Features to be visualized\n    module_type:    Module type\n    stage:          Module stage within model\n    n:              Maximum number of feature maps to plot\n    save_dir:       Directory to save results\n    \"\"\"\n    if 'Detect' not in module_type:\n        batch, channels, height, width = x.shape  # batch, channels, height, width\n        if height > 1 and width > 1:\n            f = save_dir / f\"stage{stage}_{module_type.split('.')[-1]}_features.png\"  # filename\n\n            blocks = torch.chunk(x[0].cpu(), channels, 
dim=0)  # select batch index 0, block by channels\n            n = min(n, channels)  # number of plots\n            fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True)  # 8 rows x n/8 cols\n            ax = ax.ravel()\n            plt.subplots_adjust(wspace=0.05, hspace=0.05)\n            for i in range(n):\n                ax[i].imshow(blocks[i].squeeze())  # cmap='gray'\n                ax[i].axis('off')\n\n            print(f'Saving {f}... ({n}/{channels})')\n            plt.savefig(f, dpi=300, bbox_inches='tight')\n            plt.close()\n            np.save(str(f.with_suffix('.npy')), x[0].cpu().numpy())  # npy save\n\n\ndef hist2d(x, y, n=100):\n    # 2d histogram used in labels.png and evolve.png\n    xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n)\n    hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges))\n    xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1)\n    yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1)\n    return np.log(hist[xidx, yidx])\n\n\ndef butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):\n    from scipy.signal import butter, filtfilt\n\n    # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy\n    def butter_lowpass(cutoff, fs, order):\n        nyq = 0.5 * fs\n        normal_cutoff = cutoff / nyq\n        return butter(order, normal_cutoff, btype='low', analog=False)\n\n    b, a = butter_lowpass(cutoff, fs, order=order)\n    return filtfilt(b, a, data)  # forward-backward filter\n\n\ndef output_to_target(output):\n    # Convert model output to target format [batch_id, class_id, x, y, w, h, conf]\n    targets = []\n    for i, o in enumerate(output):\n        for *box, conf, cls in o.cpu().numpy():\n            targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf])\n    return np.array(targets)\n\n\ndef plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=1920, max_subplots=16):\n    # Plot image grid with labels\n    if isinstance(images, torch.Tensor):\n        images = images.cpu().float().numpy()\n    if isinstance(targets, torch.Tensor):\n        targets = targets.cpu().numpy()\n    if np.max(images[0]) <= 1:\n        images *= 255  # de-normalise (optional)\n    bs, _, h, w = images.shape  # batch size, _, height, width\n    bs = min(bs, max_subplots)  # limit plot images\n    ns = np.ceil(bs ** 0.5)  # number of subplots (square)\n\n    # Build Image\n    mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8)  # init\n    for i, im in enumerate(images):\n        if i == max_subplots:  # if last batch has fewer images than we expect\n            break\n        x, y = int(w * (i // ns)), int(h * (i % ns))  # block origin\n        im = im.transpose(1, 2, 0)\n        mosaic[y:y + h, x:x + w, :] = im\n\n    # Resize (optional)\n    scale = max_size / ns / max(h, w)\n    if scale < 1:\n        h = math.ceil(scale * h)\n        w = math.ceil(scale * w)\n        mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h)))\n\n    # Annotate\n    fs = int((h + w) * ns * 0.01)  # font size\n    annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True)\n    for i in range(i + 1):\n        x, y = int(w * (i // ns)), int(h * (i % ns))  # block origin\n        annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2)  # borders\n        if paths:\n            annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 
220))  # filenames\n        if len(targets) > 0:\n            ti = targets[targets[:, 0] == i]  # image targets\n            boxes = xywh2xyxy(ti[:, 2:6]).T\n            classes = ti[:, 1].astype('int')\n            labels = ti.shape[1] == 6  # labels if no conf column\n            conf = None if labels else ti[:, 6]  # check for confidence presence (label vs pred)\n\n            if boxes.shape[1]:\n                if boxes.max() <= 1.01:  # if normalized with tolerance 0.01\n                    boxes[[0, 2]] *= w  # scale to pixels\n                    boxes[[1, 3]] *= h\n                elif scale < 1:  # absolute coords need scale if image scales\n                    boxes *= scale\n            boxes[[0, 2]] += x\n            boxes[[1, 3]] += y\n            for j, box in enumerate(boxes.T.tolist()):\n                cls = classes[j]\n                color = colors(cls)\n                cls = names[cls] if names else cls\n                if labels or conf[j] > 0.25:  # 0.25 conf thresh\n                    label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}'\n                    annotator.box_label(box, label, color=color)\n    annotator.im.save(fname)  # save\n\n\ndef plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):\n    # Plot LR simulating training for full epochs\n    optimizer, scheduler = copy(optimizer), copy(scheduler)  # do not modify originals\n    y = []\n    for _ in range(epochs):\n        scheduler.step()\n        y.append(optimizer.param_groups[0]['lr'])\n    plt.plot(y, '.-', label='LR')\n    plt.xlabel('epoch')\n    plt.ylabel('LR')\n    plt.grid()\n    plt.xlim(0, epochs)\n    plt.ylim(0)\n    plt.savefig(Path(save_dir) / 'LR.png', dpi=200)\n    plt.close()\n\n\ndef plot_val_txt():  # from utils.plots import *; plot_val()\n    # Plot val.txt histograms\n    x = np.loadtxt('val.txt', dtype=np.float32)\n    box = xyxy2xywh(x[:, :4])\n    cx, cy = box[:, 0], box[:, 1]\n\n    fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True)\n    ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0)\n    ax.set_aspect('equal')\n    plt.savefig('hist2d.png', dpi=300)\n\n    fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True)\n    ax[0].hist(cx, bins=600)\n    ax[1].hist(cy, bins=600)\n    plt.savefig('hist1d.png', dpi=200)\n\n\ndef plot_targets_txt():  # from utils.plots import *; plot_targets_txt()\n    # Plot targets.txt histograms\n    x = np.loadtxt('targets.txt', dtype=np.float32).T\n    s = ['x targets', 'y targets', 'width targets', 'height targets']\n    fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)\n    ax = ax.ravel()\n    for i in range(4):\n        ax[i].hist(x[i], bins=100, label=f'{x[i].mean():.3g} +/- {x[i].std():.3g}')\n        ax[i].legend()\n        ax[i].set_title(s[i])\n    plt.savefig('targets.jpg', dpi=200)\n\n\ndef plot_val_study(file='', dir='', x=None):  # from utils.plots import *; plot_val_study()\n    # Plot file=study.txt generated by val.py (or plot all study*.txt in dir)\n    save_dir = Path(file).parent if file else Path(dir)\n    plot2 = False  # plot additional results\n    if plot2:\n        ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel()\n\n    fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)\n    # for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]:\n    for f in sorted(save_dir.glob('study*.txt')):\n        y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T\n        x = 
np.arange(y.shape[1]) if x is None else np.array(x)\n        if plot2:\n            s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_preprocess (ms/img)', 't_inference (ms/img)', 't_NMS (ms/img)']\n            for i in range(7):\n                ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8)\n                ax[i].set_title(s[i])\n\n        j = y[3].argmax() + 1\n        ax2.plot(y[5, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8,\n                 label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))\n\n    ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],\n             'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet')\n\n    ax2.grid(alpha=0.2)\n    ax2.set_yticks(np.arange(20, 60, 5))\n    ax2.set_xlim(0, 57)\n    ax2.set_ylim(25, 55)\n    ax2.set_xlabel('GPU Speed (ms/img)')\n    ax2.set_ylabel('COCO AP val')\n    ax2.legend(loc='lower right')\n    f = save_dir / 'study.png'\n    print(f'Saving {f}...')\n    plt.savefig(f, dpi=300)\n\n\n@try_except  # known issue https://github.com/ultralytics/yolov5/issues/5395\n@Timeout(30)  # known issue https://github.com/ultralytics/yolov5/issues/5611\ndef plot_labels(labels, names=(), save_dir=Path('')):\n    # plot dataset labels\n    LOGGER.info(f\"Plotting labels to {save_dir / 'labels.jpg'}... \")\n    c, b = labels[:, 0], labels[:, 1:].transpose()  # classes, boxes\n    nc = int(c.max() + 1)  # number of classes\n    x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height'])\n\n    # seaborn correlogram\n    sn.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9))\n    plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200)\n    plt.close()\n\n    # matplotlib labels\n    matplotlib.use('svg')  # faster\n    ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()\n    y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)\n    # [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)]  # update colors bug #3195\n    ax[0].set_ylabel('instances')\n    if 0 < len(names) < 30:\n        ax[0].set_xticks(range(len(names)))\n        ax[0].set_xticklabels(names, rotation=90, fontsize=10)\n    else:\n        ax[0].set_xlabel('classes')\n    sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9)\n    sn.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9)\n\n    # rectangles\n    labels[:, 1:3] = 0.5  # center\n    labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000\n    img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255)\n    for cls, *box in labels[:1000]:\n        ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls))  # plot\n    ax[1].imshow(img)\n    ax[1].axis('off')\n\n    for a in [0, 1, 2, 3]:\n        for s in ['top', 'right', 'left', 'bottom']:\n            ax[a].spines[s].set_visible(False)\n\n    plt.savefig(save_dir / 'labels.jpg', dpi=200)\n    matplotlib.use('Agg')\n    plt.close()\n\n\ndef plot_evolve(evolve_csv='path/to/evolve.csv'):  # from utils.plots import *; plot_evolve()\n    # Plot evolve.csv hyp evolution results\n    evolve_csv = Path(evolve_csv)\n    data = pd.read_csv(evolve_csv)\n    keys = [x.strip() for x in data.columns]\n    x = data.values\n    f = fitness(x)\n    j = np.argmax(f)  # max fitness index\n    plt.figure(figsize=(10, 12), tight_layout=True)\n    matplotlib.rc('font', **{'size': 8})\n    for i, k in enumerate(keys[7:]):\n        v = x[:, 7 + i]\n        mu = v[j]  # best single 
result\n        plt.subplot(6, 5, i + 1)\n        plt.scatter(v, f, c=hist2d(v, f, 20), cmap='viridis', alpha=.8, edgecolors='none')\n        plt.plot(mu, f.max(), 'k+', markersize=15)\n        plt.title(f'{k} = {mu:.3g}', fontdict={'size': 9})  # limit to 40 characters\n        if i % 5 != 0:\n            plt.yticks([])\n        print(f'{k:>15}: {mu:.3g}')\n    f = evolve_csv.with_suffix('.png')  # filename\n    plt.savefig(f, dpi=200)\n    plt.close()\n    print(f'Saved {f}')\n\n\ndef plot_results(file='path/to/results.csv', dir=''):\n    # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv')\n    save_dir = Path(file).parent if file else Path(dir)\n    fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True)\n    ax = ax.ravel()\n    files = list(save_dir.glob('results*.csv'))\n    assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.'\n    for fi, f in enumerate(files):\n        try:\n            data = pd.read_csv(f)\n            s = [x.strip() for x in data.columns]\n            x = data.values[:, 0]\n            for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]):\n                y = data.values[:, j]\n                # y[y == 0] = np.nan  # don't show zero values\n                ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8)\n                ax[i].set_title(s[j], fontsize=12)\n                # if j in [8, 9, 10]:  # share train and val loss y axes\n                #     ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])\n        except Exception as e:\n            print(f'Warning: Plotting error for {f}: {e}')\n    ax[1].legend()\n    fig.savefig(save_dir / 'results.png', dpi=200)\n    plt.close()\n\n\ndef profile_idetection(start=0, stop=0, labels=(), save_dir=''):\n    # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection()\n    ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel()\n    s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS']\n    files = list(Path(save_dir).glob('frames*.txt'))\n    for fi, f in enumerate(files):\n        try:\n            results = np.loadtxt(f, ndmin=2).T[:, 90:-30]  # clip first and last rows\n            n = results.shape[1]  # number of rows\n            x = np.arange(start, min(stop, n) if stop else n)\n            results = results[:, x]\n            t = (results[0] - results[0].min())  # set t0=0s\n            results[0] = x\n            for i, a in enumerate(ax):\n                if i < len(results):\n                    label = labels[fi] if len(labels) else f.stem.replace('frames_', '')\n                    a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5)\n                    a.set_title(s[i])\n                    a.set_xlabel('time (s)')\n                    # if fi == len(files) - 1:\n                    #     a.set_ylim(bottom=0)\n                    for side in ['top', 'right']:\n                        a.spines[side].set_visible(False)\n                else:\n                    a.remove()\n        except Exception as e:\n            print(f'Warning: Plotting error for {f}; {e}')\n    ax[1].legend()\n    plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200)\n\n\ndef save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=False, BGR=False, save=True):\n    # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. 
Save and/or return crop\n    xyxy = torch.tensor(xyxy).view(-1, 4)\n    b = xyxy2xywh(xyxy)  # boxes\n    if square:\n        b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1)  # attempt rectangle to square\n    b[:, 2:] = b[:, 2:] * gain + pad  # box wh * gain + pad\n    xyxy = xywh2xyxy(b).long()\n    clip_coords(xyxy, im.shape)\n    crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)]\n    if save:\n        file = Path(file)  # accept str or Path so .parent/.with_suffix work\n        file.parent.mkdir(parents=True, exist_ok=True)  # make directory\n        cv2.imwrite(str(increment_path(file).with_suffix('.jpg')), crop)\n    return crop\n"
  },
  {
    "path": "utils/torch_utils.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPyTorch utils\n\"\"\"\n\nimport datetime\nimport math\nimport os\nimport platform\nimport subprocess\nimport time\nfrom contextlib import contextmanager\nfrom copy import deepcopy\nfrom pathlib import Path\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom utils.general import LOGGER\n\ntry:\n    import thop  # for FLOPs computation\nexcept ImportError:\n    thop = None\n\n\n@contextmanager\ndef torch_distributed_zero_first(local_rank: int):\n    \"\"\"\n    Decorator to make all processes in distributed training wait for each local_master to do something.\n    \"\"\"\n    if local_rank not in [-1, 0]:\n        dist.barrier(device_ids=[local_rank])\n    yield\n    if local_rank == 0:\n        dist.barrier(device_ids=[0])\n\n\ndef date_modified(path=__file__):\n    # return human-readable file modification date, i.e. '2021-3-26'\n    t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)\n    return f'{t.year}-{t.month}-{t.day}'\n\n\ndef git_describe(path=Path(__file__).parent):  # path must be a directory\n    # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe\n    s = f'git -C {path} describe --tags --long --always'\n    try:\n        return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]\n    except subprocess.CalledProcessError as e:\n        return ''  # not a git repository\n\n\ndef select_device(device='', batch_size=0, newline=True):\n    # device = 'cpu' or '0' or '0,1,2,3'\n    s = f'YOLOv5 🚀 {git_describe() or date_modified()} torch {torch.__version__} '  # string\n    device = str(device).strip().lower().replace('cuda:', '')  # to string, 'cuda:0' to '0'\n    cpu = device == 'cpu'\n    if cpu:\n        os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # force torch.cuda.is_available() = False\n    elif device:  # non-cpu device requested\n        os.environ['CUDA_VISIBLE_DEVICES'] = device  # set environment variable\n        assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested'  # check availability\n\n    cuda = not cpu and torch.cuda.is_available()\n    if cuda:\n        devices = device.split(',') if device else '0'  # range(torch.cuda.device_count())  # i.e. 
0,1,6,7\n        n = len(devices)  # device count\n        if n > 1 and batch_size > 0:  # check batch_size is divisible by device_count\n            assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'\n        space = ' ' * (len(s) + 1)\n        for i, d in enumerate(devices):\n            p = torch.cuda.get_device_properties(i)\n            s += f\"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2:.0f}MiB)\\n\"  # bytes to MB\n    else:\n        s += 'CPU\\n'\n\n    if not newline:\n        s = s.rstrip()\n    LOGGER.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s)  # emoji-safe\n    return torch.device('cuda:0' if cuda else 'cpu')\n\n\ndef time_sync():\n    # pytorch-accurate time\n    if torch.cuda.is_available():\n        torch.cuda.synchronize()\n    return time.time()\n\n\ndef profile(input, ops, n=10, device=None):\n    # YOLOv5 speed/memory/FLOPs profiler\n    #\n    # Usage:\n    #     input = torch.randn(16, 3, 640, 640)\n    #     m1 = lambda x: x * torch.sigmoid(x)\n    #     m2 = nn.SiLU()\n    #     profile(input, [m1, m2], n=100)  # profile over 100 iterations\n\n    results = []\n    device = device or select_device()\n    print(f\"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}\"\n          f\"{'input':>24s}{'output':>24s}\")\n\n    for x in input if isinstance(input, list) else [input]:\n        x = x.to(device)\n        x.requires_grad = True\n        for m in ops if isinstance(ops, list) else [ops]:\n            m = m.to(device) if hasattr(m, 'to') else m  # device\n            m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m\n            tf, tb, t = 0, 0, [0, 0, 0]  # dt forward, backward\n            try:\n                flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2  # GFLOPs\n            except:\n                flops = 0\n\n            try:\n                for _ in range(n):\n                    t[0] = time_sync()\n                    y = m(x)\n                    t[1] = time_sync()\n                    try:\n                        _ = (sum(yi.sum() for yi in y) if isinstance(y, list) else y).sum().backward()\n                        t[2] = time_sync()\n                    except Exception as e:  # no backward method\n                        # print(e)  # for debug\n                        t[2] = float('nan')\n                    tf += (t[1] - t[0]) * 1000 / n  # ms per op forward\n                    tb += (t[2] - t[1]) * 1000 / n  # ms per op backward\n                mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0  # (GB)\n                s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'\n                s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'\n                p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0  # parameters\n                print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}')\n                results.append([p, flops, mem, tf, tb, s_in, s_out])\n            except Exception as e:\n                print(e)\n                results.append(None)\n            torch.cuda.empty_cache()\n    return results\n\n\ndef is_parallel(model):\n    # Returns True if model is of type DP or DDP\n    return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)\n\n\ndef 
de_parallel(model):\n    # De-parallelize a model: returns single-GPU model if model is of type DP or DDP\n    return model.module if is_parallel(model) else model\n\n\ndef initialize_weights(model):\n    for m in model.modules():\n        t = type(m)\n        if t is nn.Conv2d:\n            pass  # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n        elif t is nn.BatchNorm2d:\n            m.eps = 1e-3\n            m.momentum = 0.03\n        elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:\n            m.inplace = True\n\n\ndef find_modules(model, mclass=nn.Conv2d):\n    # Finds layer indices matching module class 'mclass'\n    return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]\n\n\ndef sparsity(model):\n    # Return global model sparsity\n    a, b = 0, 0\n    for p in model.parameters():\n        a += p.numel()\n        b += (p == 0).sum()\n    return b / a\n\n\ndef prune(model, amount=0.3):\n    # Prune model to requested global sparsity\n    import torch.nn.utils.prune as prune\n    print('Pruning model... ', end='')\n    for name, m in model.named_modules():\n        if isinstance(m, nn.Conv2d):\n            prune.l1_unstructured(m, name='weight', amount=amount)  # prune\n            prune.remove(m, 'weight')  # make permanent\n    print(' %.3g global sparsity' % sparsity(model))\n\n\ndef fuse_conv_and_bn(conv, bn):\n    # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/\n    fusedconv = nn.Conv2d(conv.in_channels,\n                          conv.out_channels,\n                          kernel_size=conv.kernel_size,\n                          stride=conv.stride,\n                          padding=conv.padding,\n                          groups=conv.groups,\n                          bias=True).requires_grad_(False).to(conv.weight.device)\n\n    # prepare filters\n    w_conv = conv.weight.clone().view(conv.out_channels, -1)\n    w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))\n    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))\n\n    # prepare spatial bias\n    b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias\n    b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))\n    fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)\n\n    return fusedconv\n\n\ndef model_info(model, verbose=False, img_size=640):\n    # Model information. img_size may be int or list, i.e. 
img_size=640 or img_size=[640, 320]\n    n_p = sum(x.numel() for x in model.parameters())  # number parameters\n    n_g = sum(x.numel() for x in model.parameters() if x.requires_grad)  # number gradients\n    if verbose:\n        print(f\"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}\")\n        for i, (name, p) in enumerate(model.named_parameters()):\n            name = name.replace('module_list.', '')\n            print('%5g %40s %9s %12g %20s %10.3g %10.3g' %\n                  (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))\n\n    try:  # FLOPs\n        from thop import profile\n        stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32\n        img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device)  # input\n        flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2  # stride GFLOPs\n        img_size = img_size if isinstance(img_size, list) else [img_size, img_size]  # expand if int/float\n        fs = ', %.1f GFLOPs' % (flops * img_size[0] / stride * img_size[1] / stride)  # 640x640 GFLOPs\n    except (ImportError, Exception):\n        fs = ''\n\n    LOGGER.info(f\"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}\")\n\n\ndef scale_img(img, ratio=1.0, same_shape=False, gs=32):  # img(16,3,256,416)\n    # scales img(bs,3,y,x) by ratio constrained to gs-multiple\n    if ratio == 1.0:\n        return img\n    else:\n        h, w = img.shape[2:]\n        s = (int(h * ratio), int(w * ratio))  # new size\n        img = F.interpolate(img, size=s, mode='bilinear', align_corners=False)  # resize\n        if not same_shape:  # pad/crop img\n            h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w))\n        return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447)  # value = imagenet mean\n\n\ndef copy_attr(a, b, include=(), exclude=()):\n    # Copy attributes from b to a, options to only include [...] and to exclude [...]\n    for k, v in b.__dict__.items():\n        if (len(include) and k not in include) or k.startswith('_') or k in exclude:\n            continue\n        else:\n            setattr(a, k, v)\n\n\nclass EarlyStopping:\n    # YOLOv5 simple early stopper\n    def __init__(self, patience=30):\n        self.best_fitness = 0.0  # i.e. mAP\n        self.best_epoch = 0\n        self.patience = patience or float('inf')  # epochs to wait after fitness stops improving to stop\n        self.possible_stop = False  # possible stop may occur next epoch\n\n    def __call__(self, epoch, fitness):\n        if fitness >= self.best_fitness:  # >= 0 to allow for early zero-fitness stage of training\n            self.best_epoch = epoch\n            self.best_fitness = fitness\n        delta = epoch - self.best_epoch  # epochs without improvement\n        self.possible_stop = delta >= (self.patience - 1)  # possible stop may occur next epoch\n        stop = delta >= self.patience  # stop training if patience exceeded\n        if stop:\n            LOGGER.info(f'Stopping training early as no improvement observed in last {self.patience} epochs. '\n                        f'Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\\n'\n                        f'To update EarlyStopping(patience={self.patience}) pass a new patience value, '\n                        f'i.e. 
`python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.')\n        return stop\n\n\nclass ModelEMA:\n    \"\"\" Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models\n    Keep a moving average of everything in the model state_dict (parameters and buffers).\n    This is intended to allow functionality like\n    https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage\n    A smoothed version of the weights is necessary for some training schemes to perform well.\n    This class is sensitive to where it is initialized in the sequence of model init,\n    GPU assignment and distributed training wrappers.\n    \"\"\"\n\n    def __init__(self, model, decay=0.9999, updates=0):\n        # Create EMA\n        self.ema = deepcopy(model.module if is_parallel(model) else model).eval()  # FP32 EMA\n        # if next(model.parameters()).device.type != 'cpu':\n        #     self.ema.half()  # FP16 EMA\n        self.updates = updates  # number of EMA updates\n        self.decay = lambda x: decay * (1 - math.exp(-x / 2000))  # decay exponential ramp (to help early epochs)\n        for p in self.ema.parameters():\n            p.requires_grad_(False)\n\n    def update(self, model):\n        # Update EMA parameters\n        with torch.no_grad():\n            self.updates += 1\n            d = self.decay(self.updates)\n\n            msd = model.module.state_dict() if is_parallel(model) else model.state_dict()  # model state_dict\n            for k, v in self.ema.state_dict().items():\n                if v.dtype.is_floating_point:\n                    v *= d\n                    v += (1 - d) * msd[k].detach()\n\n    def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):\n        # Update EMA attributes\n        copy_attr(self.ema, model, include, exclude)\n"
  },
  {
    "path": "val.py",
    "content": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nValidate a trained YOLOv5 model accuracy on a custom dataset\n\nUsage:\n    $ python path/to/val.py --data coco128.yaml --weights yolov5s.pt --img 640\n\"\"\"\n\nimport argparse\nimport json\nimport os\nimport sys\nfrom pathlib import Path\nfrom threading import Thread\n\nimport numpy as np\nimport torch\nfrom tqdm import tqdm\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[0]  # YOLOv5 root directory\nif str(ROOT) not in sys.path:\n    sys.path.append(str(ROOT))  # add ROOT to PATH\nROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative\n\nfrom models.common import DetectMultiBackend\nfrom utils.callbacks import Callbacks\nfrom utils.datasets import create_dataloader\nfrom utils.general import (LOGGER, box_iou, check_dataset, check_img_size, check_requirements, check_yaml,\n                           coco80_to_coco91_class, colorstr, increment_path, non_max_suppression, print_args,\n                           scale_coords, xywh2xyxy, xyxy2xywh)\nfrom utils.metrics import ConfusionMatrix, ap_per_class\nfrom utils.plots import output_to_target, plot_images, plot_val_study\nfrom utils.torch_utils import select_device, time_sync\n\n\ndef save_one_txt(predn, save_conf, shape, file):\n    # Save one txt result\n    gn = torch.tensor(shape)[[1, 0, 1, 0]]  # normalization gain whwh\n    for *xyxy, conf, cls in predn.tolist():\n        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh\n        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format\n        with open(file, 'a') as f:\n            f.write(('%g ' * len(line)).rstrip() % line + '\\n')\n\n\ndef save_one_json(predn, jdict, path, class_map):\n    # Save one JSON result {\"image_id\": 42, \"category_id\": 18, \"bbox\": [258.15, 41.29, 348.26, 243.78], \"score\": 0.236}\n    image_id = int(path.stem) if path.stem.isnumeric() else path.stem\n    box = xyxy2xywh(predn[:, :4])  # xywh\n    box[:, :2] -= box[:, 2:] / 2  # xy center to top-left corner\n    for p, b in zip(predn.tolist(), box.tolist()):\n        jdict.append({'image_id': image_id,\n                      'category_id': class_map[int(p[5])],\n                      'bbox': [round(x, 3) for x in b],\n                      'score': round(p[4], 5)})\n\n\ndef process_batch(detections, labels, iouv):\n    \"\"\"\n    Return correct predictions matrix. 
Both sets of boxes are in (x1, y1, x2, y2) format.\n    Arguments:\n        detections (Array[N, 6]), x1, y1, x2, y2, conf, class\n        labels (Array[M, 5]), class, x1, y1, x2, y2\n    Returns:\n        correct (Array[N, 10]), for 10 IoU levels\n    \"\"\"\n    correct = torch.zeros(detections.shape[0], iouv.shape[0], dtype=torch.bool, device=iouv.device)\n    iou = box_iou(labels[:, 1:], detections[:, :4])\n    x = torch.where((iou >= iouv[0]) & (labels[:, 0:1] == detections[:, 5]))  # IoU above threshold and classes match\n    if x[0].shape[0]:\n        matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()  # [label, detection, iou]\n        if x[0].shape[0] > 1:\n            matches = matches[matches[:, 2].argsort()[::-1]]\n            matches = matches[np.unique(matches[:, 1], return_index=True)[1]]\n            # matches = matches[matches[:, 2].argsort()[::-1]]\n            matches = matches[np.unique(matches[:, 0], return_index=True)[1]]\n        matches = torch.Tensor(matches).to(iouv.device)\n        correct[matches[:, 1].long()] = matches[:, 2:3] >= iouv\n    return correct\n\n\n@torch.no_grad()\ndef run(data,\n        weights=None,  # model.pt path(s)\n        batch_size=32,  # batch size\n        imgsz=640,  # inference size (pixels)\n        conf_thres=0.001,  # confidence threshold\n        iou_thres=0.6,  # NMS IoU threshold\n        task='val',  # train, val, test, speed or study\n        device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu\n        workers=8,  # max dataloader workers (per RANK in DDP mode)\n        single_cls=False,  # treat as single-class dataset\n        augment=False,  # augmented inference\n        verbose=False,  # verbose output\n        save_txt=False,  # save results to *.txt\n        save_hybrid=False,  # save label+prediction hybrid results to *.txt\n        save_conf=False,  # save confidences in --save-txt labels\n        save_json=False,  # save a COCO-JSON results file\n        project=ROOT / 'runs/val',  # save to project/name\n        name='exp',  # save to project/name\n        exist_ok=False,  # existing project/name ok, do not increment\n        half=True,  # use FP16 half-precision inference\n        dnn=False,  # use OpenCV DNN for ONNX inference\n        model=None,\n        dataloader=None,\n        save_dir=Path(''),\n        plots=True,\n        callbacks=Callbacks(),\n        compute_loss=None,\n        ):\n    # Initialize/load model and set device\n    training = model is not None\n    if training:  # called by train.py\n        device, pt, jit, engine = next(model.parameters()).device, True, False, False  # get model device, PyTorch model\n\n        half &= device.type != 'cpu'  # half precision only supported on CUDA\n        model.half() if half else model.float()\n    else:  # called directly\n        device = select_device(device, batch_size=batch_size)\n\n        # Directories\n        save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run\n        (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir\n\n        # Load model\n        model = DetectMultiBackend(weights, device=device, dnn=dnn)\n        stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine\n        imgsz = check_img_size(imgsz, s=stride)  # check image size\n        half &= (pt or jit or engine) and device.type != 'cpu'  # half precision only supported by PyTorch on CUDA\n        if pt or jit:\n            model.model.half() if half else 
model.model.float()\n        elif engine:\n            batch_size = model.batch_size\n        else:\n            half = False\n            batch_size = 1  # export.py models default to batch-size 1\n            device = torch.device('cpu')\n            LOGGER.info(f'Forcing --batch-size 1 square inference shape(1,3,{imgsz},{imgsz}) for non-PyTorch backends')\n\n        # Data\n        data = check_dataset(data)  # check\n\n    # Configure\n    model.eval()\n    is_coco = isinstance(data.get('val'), str) and data['val'].endswith('coco/val2017.txt')  # COCO dataset\n    nc = 1 if single_cls else int(data['nc'])  # number of classes\n    iouv = torch.linspace(0.5, 0.95, 10).to(device)  # iou vector for mAP@0.5:0.95\n    niou = iouv.numel()\n\n    # Dataloader\n    if not training:\n        model.warmup(imgsz=(1, 3, imgsz, imgsz), half=half)  # warmup\n        pad = 0.0 if task == 'speed' else 0.5\n        task = task if task in ('train', 'val', 'test') else 'val'  # path to train/val/test images\n        dataloader = create_dataloader(data[task], imgsz, batch_size, stride, single_cls, pad=pad, rect=pt,\n                                       workers=workers, prefix=colorstr(f'{task}: '))[0]\n\n    seen = 0\n    confusion_matrix = ConfusionMatrix(nc=nc)\n    names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)}\n    class_map = coco80_to_coco91_class() if is_coco else list(range(1000))\n    s = ('%20s' + '%11s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')\n    dt, p, r, f1, mp, mr, map50, map = [0.0, 0.0, 0.0], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0\n    loss = torch.zeros(3, device=device)\n    jdict, stats, ap, ap_class = [], [], [], []\n    pbar = tqdm(dataloader, desc=s, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')  # progress bar\n    for batch_i, (im, targets, paths, shapes) in enumerate(pbar):\n        t1 = time_sync()\n        if pt or jit or engine:\n            im = im.to(device, non_blocking=True)\n            targets = targets.to(device)\n        im = im.half() if half else im.float()  # uint8 to fp16/32\n        im /= 255  # 0 - 255 to 0.0 - 1.0\n        nb, _, height, width = im.shape  # batch size, channels, height, width\n        t2 = time_sync()\n        dt[0] += t2 - t1\n\n        # Inference\n        out, train_out = model(im) if training else model(im, augment=augment, val=True)  # inference, loss outputs\n        dt[1] += time_sync() - t2\n\n        # Loss\n        if compute_loss:\n            loss += compute_loss([x.float() for x in train_out], targets)[1]  # box, obj, cls\n\n        # NMS\n        targets[:, 2:] *= torch.Tensor([width, height, width, height]).to(device)  # to pixels\n        lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else []  # for autolabelling\n        t3 = time_sync()\n        out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls)\n        dt[2] += time_sync() - t3\n\n        # Metrics\n        for si, pred in enumerate(out):\n            labels = targets[targets[:, 0] == si, 1:]\n            nl = len(labels)\n            tcls = labels[:, 0].tolist() if nl else []  # target class\n            path, shape = Path(paths[si]), shapes[si][0]\n            seen += 1\n\n            if len(pred) == 0:\n                if nl:\n                    stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))\n                continue\n\n            # Predictions\n            if 
single_cls:\n                pred[:, 5] = 0\n            predn = pred.clone()\n            scale_coords(im[si].shape[1:], predn[:, :4], shape, shapes[si][1])  # native-space pred\n\n            # Evaluate\n            if nl:\n                tbox = xywh2xyxy(labels[:, 1:5])  # target boxes\n                scale_coords(im[si].shape[1:], tbox, shape, shapes[si][1])  # native-space labels\n                labelsn = torch.cat((labels[:, 0:1], tbox), 1)  # native-space labels\n                correct = process_batch(predn, labelsn, iouv)\n                if plots:\n                    confusion_matrix.process_batch(predn, labelsn)\n            else:\n                correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool)\n            stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))  # (correct, conf, pcls, tcls)\n\n            # Save/log\n            if save_txt:\n                save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / (path.stem + '.txt'))\n            if save_json:\n                save_one_json(predn, jdict, path, class_map)  # append to COCO-JSON dictionary\n            callbacks.run('on_val_image_end', pred, predn, path, names, im[si])\n\n        # Plot images\n        if plots and batch_i < 3:\n            f = save_dir / f'val_batch{batch_i}_labels.jpg'  # labels\n            Thread(target=plot_images, args=(im, targets, paths, f, names), daemon=True).start()\n            f = save_dir / f'val_batch{batch_i}_pred.jpg'  # predictions\n            Thread(target=plot_images, args=(im, output_to_target(out), paths, f, names), daemon=True).start()\n\n    # Compute metrics\n    stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy\n    if len(stats) and stats[0].any():\n        tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)\n        ap50, ap = ap[:, 0], ap.mean(1)  # AP@0.5, AP@0.5:0.95\n        mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()\n        nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class\n    else:\n        nt = torch.zeros(1)\n\n    # Print results\n    pf = '%20s' + '%11i' * 2 + '%11.3g' * 4  # print format\n    LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, map50, map))\n\n    # Print results per class\n    if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats):\n        for i, c in enumerate(ap_class):\n            LOGGER.info(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))\n\n    # Print speeds\n    t = tuple(x / seen * 1E3 for x in dt)  # speeds per image\n    if not training:\n        shape = (batch_size, 3, imgsz, imgsz)\n        LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t)\n\n    # Plots\n    if plots:\n        confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))\n        callbacks.run('on_val_end')\n\n    # Save JSON\n    if save_json and len(jdict):\n        w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else ''  # weights\n        anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json')  # annotations json\n        pred_json = str(save_dir / f\"{w}_predictions.json\")  # predictions json\n        LOGGER.info(f'\\nEvaluating pycocotools mAP... 
saving {pred_json}...')\n        with open(pred_json, 'w') as f:\n            json.dump(jdict, f)\n\n        try:  # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb\n            check_requirements(['pycocotools'])\n            from pycocotools.coco import COCO\n            from pycocotools.cocoeval import COCOeval\n\n            anno = COCO(anno_json)  # init annotations api\n            pred = anno.loadRes(pred_json)  # init predictions api\n            eval = COCOeval(anno, pred, 'bbox')\n            if is_coco:\n                eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files]  # image IDs to evaluate\n            eval.evaluate()\n            eval.accumulate()\n            eval.summarize()\n            map, map50 = eval.stats[:2]  # update results (mAP@0.5:0.95, mAP@0.5)\n        except Exception as e:\n            LOGGER.info(f'pycocotools unable to run: {e}')\n\n    # Return results\n    model.float()  # for training\n    if not training:\n        s = f\"\\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}\" if save_txt else ''\n        LOGGER.info(f\"Results saved to {colorstr('bold', save_dir)}{s}\")\n    maps = np.zeros(nc) + map\n    for i, c in enumerate(ap_class):\n        maps[c] = ap[i]\n    return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t\n\n\ndef parse_opt():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')\n    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')\n    parser.add_argument('--batch-size', type=int, default=32, help='batch size')\n    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')\n    parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold')\n    parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold')\n    parser.add_argument('--task', default='val', help='train, val, test, speed or study')\n    parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu')\n    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')\n    parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')\n    parser.add_argument('--augment', action='store_true', help='augmented inference')\n    parser.add_argument('--verbose', action='store_true', help='report mAP by class')\n    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')\n    parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')\n    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')\n    parser.add_argument('--save-json', action='store_true', help='save a COCO-JSON results file')\n    parser.add_argument('--project', default=ROOT / 'runs/val', help='save to project/name')\n    parser.add_argument('--name', default='exp', help='save to project/name')\n    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')\n    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')\n    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')\n    opt = parser.parse_args()\n    opt.data = check_yaml(opt.data)  # check YAML\n    opt.save_json |= opt.data.endswith('coco.yaml')\n    opt.save_txt |= opt.save_hybrid\n    print_args(FILE.stem, opt)\n    return opt\n\n\ndef main(opt):\n    check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop'))\n\n    if opt.task in ('train', 'val', 'test'):  # run normally\n        if opt.conf_thres > 0.001:  # https://github.com/ultralytics/yolov5/issues/1466\n            LOGGER.info(f'WARNING: confidence threshold {opt.conf_thres} >> 0.001 will produce invalid mAP values.')\n        run(**vars(opt))\n\n    else:\n        weights = opt.weights if isinstance(opt.weights, list) else [opt.weights]\n        opt.half = True  # FP16 for fastest results\n        if opt.task == 'speed':  # speed benchmarks\n            # python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt...\n            opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False\n            for opt.weights in weights:\n                run(**vars(opt), plots=False)\n\n        elif opt.task == 'study':  # speed vs mAP benchmarks\n            # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt...\n            for opt.weights in weights:\n                f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt'  # filename to save to\n                x, y = list(range(256, 1536 + 128, 128)), []  # x axis (image sizes), y axis\n                for opt.imgsz in x:  # img-size\n                    LOGGER.info(f'\\nRunning {f} --imgsz {opt.imgsz}...')\n                    r, _, t = run(**vars(opt), plots=False)\n                    y.append(r + t)  # results and times\n                np.savetxt(f, y, fmt='%10.4g')  # save\n            os.system('zip -r study.zip study_*.txt')\n            plot_val_study(x=x)  # plot\n\n\nif __name__ == \"__main__\":\n    opt = parse_opt()\n    main(opt)\n"
  }
]