Repository: KevinMuyaoGuo/yolov5s_for_satellite_imagery
Branch: master
Commit: c384516e6843
Files: 33
Total size: 3.4 MB

Directory structure:
gitextract_an16xrke/

├── .dockerignore
├── .gitattributes
├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── detect.py
├── hubconf.py
├── models/
│   ├── __init__.py
│   ├── common.py
│   ├── experimental.py
│   ├── export.py
│   ├── hub/
│   │   ├── yolov3-spp.yaml
│   │   ├── yolov5-fpn.yaml
│   │   └── yolov5-panet.yaml
│   ├── yolo.py
│   ├── yolov5l.yaml
│   ├── yolov5m.yaml
│   ├── yolov5s.yaml
│   └── yolov5x.yaml
├── requirements.txt
├── sotabench.py
├── test.py
├── train.py
├── tutorial.ipynb
├── utils/
│   ├── __init__.py
│   ├── activations.py
│   ├── datasets.py
│   ├── evolve.sh
│   ├── general.py
│   ├── google_utils.py
│   └── torch_utils.py
└── weights/
    └── download_weights.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: .dockerignore
================================================
# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------
# .git
.cache
.idea
runs
output
coco
storage.googleapis.com

data/samples/*
**/results*.txt
*.jpg

# Neural Network weights -----------------------------------------------------------------------------------------------
**/*.weights
**/*.pt
**/*.pth
**/*.onnx
**/*.mlmodel
**/*.torchscript


# Below Copied From .gitignore -----------------------------------------------------------------------------------------


# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv*/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/


# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------

# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon
Icon?

# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk


# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff:
.idea/*
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
.html  # Bokeh Plots
.pg  # TensorFlow Frozen Graphs
.avi # videos

# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml

# Gradle:
.idea/**/gradle.xml
.idea/**/libraries

# CMake
cmake-build-debug/
cmake-build-release/

# Mongo Explorer plugin:
.idea/**/mongoSettings.xml

## File-based project format:
*.iws

## Plugin-specific files:

# IntelliJ
out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Cursive Clojure plugin
.idea/replstate.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties


================================================
FILE: .gitattributes
================================================
# this drops notebooks from GitHub language stats
*.ipynb linguist-vendored


================================================
FILE: .gitignore
================================================
# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------
*.jpg
*.jpeg
*.png
*.bmp
*.tif
*.tiff
*.heic
*.JPG
*.JPEG
*.PNG
*.BMP
*.TIF
*.TIFF
*.HEIC
*.mp4
*.mov
*.MOV
*.avi
*.data
*.json

*.cfg
!cfg/yolov3*.cfg

storage.googleapis.com
runs/*
data/*
!data/samples/zidane.jpg
!data/samples/bus.jpg
!data/coco.names
!data/coco_paper.names
!data/coco.data
!data/coco_*.data
!data/coco_*.txt
!data/trainvalno5k.shapes
!data/*.sh

pycocotools/*
results*.txt
gcp_test*.sh

# MATLAB GitIgnore -----------------------------------------------------------------------------------------------------
*.m~
*.mat
!targets*.mat

# Neural Network weights -----------------------------------------------------------------------------------------------
*.weights
*.pt
*.onnx
*.mlmodel
*.torchscript
darknet53.conv.74
yolov3-tiny.conv.15

# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/


# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------

# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon
Icon?

# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk


# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff:
.idea/*
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
.html  # Bokeh Plots
.pg  # TensorFlow Frozen Graphs
.avi # videos

# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml

# Gradle:
.idea/**/gradle.xml
.idea/**/libraries

# CMake
cmake-build-debug/
cmake-build-release/

# Mongo Explorer plugin:
.idea/**/mongoSettings.xml

## File-based project format:
*.iws

## Plugin-specific files:

# IntelliJ
out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Cursive Clojure plugin
.idea/replstate.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties


================================================
FILE: Dockerfile
================================================
# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
FROM nvcr.io/nvidia/pytorch:20.08-py3

# Install dependencies
RUN pip install --upgrade pip
# COPY requirements.txt .
# RUN pip install -r requirements.txt
RUN pip install gsutil

# Create working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Copy contents
COPY . /usr/src/app

# Copy weights
#RUN python3 -c "from models import *; \
#attempt_download('weights/yolov5s.pt'); \
#attempt_download('weights/yolov5m.pt'); \
#attempt_download('weights/yolov5l.pt')"


# ---------------------------------------------------  Extras Below  ---------------------------------------------------

# Build and Push
# t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t
# for v in {300..303}; do t=ultralytics/coco:v$v && sudo docker build -t $t . && sudo docker push $t; done

# Pull and Run
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with local directory access
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/coco:/usr/src/coco $t

# Kill all
# sudo docker kill "$(sudo docker ps -q)"

# Kill all image-based
# sudo docker kill $(sudo docker ps -a -q --filter ancestor=ultralytics/yolov5:latest)

# Bash into running container
# sudo docker container exec -it ba65811811ab bash

# Bash into stopped container
# sudo docker commit 092b16b25c5b usr/resume && sudo docker run -it --gpus all --ipc=host -v "$(pwd)"/coco:/usr/src/coco --entrypoint=sh usr/resume

# Send weights to GCP
# python -c "from utils.general import *; strip_optimizer('runs/exp0_*/weights/best.pt', 'tmp.pt')" && gsutil cp tmp.pt gs://*.pt

# Clean up
# docker system prune -a --volumes


================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

================================================
FILE: README.md
================================================
# YOLOv5-Based Object Detection for Satellite Imagery



## 1 Data Preparation



### 1.1 Dataset Collection

This project uses the DOTA dataset: [download link](https://captain-whu.github.io/DOTA/dataset.html)

- Objects to be detected in the original dataset

  <table>
  	<tr>
    	<td>
        <center><img src="./README_figures/class_info.png" align="center" style="width: 80px;"></center>
        <center><font size=2>Figure 1-1 Object classes</font></center>
      </td>
    </tr>
    <tr>
      <td>
        <center><img src="./README_figures/16_classes.png" align="center" style="width: 550px;"></center>
         <center><font size=2>Figure 1-2 Overview of the objects to be detected</font></center>
      </td>   
    </tr>
  </table>

- Images in the original dataset

  As Figure 1-3 shows, the images in the original dataset vary in both size and resolution.

  <table>
  	<tr>
    	<td>
        <center><img src="./README_figures/original_image.png" align="center" style="width: 380px;"></center>
        <center><font size=2>Figure 1-3 Sample images from the dataset</font></center>
      </td>
    </tr>
  </table>

- Labels in the original dataset

  The content of a label file is shown in Figure 1-4.

  <table>
  	<tr>
    	<td>
        <center><img src="./README_figures/DOTA_format_label.png" align="center" style="width: 600px;"></center>
        <center><font size=2>Figure 1-4 Sample labels from the dataset</font></center>
      </td>
    </tr>
  </table>


  The label format is:

  ```
  'imagesource':<imagesource>
  'gsd':<gsd>
  <x1> <y1> <x2> <y2> <x3> <y3> <x4> <y4> <class> <difficulty>
  ```

  where:

  - 1, 2, 3, 4 index the four corner points of the annotated object
  - `x`, `y` are the coordinates of each corner point
  - `class` is the object category
  - `difficulty` is the detection difficulty (0: easy, 1: hard)



### 1.2 Label Format Conversion and Label Distribution Statistics

The YOLO model expects labels in a specific format:

```
<class> <x_center> <y_center> <width> <height>
```

where:

- `class` is the object category
- `x_center` is the x coordinate of the bounding-box center divided by the image width
- `y_center` is the y coordinate of the bounding-box center divided by the image height
- `width` is the bounding-box width divided by the image width
- `height` is the bounding-box height divided by the image height

Since the label format of the original dataset does not match what YOLO expects, we batch-converted the DOTA labels into the YOLO format. A converted label file is shown in Figure 1-5, and a conversion sketch follows the figure.

<table>
	<tr>
  	<td>
      <center><img src="./README_figures/YOLO_format_label.png" align="center" style="width: 550px;"></center>
      <center><font size=2>Figure 1-5 Labels after format conversion</font></center>
    </td>
  </tr>
</table>
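
A minimal sketch of the conversion step, assuming each DOTA annotation line reads `x1 y1 x2 y2 x3 y3 x4 y4 class difficulty` (the two header lines, `imagesource` and `gsd`, are skipped) and that `CLASS_NAMES` — a placeholder here — maps DOTA category names to integer ids; the oriented four-point polygon is reduced to its axis-aligned bounding box:

```
# Hypothetical DOTA -> YOLO label conversion; CLASS_NAMES is a placeholder.
CLASS_NAMES = ['plane', 'ship', 'storage-tank']  # truncated example list

def dota_to_yolo(line, img_w, img_h):
    parts = line.split()
    xs = [float(v) for v in parts[0:8:2]]  # x1, x2, x3, x4
    ys = [float(v) for v in parts[1:8:2]]  # y1, y2, y3, y4
    cls_id = CLASS_NAMES.index(parts[8])
    # Axis-aligned bounding box of the four corner points
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # Normalize center and size by the image dimensions, per the YOLO format
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f'{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}'

print(dota_to_yolo('10 20 110 20 110 80 10 80 plane 0', 1024, 1024))
# -> 0 0.058594 0.048828 0.097656 0.058594
```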


After the conversion, we computed the distribution of each label parameter.

- Distribution of object counts

  <table>
  	<tr>
    	<td>
        <center><img src="./README_figures/obj_class_dist.png" align="center" style="width: 300px;"></center>
        <center><font size=2>Figure 1-6 Bar chart of object counts per class</font></center>
      </td>
    </tr>
  </table>

- Distribution of object sizes

  <table>
  	<tr>
    	<td>
        <center><img src="./README_figures/obj_size_dist.png" align="center" style="width: 300px;"></center>
        <center><font size=2>Figure 1-7 Scatter heatmap of object sizes</font></center>
      </td>
    </tr>
  </table>

- Distribution of object locations

  <table>
  	<tr>
    	<td>
        <center><img src="./README_figures/obj_loc_dist.png" align="center" style="width: 300px;"></center>
        <center><font size=2>Figure 1-8 Scatter heatmap of object locations</font></center>
      </td>
    </tr>
  </table>



### 1.3 Image Splitting and Resizing

The YOLO model takes a fixed input image size, but the images in the original dataset vary in size, so we split each original image, following the locations of the annotated objects, into sub-images that each contain targets, and resized every sub-image to 1024×1024. Figure 1-9 shows an image before splitting and Figure 1-10 the resulting sub-images; a tiling sketch follows the figures.

<table>
	<tr>
  	<td>
      <center><img src="./README_figures/full_img.png" align="center" style="width: 450px;"></center>
      <center><font size=2>Figure 1-9 Image before splitting</font></center>
    </td>
  </tr>
  <tr>
    <td>
      <center><img src="./README_figures/split_img.png" align="center" style="width: 550px;"></center>
       <center><font size=2>Figure 1-10 Sub-images after splitting</font></center>
    </td>   
  </tr>
</table>
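
A minimal sketch of the splitting step, assuming a plain non-overlapping 1024×1024 grid (the project splits around annotated object locations, and label coordinates must also be shifted and re-normalized per sub-image; both details are omitted here):

```
import cv2

TILE = 1024

def split_image(path, out_prefix):
    img = cv2.imread(path)
    h, w = img.shape[:2]
    for row, y in enumerate(range(0, h, TILE)):
        for col, x in enumerate(range(0, w, TILE)):
            tile = img[y:y + TILE, x:x + TILE]
            # Edge tiles come out smaller than 1024x1024; resize them up
            # to the fixed input size
            tile = cv2.resize(tile, (TILE, TILE))
            cv2.imwrite(f'{out_prefix}_{row}_{col}.jpg', tile)

split_image('P0000.png', 'P0000_tile')  # example DOTA image name
```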




### 1.4 Dataset Directory Layout

YOLOv5 expects a specific directory structure when reading data. The adjusted dataset layout is shown in Figure 1-11 (and as a text tree after the figures); the contents of each directory are shown in Figures 1-12 through 1-15.

<table>
  <tr>
    <td>
      <center><img src="./README_figures/dataset_dir.png" align="center" style="width: 200px;"></center>
      <center><font size=2>Figure 1-11 Input data directory structure</font></center>
    </td>
  </tr>
  <tr>
  	<td>
      <center><img src="./README_figures/image_train_dir.png" align="center" style="width: 250px;"></center>
      <center><font size=2>Figure 1-12 Training-set image directory</font></center>
    </td>
    <td>
      <center><img src="./README_figures/image_val_dir.png" align="center" style="width: 220px;"></center>
      <center><font size=2>Figure 1-13 Validation-set image directory</font></center>
    </td>
  </tr>
  <tr>
  	<td>
      <center><img src="./README_figures/label_train_dir.png" align="center" style="width: 230px;"></center>
      <center><font size=2>Figure 1-14 Training-set label directory</font></center>
    </td>
    <td>
      <center><img src="./README_figures/label_val_dir.png" align="center" style="width: 200px;"></center>
      <center><font size=2>Figure 1-15 Validation-set label directory</font></center>
    </td>
  </tr>
</table>
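
Since the figures above are screenshots, here is the same layout as a text tree, assuming directory names from YOLOv5's usual convention, under which label paths are derived from image paths by replacing `images` with `labels` (treat the exact names as assumptions):

```
DOTA/
├── images/
│   ├── train/   # training sub-images
│   └── val/     # validation sub-images
└── labels/
    ├── train/   # one YOLO-format *.txt per training image
    └── val/     # one YOLO-format *.txt per validation image
```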






## 2 Model Training



### 2.1 Environment Setup

We trained the models on a remote GPU server (MistGPU); the server's hardware configuration is shown in Figure 2-1.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/server_config.png" align="center" style="width: 160px;"></center>      
      <center><font size=2>Figure 2-1 GPU server hardware configuration</font></center>
    </td>  
  </tr>
</table>


Server software configuration:

- Operating system: Ubuntu 18.04.4
- Development and runtime environment:
  - PyTorch version: 1.6.0
  - CUDA version: 10.2

Tool used to connect to the server: FinalShell 1.0

First, we modified the relevant configuration files of the project locally and verified that the pipeline runs end to end:

- Create a `DOTA.yaml` configuration file in the `data/` directory and set the custom dataset paths in it, as shown in Figure 2-2 (a sketch of the file follows the figure)
- Set the `nc` parameter in `yolov5s(/*m/*l/*x).yaml` to the number of custom object classes

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/DOTA_yaml.png" align="center" style="width: 450px;"></center>      
      <center><font size=2>Figure 2-2 Custom dataset paths</font></center>
    </td>  
  </tr>
</table>
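
A hedged sketch of what `data/DOTA.yaml` might contain (the paths and class names below are assumptions; adapt them to your own layout and keep `nc` consistent with the model yaml):

```
# Train/val image paths, relative to the yolov5/ directory
train: ../DOTA/images/train/
val: ../DOTA/images/val/

# Number of classes (must match nc in yolov5s/m/l/x.yaml)
nc: 16

# Class names, in label-id order (assumed DOTA v1.5 categories)
names: ['plane', 'ship', 'storage-tank', 'baseball-diamond', 'tennis-court',
        'basketball-court', 'ground-track-field', 'harbor', 'bridge',
        'large-vehicle', 'small-vehicle', 'helicopter', 'roundabout',
        'soccer-ball-field', 'swimming-pool', 'container-crane']
```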


We then zipped the locally verified YOLOv5 project and the preprocessed dataset, and uploaded both archives to the server. After uploading, unzip each with the unzip command; note that the YOLOv5 project directory and the dataset root directory must sit at the same level, as shown in Figure 2-3.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/server_dir.png" align="center" style="width: 350px;"></center>      
      <center><font size=2>Figure 2-3 Locations of the dataset and YOLOv5 directories</font></center>
    </td>  
  </tr>
</table>


Enter the `yolov5/` directory and install the project's dependencies with:

```
pip install -U -r requirements.txt
```



### 2.2 Training

From the `yolov5/` directory, run `train.py` to start training:

```
screen python3 train.py --weight weights/yolov5s.pt --batch 16 --epochs 100 --cache
```

Parameter notes:

- `--weight`: the pretrained weights to start from; shown here with the yolov5s pretrained weights
- `--batch`: the mini-batch size; 16 here
- `--epochs`: the number of training epochs; we train for 100
- `--cache`: cache the data to speed up training

Note: the leading `screen` starts the Linux terminal multiplexer, which keeps training running in the background; if an unstable network drops the connection to the server, `screen -r` reattaches to the training session.

The log output at the start of training, including the configuration and the network architecture, is shown in Figure 2-4.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/train_log.png" align="center" style="width: 800px;"></center>      
      <center><font size=2>Figure 2-4 Log output at the start of training</font></center>
    </td>  
  </tr>
</table>



Figure 2-5 shows the first 3 epochs of the training run.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/train_shot.png" align="center" style="width: 800px;"></center>      
      <center><font size=2>Figure 2-5 Training log output</font></center>
    </td>  
  </tr>
</table>


where:

- `Epoch`: the epoch index
- `gpu_mem`: GPU memory in use
- `GIoU`: mean GIoU loss; the printed value is L_GIoU = 1 - GIoU
- `obj`: mean objectness loss
- `cls`: mean classification loss
- `total`: sum of the three loss terms above (GIoU + obj + cls)
- `targets`: number of labeled targets in the current batch
- `img_size`: image size (resolution)
- `Class`: the object class being validated
- `Images`: total number of images
- `Targets`: total number of targets
- `P`: precision (TP / (TP + FP), i.e. "correct detections / all detections made")
- `R`: recall (TP / (TP + FN), i.e. "correct detections / all detections that should have been made")
- `mAP@.5`: AP is the area under the curve plotted with Precision and Recall as the two axes, "m" denotes the mean over classes, and the number after "@" is the IoU threshold for deciding positive vs. negative samples
- `mAP@.5:.95`: the mean mAP over IoU thresholds from 0.5 to 0.95 in steps of 0.05 (i.e. 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95)

Notes:

- GIoU (Generalized Intersection over Union) is defined as in Figure 2-6, and IoU as in Figure 2-7; a computational sketch follows this list.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/GIoU.png" align="center" style="width: 540px;"></center>      
      <center><font size=2>Figure 2-6 Definition of GIoU</font></center>
    </td>  
  </tr>
  <tr>    
    <td>      
      <center><img src="./README_figures/IoU.png" align="center" style="width: 400px;"></center>      
      <center><font size=2>Figure 2-7 Definition of IoU</font></center>
    </td>  
  </tr>
</table>


- Definition of mAP (mean Average Precision):

  - mAP: mean Average Precision, i.e. the mean of the per-class AP values
  - AP: the area under the PR curve
  - PR curve: the Precision-Recall curve
  - Precision: TP / (TP + FP)
  - Recall: TP / (TP + FN)
  - TP: the number of detection boxes with IoU > 0.5 (each ground truth is counted at most once)
  - FP: the number of detection boxes with IoU <= 0.5, plus any redundant boxes detecting an already-matched ground truth
  - FN: the number of ground truths that were not detected

  See also this mAP explainer: [理解目标检测当中的mAP](https://blog.csdn.net/hsqyc/article/details/81702437?utm_medium=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase)
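
A minimal computational sketch of IoU and GIoU for two axis-aligned boxes in `(x1, y1, x2, y2)` form (illustrative only, not the repository's vectorized implementation):

```
def iou_giou(a, b):
    # Intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = area(a) + area(b) - intersection
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C; GIoU subtracts the fraction of C not covered
    # by the union, so it stays informative even for non-overlapping boxes
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area
    return iou, giou

print(iou_giou((0, 0, 10, 10), (5, 5, 15, 15)))  # (0.1428..., -0.0793...)
```

Under the TP definition above, a detection counts as a true positive when its IoU with a not-yet-matched ground truth exceeds 0.5.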

After one epoch of training, the `runs/` directory contains, for each training batch of that epoch, an image with the batch's labels drawn on it, as shown in Figure 2-8.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/train_batch_labels.jpg" align="center" style="width:600px;"></center>      
      <center><font size=2>Figure 2-8 Training batch labels</font></center>
    </td>  
  </tr>
</table>




### 2.3 Training and Validation Results

Using the method and dataset described above, we trained our own weights starting from the COCO-pretrained yolov5s and yolov5m models. Training took about 10 hours for yolov5s and about 20 hours for yolov5m.

We recorded the training and validation metrics for every epoch and plotted the evaluation curves, shown in Figure 2-9 for yolov5s and Figure 2-10 for yolov5m.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/s_results.png" align="center" style="width: 700px;"></center>      
      <center><font size=2>Figure 2-9 Evaluation metric curves for yolov5s</font></center>
    </td>  
  </tr>
  <tr>    
    <td>      
      <center><img src="./README_figures/m_results.png" align="center" style="width: 700px;"></center>      
      <center><font size=2>Figure 2-10 Evaluation metric curves for yolov5m</font></center>
    </td>  
  </tr>
</table>


where:

- GIoU: mean GIoU loss on the training set; the lower the value, the more precise the detection boxes
- val GIoU: mean GIoU loss on the validation set; likewise, lower means more precise boxes
- Objectness: mean objectness loss on the training set; lower values mean more accurate detection
- val Objectness: mean objectness loss on the validation set; lower values mean more accurate detection
- Classification: mean classification loss on the training set; lower values mean more accurate classification
- val Classification: mean classification loss on the validation set; lower values mean more accurate classification
- Precision: the precision
- Recall: the recall
- mAP@0.5: the mean AP at an IoU threshold of 0.5
- mAP@0.5:0.95: the mean mAP over IoU thresholds from 0.5 to 0.95 in steps of 0.05

The curves show that both yolov5s and yolov5m achieved good results, with most metrics converging as the number of epochs grows. We draw the following conclusions:

1. The validation objectness loss of yolov5m trends slightly upward after 50 epochs, indicating mild overfitting on the validation set
2. Both yolov5s and yolov5m classify objects well
3. After the same number of training epochs, yolov5m reaches a precision above 0.6, higher than that of yolov5s
4. The converged mAP values of yolov5s and yolov5m differ only slightly, with yolov5m a bit higher, i.e. yolov5m performs slightly better than yolov5s





## 3 Model Testing



### 3.1 Model Performance Tests

We ran satellite-image object detection tests with the trained yolov5s and yolov5m models; the results are shown in Figures 3-1 through 3-4.

<table>
  <tr>
    <td>
      <center><img src="./README_figures/s_test_res1.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-1-a yolov5s test result 1</font></center>
    </td>
    <td>
      <center><img src="./README_figures/m_test_res1.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-1-b yolov5m test result 1</font></center>
    </td>
  </tr>
  <tr>
  	<td>
      <center><img src="./README_figures/s_test_res2.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-2-a yolov5s test result 2</font></center>
    </td>
    <td>
      <center><img src="./README_figures/m_test_res2.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-2-b yolov5m test result 2</font></center>
    </td>
  </tr>
  <tr>
  	<td>
      <center><img src="./README_figures/s_test_res3.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-3-a yolov5s test result 3</font></center>
    </td>
    <td>
      <center><img src="./README_figures/m_test_res3.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-3-b yolov5m test result 3</font></center>
    </td>
  </tr>
  <tr>
  	<td>
      <center><img src="./README_figures/s_test_res4.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-4-a yolov5s test result 4</font></center>
    </td>
    <td>
      <center><img src="./README_figures/m_test_res4.png" align="center" style="width: 350px;"></center>
      <center><font size=2>Figure 3-4-b yolov5m test result 4</font></center>
    </td>
  </tr>
</table>
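
For reference, the tests above can also be run programmatically with the repository's own modules. A minimal sketch following the logic of detect.py (the weights path `best.pt` and the `inference/images` folder are placeholders, and the thresholds match detect.py's defaults):

```python
import torch
from models.experimental import attempt_load
from utils.datasets import LoadImages
from utils.general import non_max_suppression, scale_coords

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load('best.pt', map_location=device)  # placeholder weights path

with torch.no_grad():
    for path, img, im0, _ in LoadImages('inference/images', img_size=640):
        img = torch.from_numpy(img).to(device).float() / 255.0  # CHW, letterboxed by LoadImages
        pred = model(img.unsqueeze(0))[0]  # raw predictions
        det = non_max_suppression(pred, conf_thres=0.4, iou_thres=0.5)[0]  # single image
        if det is not None and len(det):
            # rescale boxes from the letterboxed size back to the original image
            det[:, :4] = scale_coords(img.shape[1:], det[:, :4], im0.shape).round()
        print(path, 0 if det is None else len(det), 'objects detected')
```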


The per-image inference time of the yolov5s and yolov5m models is shown in Figures 3-5 and 3-6.

<table>  
  <tr>    
    <td>      
      <center><img src="./README_figures/s_infer_shot.png" align="center" style="width: 700px;"></center>      
      <center><font size=2>Figure 3-5 yolov5s model inference time</font></center>    
    </td>  
  </tr>
  <tr>    
    <td>      
      <center><img src="./README_figures/m_infer_shot.png" align="center" style="width: 700px;"></center>      
      <center><font size=2>Figure 3-6 yolov5m model inference time</font></center>    
    </td>  
  </tr>
</table>




### 3.2 Summary of Model Performance

Overall, the yolov5s and yolov5m models perform similarly: both can detect most of the objects in a given image and classify them correctly.

Summary of model performance:

- yolov5m detects more of the objects in a given image than yolov5s
- In a very small number of cases yolov5m misclassifies individual objects in an image; this happens even more rarely with yolov5s
- Both yolov5s and yolov5m frame objects accurately, and both detect incomplete objects at the image edges
- Both yolov5s and yolov5m still detect objects when the image resolution is low or when objects are partially covered by semi-transparent clouds
- yolov5s needs roughly half the per-image inference time of yolov5m



## 4 Future Work

In the future we plan to train the yolov5l and yolov5x models as well, increase the training epochs of all models to 300, and compare the strengths and weaknesses of each model.


================================================
FILE: detect.py
================================================
import argparse
import os
import platform
import shutil
import time
from pathlib import Path

import cv2
import torch
import torch.backends.cudnn as cudnn
from numpy import random

from models.experimental import attempt_load
from utils.datasets import LoadStreams, LoadImages
from utils.general import (
    check_img_size, non_max_suppression, apply_classifier, scale_coords,
    xyxy2xywh, plot_one_box, strip_optimizer, set_logging)
from utils.torch_utils import select_device, load_classifier, time_synchronized


def detect(save_img=False):
    out, source, weights, view_img, save_txt, imgsz = \
        opt.output, opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size
    webcam = source.isnumeric() or source.startswith('rtsp') or source.startswith('http') or source.endswith('.txt')

    # Initialize
    set_logging()
    device = select_device(opt.device)
    if os.path.exists(out):
        shutil.rmtree(out)  # delete output folder
    os.makedirs(out)  # make new output folder
    half = device.type != 'cpu'  # half precision only supported on CUDA

    # Load model
    model = attempt_load(weights, map_location=device)  # load FP32 model
    imgsz = check_img_size(imgsz, s=model.stride.max())  # check img_size
    if half:
        model.half()  # to FP16

    # Second-stage classifier
    classify = False
    if classify:
        modelc = load_classifier(name='resnet101', n=2)  # initialize
        modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model'])  # load weights
        modelc.to(device).eval()

    # Set Dataloader
    vid_path, vid_writer = None, None
    if webcam:
        view_img = True
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz)
    else:
        save_img = True
        dataset = LoadImages(source, img_size=imgsz)

    # Get names and colors
    names = model.module.names if hasattr(model, 'module') else model.names
    colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))]

    # Run inference
    t0 = time.time()
    img = torch.zeros((1, 3, imgsz, imgsz), device=device)  # init img
    _ = model(img.half() if half else img) if device.type != 'cpu' else None  # run once
    for path, img, im0s, vid_cap in dataset:
        img = torch.from_numpy(img).to(device)
        img = img.half() if half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)

        # Inference
        t1 = time_synchronized()
        pred = model(img, augment=opt.augment)[0]

        # Apply NMS
        pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
        t2 = time_synchronized()

        # Apply Classifier
        if classify:
            pred = apply_classifier(pred, modelc, img, im0s)

        # Process detections
        for i, det in enumerate(pred):  # detections per image
            if webcam:  # batch_size >= 1
                p, s, im0 = path[i], '%g: ' % i, im0s[i].copy()
            else:
                p, s, im0 = path, '', im0s

            save_path = str(Path(out) / Path(p).name)
            txt_path = str(Path(out) / Path(p).stem) + ('_%g' % dataset.frame if dataset.mode == 'video' else '')
            s += '%gx%g ' % img.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            if det is not None and len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += '%g %ss, ' % (n, names[int(c)])  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    if save_txt:  # Write to file
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        with open(txt_path + '.txt', 'a') as f:
                            f.write(('%g ' * 5 + '\n') % (cls, *xywh))  # label format

                    if save_img or view_img:  # Add bbox to image
                        label = '%s %.2f' % (names[int(cls)], conf)
                        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)

            # Print time (inference + NMS)
            print('%sDone. (%.3fs)' % (s, t2 - t1))

            # Stream results
            if view_img:
                cv2.imshow(p, im0)
                if cv2.waitKey(1) == ord('q'):  # q to quit
                    raise StopIteration

            # Save results (image with detections)
            if save_img:
                if dataset.mode == 'images':
                    cv2.imwrite(save_path, im0)
                else:
                    if vid_path != save_path:  # new video
                        vid_path = save_path
                        if isinstance(vid_writer, cv2.VideoWriter):
                            vid_writer.release()  # release previous video writer

                        fourcc = 'mp4v'  # output video codec
                        fps = vid_cap.get(cv2.CAP_PROP_FPS)
                        w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                        h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))
                    vid_writer.write(im0)

    if save_txt or save_img:
        print('Results saved to %s' % Path(out))
        if platform.system() == 'Darwin' and not opt.update:  # MacOS
            os.system('open ' + save_path)

    print('Done. (%.3fs)' % (time.time() - t0))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--source', type=str, default='inference/images', help='source')  # file/folder, 0 for webcam
    parser.add_argument('--output', type=str, default='inference/output', help='output folder')  # output folder
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.4, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.5, help='IOU threshold for NMS')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='display results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--update', action='store_true', help='update all models')
    opt = parser.parse_args()
    print(opt)

    with torch.no_grad():
        if opt.update:  # update all models (to fix SourceChangeWarning)
            for opt.weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']:
                detect()
                strip_optimizer(opt.weights)
        else:
            detect()


================================================
FILE: hubconf.py
================================================
"""File for accessing YOLOv5 via PyTorch Hub https://pytorch.org/hub/

Usage:
    import torch
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, channels=3, classes=80)
"""

dependencies = ['torch', 'yaml']
import os

import torch

from models.yolo import Model
from utils.google_utils import attempt_download


def create(name, pretrained, channels, classes):
    """Creates a specified YOLOv5 model

    Arguments:
        name (str): name of model, i.e. 'yolov5s'
        pretrained (bool): load pretrained weights into the model
        channels (int): number of input channels
        classes (int): number of model classes

    Returns:
        pytorch model
    """
    config = os.path.join(os.path.dirname(__file__), 'models', '%s.yaml' % name)  # model.yaml path
    try:
        model = Model(config, channels, classes)
        if pretrained:
            ckpt = '%s.pt' % name  # checkpoint filename
            attempt_download(ckpt)  # download if not found locally
            state_dict = torch.load(ckpt, map_location=torch.device('cpu'))['model'].float().state_dict()  # to FP32
            state_dict = {k: v for k, v in state_dict.items() if model.state_dict()[k].shape == v.shape}  # filter
            model.load_state_dict(state_dict, strict=False)  # load
        return model

    except Exception as e:
        help_url = 'https://github.com/ultralytics/yolov5/issues/36'
        s = 'Cache may be out of date, deleting cache and retrying may solve this. See %s for help.' % help_url
        raise Exception(s) from e


def yolov5s(pretrained=False, channels=3, classes=80):
    """YOLOv5-small model from https://github.com/ultralytics/yolov5

    Arguments:
        pretrained (bool): load pretrained weights into the model, default=False
        channels (int): number of input channels, default=3
        classes (int): number of model classes, default=80

    Returns:
        pytorch model
    """
    return create('yolov5s', pretrained, channels, classes)


def yolov5m(pretrained=False, channels=3, classes=80):
    """YOLOv5-medium model from https://github.com/ultralytics/yolov5

    Arguments:
        pretrained (bool): load pretrained weights into the model, default=False
        channels (int): number of input channels, default=3
        classes (int): number of model classes, default=80

    Returns:
        pytorch model
    """
    return create('yolov5m', pretrained, channels, classes)


def yolov5l(pretrained=False, channels=3, classes=80):
    """YOLOv5-large model from https://github.com/ultralytics/yolov5

    Arguments:
        pretrained (bool): load pretrained weights into the model, default=False
        channels (int): number of input channels, default=3
        classes (int): number of model classes, default=80

    Returns:
        pytorch model
    """
    return create('yolov5l', pretrained, channels, classes)


def yolov5x(pretrained=False, channels=3, classes=80):
    """YOLOv5-xlarge model from https://github.com/ultralytics/yolov5

    Arguments:
        pretrained (bool): load pretrained weights into the model, default=False
        channels (int): number of input channels, default=3
        classes (int): number of model classes, default=80

    Returns:
        pytorch model
    """
    return create('yolov5x', pretrained, channels, classes)


================================================
FILE: models/__init__.py
================================================


================================================
FILE: models/common.py
================================================
# This file contains modules common to various models
import math

import torch
import torch.nn as nn


def autopad(k, p=None):  # kernel, padding
    # Pad to 'same'
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


def DWConv(c1, c2, k=1, s=1, act=True):
    # Depthwise convolution
    return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act)


class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Conv, self).__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.Hardswish() if act else nn.Identity()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        return self.act(self.conv(x))


class Bottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion
        super(Bottleneck, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_, c2, 3, 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class BottleneckCSP(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super(BottleneckCSP, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)
        self.act = nn.LeakyReLU(0.1, inplace=True)
        self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))


class SPP(nn.Module):
    # Spatial pyramid pooling layer used in YOLOv3-SPP
    def __init__(self, c1, c2, k=(5, 9, 13)):
        super(SPP, self).__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])

    def forward(self, x):
        x = self.cv1(x)
        return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))


class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))


class Concat(nn.Module):
    # Concatenate a list of tensors along dimension
    def __init__(self, dimension=1):
        super(Concat, self).__init__()
        self.d = dimension

    def forward(self, x):
        return torch.cat(x, self.d)


class Flatten(nn.Module):
    # Use after nn.AdaptiveAvgPool2d(1) to remove last 2 dimensions
    @staticmethod
    def forward(x):
        return x.view(x.size(0), -1)


class Classify(nn.Module):
    # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Classify, self).__init__()
        self.aap = nn.AdaptiveAvgPool2d(1)  # to x(b,c1,1,1)
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)  # to x(b,c2,1,1)
        self.flat = Flatten()

    def forward(self, x):
        z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1)  # cat if list
        return self.flat(self.conv(z))  # flatten to x(b,c2)


================================================
FILE: models/experimental.py
================================================
# This file contains experimental modules

import numpy as np
import torch
import torch.nn as nn

from models.common import Conv, DWConv
from utils.google_utils import attempt_download


class CrossConv(nn.Module):
    # Cross Convolution Downsample
    def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
        # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
        super(CrossConv, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, (1, k), (1, s))
        self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class C3(nn.Module):
    # Cross Convolution CSP
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super(C3, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)
        self.act = nn.LeakyReLU(0.1, inplace=True)
        self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))


class Sum(nn.Module):
    # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
    def __init__(self, n, weight=False):  # n: number of inputs
        super(Sum, self).__init__()
        self.weight = weight  # apply weights boolean
        self.iter = range(n - 1)  # iter object
        if weight:
            self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True)  # layer weights

    def forward(self, x):
        y = x[0]  # no weight
        if self.weight:
            w = torch.sigmoid(self.w) * 2
            for i in self.iter:
                y = y + x[i + 1] * w[i]
        else:
            for i in self.iter:
                y = y + x[i + 1]
        return y


class GhostConv(nn.Module):
    # Ghost Convolution https://github.com/huawei-noah/ghostnet
    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out, kernel, stride, groups
        super(GhostConv, self).__init__()
        c_ = c2 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, k, s, g, act)
        self.cv2 = Conv(c_, c_, 5, 1, c_, act)

    def forward(self, x):
        y = self.cv1(x)
        return torch.cat([y, self.cv2(y)], 1)


class GhostBottleneck(nn.Module):
    # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
    def __init__(self, c1, c2, k, s):
        super(GhostBottleneck, self).__init__()
        c_ = c2 // 2
        self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1),  # pw
                                  DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(),  # dw
                                  GhostConv(c_, c2, 1, 1, act=False))  # pw-linear
        self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
                                      Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()

    def forward(self, x):
        return self.conv(x) + self.shortcut(x)


class MixConv2d(nn.Module):
    # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
    def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
        super(MixConv2d, self).__init__()
        groups = len(k)
        if equal_ch:  # equal c_ per group
            i = torch.linspace(0, groups - 1E-6, c2).floor()  # c2 indices
            c_ = [(i == g).sum() for g in range(groups)]  # intermediate channels
        else:  # equal weight.numel() per group
            b = [c2] + [0] * groups
            a = np.eye(groups + 1, groups, k=-1)
            a -= np.roll(a, 1, axis=1)
            a *= np.array(k) ** 2
            a[0] = 1
            c_ = np.linalg.lstsq(a, b, rcond=None)[0].round()  # solve for equal weight indices, ax = b

        self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))


class Ensemble(nn.ModuleList):
    # Ensemble of models
    def __init__(self):
        super(Ensemble, self).__init__()

    def forward(self, x, augment=False):
        y = []
        for module in self:
            y.append(module(x, augment)[0])
        # y = torch.stack(y).max(0)[0]  # max ensemble
        # y = torch.cat(y, 1)  # nms ensemble
        y = torch.stack(y).mean(0)  # mean ensemble
        return y, None  # inference, train output


def attempt_load(weights, map_location=None):
    # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
    model = Ensemble()
    for w in weights if isinstance(weights, list) else [weights]:
        attempt_download(w)
        model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval())  # load FP32 model

    if len(model) == 1:
        return model[-1]  # return model
    else:
        print('Ensemble created with %s\n' % weights)
        for k in ['names', 'stride']:
            setattr(model, k, getattr(model[-1], k))
        return model  # return ensemble


================================================
FILE: models/export.py
================================================
"""Exports a YOLOv5 *.pt model to ONNX and TorchScript formats

Usage:
    $ export PYTHONPATH="$PWD" && python models/export.py --weights ./weights/yolov5s.pt --img 640 --batch 1
"""

import argparse

import torch
import torch.nn as nn

import models
from models.experimental import attempt_load
from utils.activations import Hardswish
from utils.general import set_logging

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='./yolov5s.pt', help='weights path')  # from yolov5/models/
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size')  # height, width
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    opt = parser.parse_args()
    opt.img_size *= 2 if len(opt.img_size) == 1 else 1  # expand
    print(opt)
    set_logging()

    # Input
    img = torch.zeros((opt.batch_size, 3, *opt.img_size))  # image size(1,3,320,192) iDetection

    # Load PyTorch model
    model = attempt_load(opt.weights, map_location=torch.device('cpu'))  # load FP32 model

    # Update model
    for k, m in model.named_modules():
        m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
        if isinstance(m, models.common.Conv) and isinstance(m.act, nn.Hardswish):
            m.act = Hardswish()  # assign activation
        # if isinstance(m, models.yolo.Detect):
        #     m.forward = m.forward_export  # assign forward (optional)
    model.model[-1].export = True  # set Detect() layer export=True
    y = model(img)  # dry run

    # TorchScript export
    try:
        print('\nStarting TorchScript export with torch %s...' % torch.__version__)
        f = opt.weights.replace('.pt', '.torchscript.pt')  # filename
        ts = torch.jit.trace(model, img)
        ts.save(f)
        print('TorchScript export success, saved as %s' % f)
    except Exception as e:
        print('TorchScript export failure: %s' % e)

    # ONNX export
    try:
        import onnx

        print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
        f = opt.weights.replace('.pt', '.onnx')  # filename
        torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'],
                          output_names=['classes', 'boxes'] if y is None else ['output'])

        # Checks
        onnx_model = onnx.load(f)  # load onnx model
        onnx.checker.check_model(onnx_model)  # check onnx model
        # print(onnx.helper.printable_graph(onnx_model.graph))  # print a human readable model
        print('ONNX export success, saved as %s' % f)
    except Exception as e:
        print('ONNX export failure: %s' % e)

    # CoreML export
    try:
        import coremltools as ct

        print('\nStarting CoreML export with coremltools %s...' % ct.__version__)
        # convert model from torchscript and apply pixel scaling as per detect.py
        model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])
        f = opt.weights.replace('.pt', '.mlmodel')  # filename
        model.save(f)
        print('CoreML export success, saved as %s' % f)
    except Exception as e:
        print('CoreML export failure: %s' % e)

    # Finish
    print('\nExport complete. Visualize with https://github.com/lutzroeder/netron.')


================================================
FILE: models/hub/yolov3-spp.yaml
================================================
# parameters
nc: 80  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# darknet53 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [32, 3, 1]],  # 0
   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2
   [-1, 1, Bottleneck, [64]],
   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4
   [-1, 2, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 5-P3/8
   [-1, 8, Bottleneck, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 7-P4/16
   [-1, 8, Bottleneck, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
   [-1, 4, Bottleneck, [1024]],  # 10
  ]

# YOLOv3-SPP head
head:
  [[-1, 1, Bottleneck, [1024, False]],
   [-1, 1, SPP, [512, [5, 9, 13]]],
   [-1, 1, Conv, [1024, 3, 1]],
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 1, Conv, [1024, 3, 1]],  # 15 (P5/32-large)

   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Bottleneck, [512, False]],
   [-1, 1, Bottleneck, [512, False]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [512, 3, 1]],  # 22 (P4/16-medium)

   [-2, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Bottleneck, [256, False]],
   [-1, 2, Bottleneck, [256, False]],  # 27 (P3/8-small)

   [[27, 22, 15], 1, Detect, [nc, anchors]],   # Detect(P3, P4, P5)
  ]


================================================
FILE: models/hub/yolov5-fpn.yaml
================================================
# parameters
nc: 80  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 6, BottleneckCSP, [1024]],  # 9
  ]

# YOLOv5 FPN head
head:
  [[-1, 3, BottleneckCSP, [1024, False]],  # 10 (P5/32-large)

   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, BottleneckCSP, [512, False]],  # 14 (P4/16-medium)

   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, BottleneckCSP, [256, False]],  # 18 (P3/8-small)

   [[18, 14, 10], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


================================================
FILE: models/hub/yolov5-panet.yaml
================================================
# parameters
nc: 80  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [116,90, 156,198, 373,326]  # P5/32
  - [30,61, 62,45, 59,119]  # P4/16
  - [10,13, 16,30, 33,23]  # P3/8

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 PANet head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P5, P4, P3)
  ]


================================================
FILE: models/yolo.py
================================================
import argparse
import logging
import math
from copy import deepcopy
from pathlib import Path

import torch
import torch.nn as nn

from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, Concat
from models.experimental import MixConv2d, CrossConv, C3
from utils.general import check_anchor_order, make_divisible, check_file, set_logging
from utils.torch_utils import (
    time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, select_device)

logger = logging.getLogger(__name__)


class Detect(nn.Module):
    stride = None  # strides computed during build
    export = False  # onnx export

    def __init__(self, nc=80, anchors=(), ch=()):  # detection layer
        super(Detect, self).__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        a = torch.tensor(anchors).float().view(self.nl, -1, 2)
        self.register_buffer('anchors', a)  # shape(nl,na,2)
        self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        self.training |= self.export
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if not self.training:  # inference
                if self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

                y = x[i].sigmoid()
                y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
                y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                z.append(y.view(bs, -1, self.no))

        return x if self.training else (torch.cat(z, 1), x)

    @staticmethod
    def _make_grid(nx=20, ny=20):
        yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
        return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()


class Model(nn.Module):
    def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None):  # model, input channels, number of classes
        super(Model, self).__init__()
        if isinstance(cfg, dict):
            self.yaml = cfg  # model dict
        else:  # is *.yaml
            import yaml  # for torch hub
            self.yaml_file = Path(cfg).name
            with open(cfg) as f:
                self.yaml = yaml.load(f, Loader=yaml.FullLoader)  # model dict

        # Define model
        if nc and nc != self.yaml['nc']:
            print('Overriding %s nc=%g with nc=%g' % (cfg, self.yaml['nc'], nc))
            self.yaml['nc'] = nc  # override yaml value
        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist, ch_out
        # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])

        # Build strides, anchors
        m = self.model[-1]  # Detect()
        if isinstance(m, Detect):
            s = 128  # 2x min stride
            m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward
            m.anchors /= m.stride.view(-1, 1, 1)
            check_anchor_order(m)
            self.stride = m.stride
            self._initialize_biases()  # only run once
            # print('Strides: %s' % m.stride.tolist())

        # Init weights, biases
        initialize_weights(self)
        self.info()
        print('')

    def forward(self, x, augment=False, profile=False):
        if augment:
            img_size = x.shape[-2:]  # height, width
            s = [1, 0.83, 0.67]  # scales
            f = [None, 3, None]  # flips (2-ud, 3-lr)
            y = []  # outputs
            for si, fi in zip(s, f):
                xi = scale_img(x.flip(fi) if fi else x, si)
                yi = self.forward_once(xi)[0]  # forward
                # cv2.imwrite('img%g.jpg' % s, 255 * xi[0].numpy().transpose((1, 2, 0))[:, :, ::-1])  # save
                yi[..., :4] /= si  # de-scale
                if fi == 2:
                    yi[..., 1] = img_size[0] - yi[..., 1]  # de-flip ud
                elif fi == 3:
                    yi[..., 0] = img_size[1] - yi[..., 0]  # de-flip lr
                y.append(yi)
            return torch.cat(y, 1), None  # augmented inference, train
        else:
            return self.forward_once(x, profile)  # single-scale inference, train

    def forward_once(self, x, profile=False):
        y, dt = [], []  # outputs
        for m in self.model:
            if m.f != -1:  # if not from previous layer
                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers

            if profile:
                try:
                    import thop
                    o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2  # FLOPS
                except:
                    o = 0
                t = time_synchronized()
                for _ in range(10):
                    _ = m(x)
                dt.append((time_synchronized() - t) * 100)
                print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type))

            x = m(x)  # run
            y.append(x if m.i in self.save else None)  # save output

        if profile:
            print('%.1fms total' % sum(dt))
        return x

    def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
        m = self.model[-1]  # Detect() module
        for mi, s in zip(m.m, m.stride):  # from
            b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)
            b[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)
            b[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum())  # cls
            mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)

    def _print_biases(self):
        m = self.model[-1]  # Detect() module
        for mi in m.m:  # from
            b = mi.bias.detach().view(m.na, -1).T  # conv.bias(255) to (3,85)
            print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))

    # def _print_weights(self):
    #     for m in self.model.modules():
    #         if type(m) is Bottleneck:
    #             print('%10.3g' % (m.w.detach().sigmoid() * 2))  # shortcut weights

    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
        print('Fusing layers... ')
        for m in self.model.modules():
            if type(m) is Conv:
                m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.fuseforward  # update forward
        self.info()
        return self

    def info(self, verbose=False):  # print model information
        model_info(self, verbose)


def parse_model(d, ch):  # model_dict, input_channels(3)
    logger.info('\n%3s%18s%3s%10s  %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
    anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)

    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
        m = eval(m) if isinstance(m, str) else m  # eval strings
        for j, a in enumerate(args):
            try:
                args[j] = eval(a) if isinstance(a, str) else a  # eval strings
            except:
                pass

        n = max(round(n * gd), 1) if n > 1 else n  # depth gain
        if m in [nn.Conv2d, Conv, Bottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3]:
            c1, c2 = ch[f], args[0]

            # Normal
            # if i > 0 and args[0] != no:  # channel expansion factor
            #     ex = 1.75  # exponential (default 2.0)
            #     e = math.log(c2 / ch[1]) / math.log(2)
            #     c2 = int(ch[1] * ex ** e)
            # if m != Focus:

            c2 = make_divisible(c2 * gw, 8) if c2 != no else c2

            # Experimental
            # if i > 0 and args[0] != no:  # channel expansion factor
            #     ex = 1 + gw  # exponential (default 2.0)
            #     ch1 = 32  # ch[1]
            #     e = math.log(c2 / ch1) / math.log(2)  # level 1-n
            #     c2 = int(ch1 * ex ** e)
            # if m != Focus:
            #     c2 = make_divisible(c2, 8) if c2 != no else c2

            args = [c1, c2, *args[1:]]
            if m in [BottleneckCSP, C3]:
                args.insert(2, n)
                n = 1
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
        elif m is Concat:
            c2 = sum([ch[-1 if x == -1 else x + 1] for x in f])
        elif m is Detect:
            args.append([ch[x + 1] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
        else:
            c2 = ch[f]

        m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        np = sum([x.numel() for x in m_.parameters()])  # number params
        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params
        logger.info('%3s%18s%3s%10.0f  %-40s%-30s' % (i, f, n, np, t, args))  # print
        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    opt = parser.parse_args()
    opt.cfg = check_file(opt.cfg)  # check file
    set_logging()
    device = select_device(opt.device)

    # Create model
    model = Model(opt.cfg).to(device)
    model.train()

    # Profile
    # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
    # y = model(img, profile=True)

    # ONNX export
    # model.model[-1].export = True
    # torch.onnx.export(model, img, opt.cfg.replace('.yaml', '.onnx'), verbose=True, opset_version=11)

    # Tensorboard
    # from torch.utils.tensorboard import SummaryWriter
    # tb_writer = SummaryWriter()
    # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/")
    # tb_writer.add_graph(model.model, img)  # add model to tensorboard
    # tb_writer.add_image('test', img[0], dataformats='CWH')  # add model to tensorboard


================================================
FILE: models/yolov5l.yaml
================================================
# parameters
nc: 16  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


================================================
FILE: models/yolov5m.yaml
================================================
# parameters
nc: 16  # number of classes
depth_multiple: 0.67  # model depth multiple
width_multiple: 0.75  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


================================================
FILE: models/yolov5s.yaml
================================================
# parameters
nc: 16  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


================================================
FILE: models/yolov5x.yaml
================================================
# parameters
nc: 16  # number of classes
depth_multiple: 1.33  # model depth multiple
width_multiple: 1.25  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]


================================================
FILE: requirements.txt
================================================
# pip install -r requirements.txt

# base ----------------------------------------
Cython
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.2
pillow
PyYAML>=5.3
scipy>=1.4.1
tensorboard>=2.2
torch>=1.6.0
torchvision>=0.7.0
tqdm>=4.41.0

# coco ----------------------------------------
# pycocotools>=2.0

# export --------------------------------------
# packaging  # for coremltools
# coremltools==4.0b3
# onnx>=1.7.0
# scikit-learn==0.19.2  # for coreml quantization

# extras --------------------------------------
# thop  # FLOPS computation
# seaborn  # plotting


================================================
FILE: sotabench.py
================================================
import argparse
import glob
import json
import os
import shutil
from pathlib import Path

import numpy as np
import torch
import yaml
from tqdm import tqdm

from models.experimental import attempt_load
from utils.datasets import create_dataloader
from utils.general import (
    coco80_to_coco91_class, check_dataset, check_file, check_img_size, compute_loss, non_max_suppression, scale_coords,
    xyxy2xywh, clip_coords, plot_images, xywh2xyxy, box_iou, output_to_target, ap_per_class, set_logging)
from utils.torch_utils import select_device, time_synchronized


from sotabencheval.object_detection import COCOEvaluator
from sotabencheval.utils import is_server

DATA_ROOT = './.data/vision/coco' if is_server() else '../coco'  # sotabench data dir


def test(data,
         weights=None,
         batch_size=16,
         imgsz=640,
         conf_thres=0.001,
         iou_thres=0.6,  # for NMS
         save_json=False,
         single_cls=False,
         augment=False,
         verbose=False,
         model=None,
         dataloader=None,
         save_dir='',
         merge=False,
         save_txt=False):
    # Initialize/load model and set device
    training = model is not None
    if training:  # called by train.py
        device = next(model.parameters()).device  # get model device

    else:  # called directly
        set_logging()
        device = select_device(opt.device, batch_size=batch_size)
        merge, save_txt = opt.merge, opt.save_txt  # use Merge NMS, save *.txt labels
        if save_txt:
            out = Path('inference/output')
            if os.path.exists(out):
                shutil.rmtree(out)  # delete output folder
            os.makedirs(out)  # make new output folder

        # Remove previous
        for f in glob.glob(str(Path(save_dir) / 'test_batch*.jpg')):
            os.remove(f)

        # Load model
        model = attempt_load(weights, map_location=device)  # load FP32 model
        imgsz = check_img_size(imgsz, s=model.stride.max())  # check img_size

        # Multi-GPU disabled, incompatible with .half() https://github.com/ultralytics/yolov5/issues/99
        # if device.type != 'cpu' and torch.cuda.device_count() > 1:
        #     model = nn.DataParallel(model)

    # Half
    half = device.type != 'cpu'  # half precision only supported on CUDA
    if half:
        model.half()

    # Configure
    model.eval()
    with open(data) as f:
        data = yaml.load(f, Loader=yaml.FullLoader)  # model dict
    check_dataset(data)  # check
    nc = 1 if single_cls else int(data['nc'])  # number of classes
    iouv = torch.linspace(0.5, 0.95, 10).to(device)  # iou vector for mAP@0.5:0.95
    niou = iouv.numel()

    # Dataloader
    if not training:
        img = torch.zeros((1, 3, imgsz, imgsz), device=device)  # init img
        _ = model(img.half() if half else img) if device.type != 'cpu' else None  # run once
        path = data['test'] if opt.task == 'test' else data['val']  # path to val/test images
        dataloader = create_dataloader(path, imgsz, batch_size, model.stride.max(), opt,
                                       hyp=None, augment=False, cache=True, pad=0.5, rect=True)[0]

    seen = 0
    names = model.names if hasattr(model, 'names') else model.module.names
    coco91class = coco80_to_coco91_class()
    s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
    p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0.
    loss = torch.zeros(3, device=device)
    jdict, stats, ap, ap_class = [], [], [], []
    evaluator = COCOEvaluator(root=DATA_ROOT, model_name=opt.weights.replace('.pt', ''))
    for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
        img = img.to(device, non_blocking=True)
        img = img.half() if half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        targets = targets.to(device)
        nb, _, height, width = img.shape  # batch size, channels, height, width
        whwh = torch.Tensor([width, height, width, height]).to(device)

        # Disable gradients
        with torch.no_grad():
            # Run model
            t = time_synchronized()
            inf_out, train_out = model(img, augment=augment)  # inference and training outputs
            t0 += time_synchronized() - t

            # Compute loss
            if training:  # if model has loss hyperparameters
                loss += compute_loss([x.float() for x in train_out], targets, model)[1][:3]  # GIoU, obj, cls

            # Run NMS
            t = time_synchronized()
            output = non_max_suppression(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, merge=merge)
            t1 += time_synchronized() - t

        # Statistics per image
        for si, pred in enumerate(output):
            labels = targets[targets[:, 0] == si, 1:]
            nl = len(labels)
            tcls = labels[:, 0].tolist() if nl else []  # target class
            seen += 1

            if pred is None:
                if nl:
                    stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
                continue

            # Append to text file
            if save_txt:
                gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]]  # normalization gain whwh
                x = pred.clone()
                x[:, :4] = scale_coords(img[si].shape[1:], x[:, :4], shapes[si][0], shapes[si][1])  # to original
                for *xyxy, conf, cls in x:
                    xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                    with open(str(out / Path(paths[si]).stem) + '.txt', 'a') as f:
                        f.write(('%g ' * 5 + '\n') % (cls, *xywh))  # label format

            # Clip boxes to image bounds
            clip_coords(pred, (height, width))

            # Append to pycocotools JSON dictionary
            if save_json:
                # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
                image_id = Path(paths[si]).stem
                box = pred[:, :4].clone()  # xyxy
                scale_coords(img[si].shape[1:], box, shapes[si][0], shapes[si][1])  # to original shape
                box = xyxy2xywh(box)  # xywh
                box[:, :2] -= box[:, 2:] / 2  # xy center to top-left corner
                for p, b in zip(pred.tolist(), box.tolist()):
                    result = {'image_id': int(image_id) if image_id.isnumeric() else image_id,
                              'category_id': coco91class[int(p[5])],
                              'bbox': [round(x, 3) for x in b],
                              'score': round(p[4], 5)}
                    jdict.append(result)

                    #evaluator.add([result])
                    #if evaluator.cache_exists:
                    #    break

            # # Assign all predictions as incorrect
            # correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
            # if nl:
            #     detected = []  # target indices
            #     tcls_tensor = labels[:, 0]
            #
            #     # target boxes
            #     tbox = xywh2xyxy(labels[:, 1:5]) * whwh
            #
            #     # Per target class
            #     for cls in torch.unique(tcls_tensor):
            #         ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1)  # target indices
            #         pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1)  # prediction indices
            #
            #         # Search for detections
            #         if pi.shape[0]:
            #             # Prediction to target ious
            #             ious, i = box_iou(pred[pi, :4], tbox[ti]).max(1)  # best ious, indices
            #
            #             # Append detections
            #             detected_set = set()
            #             for j in (ious > iouv[0]).nonzero(as_tuple=False):
            #                 d = ti[i[j]]  # detected target
            #                 if d.item() not in detected_set:
            #                     detected_set.add(d.item())
            #                     detected.append(d)
            #                     correct[pi[j]] = ious[j] > iouv  # iou_thres is 1xn
            #                     if len(detected) == nl:  # all targets already located in image
            #                         break
            #
            # # Append statistics (correct, conf, pcls, tcls)
            # stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))

        # # Plot images
        # if batch_i < 1:
        #     f = Path(save_dir) / ('test_batch%g_gt.jpg' % batch_i)  # filename
        #     plot_images(img, targets, paths, str(f), names)  # ground truth
        #     f = Path(save_dir) / ('test_batch%g_pred.jpg' % batch_i)
        #     plot_images(img, output_to_target(output, width, height), paths, str(f), names)  # predictions

    evaluator.add(jdict)
    evaluator.save()
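    # All detections for the run are submitted in a single call here; the commented-out
    # per-image evaluator.add([result]) path above would instead stream detections and
    # stop early once evaluator.cache_exists reports a previously cached run.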

    # # Compute statistics
    # stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy
    # if len(stats) and stats[0].any():
    #     p, r, ap, f1, ap_class = ap_per_class(*stats)
    #     p, r, ap50, ap = p[:, 0], r[:, 0], ap[:, 0], ap.mean(1)  # [P, R, AP@0.5, AP@0.5:0.95]
    #     mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
    #     nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
    # else:
    #     nt = torch.zeros(1)
    #
    # # Print results
    # pf = '%20s' + '%12.3g' * 6  # print format
    # print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
    #
    # # Print results per class
    # if verbose and nc > 1 and len(stats):
    #     for i, c in enumerate(ap_class):
    #         print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))
    #
    # # Print speeds
    # t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size)  # tuple
    # if not training:
    #     print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t)
    #
    # # Save JSON
    # if save_json and len(jdict):
    #     f = 'detections_val2017_%s_results.json' % \
    #         (weights.split(os.sep)[-1].replace('.pt', '') if isinstance(weights, str) else '')  # filename
    #     print('\nCOCO mAP with pycocotools... saving %s...' % f)
    #     with open(f, 'w') as file:
    #         json.dump(jdict, file)
    #
    #     try:  # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
    #         from pycocotools.coco import COCO
    #         from pycocotools.cocoeval import COCOeval
    #
    #         imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files]
    #         cocoGt = COCO(glob.glob('../coco/annotations/instances_val*.json')[0])  # initialize COCO ground truth api
    #         cocoDt = cocoGt.loadRes(f)  # initialize COCO pred api
    #         cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
    #         cocoEval.params.imgIds = imgIds  # image IDs to evaluate
    #         cocoEval.evaluate()
    #         cocoEval.accumulate()
    #         cocoEval.summarize()
    #         map, map50 = cocoEval.stats[:2]  # update results (mAP@0.5:0.95, mAP@0.5)
    #     except Exception as e:
    #         print('ERROR: pycocotools unable to run: %s' % e)
    #
    # # Return results
    # model.float()  # for training
    # maps = np.zeros(nc) + map
    # for i, c in enumerate(ap_class):
    #     maps[c] = ap[i]
    # return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t


if __name__ == '__main__':
    parser = argparse.ArgumentParser(prog='test.py')
    parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--data', type=str, default='data/coco.yaml', help='*.data path')
    parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch')
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS')
    parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
    parser.add_argument('--task', default='val', help="'val', 'test', 'study'")
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--merge', action='store_true', help='use Merge NMS')
    parser.add_argument('--verbose', action='store_true', help='report mAP by class')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    opt = parser.parse_args()
    opt.save_json |= opt.data.endswith('coco.yaml')
    opt.data = check_file(opt.data)  # check file
    print(opt)

    if opt.task in ['val', 'test']:  # run normally
        test(opt.data,
             opt.weights,
             opt.batch_size,
             opt.img_size,
             opt.conf_thres,
             opt.iou_thres,
             opt.save_json,
             opt.single_cls,
             opt.augment,
             opt.verbose)

    elif opt.task == 'study':  # run over a range of settings and save/plot
        for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']:
            f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem)  # filename to save to
            x = list(range(320, 800, 64))  # x axis
            y = []  # y axis
            for i in x:  # img-size
                print('\nRunning %s point %s...' % (f, i))
                r, _, t = test(opt.data, weights, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json)
                y.append(r + t)  # results and times
            np.savetxt(f, y, fmt='%10.4g')  # save
        os.system('zip -r study.zip study_*.txt')
        # utils.general.plot_study_txt(f, x)  # plot

================================================
FILE: test.py
================================================
import argparse
import glob
import json
import os
import shutil
from pathlib import Path

import numpy as np
import torch
import yaml
from tqdm import tqdm

from models.experimental import attempt_load
from utils.datasets import create_dataloader
from utils.general import (
    coco80_to_coco91_class, check_dataset, check_file, check_img_size, compute_loss, non_max_suppression, scale_coords,
    xyxy2xywh, clip_coords, plot_images, xywh2xyxy, box_iou, output_to_target, ap_per_class, set_logging)
from utils.torch_utils import select_device, time_synchronized


def test(data,
         weights=None,
         batch_size=16,
         imgsz=640,
         conf_thres=0.001,
         iou_thres=0.6,  # for NMS
         save_json=False,
         single_cls=False,
         augment=False,
         verbose=False,
         model=None,
         dataloader=None,
         save_dir='',
         merge=False,
         save_txt=False):
    # Initialize/load model and set device
    training = model is not None
    if training:  # called by train.py
        device = next(model.parameters()).device  # get model device

    else:  # called directly
        set_logging()
        device = select_device(opt.device, batch_size=batch_size)
        merge, save_txt = opt.merge, opt.save_txt  # use Merge NMS, save *.txt labels
        if save_txt:
            out = Path('inference/output')
            if os.path.exists(out):
                shutil.rmtree(out)  # delete output folder
            os.makedirs(out)  # make new output folder

        # Remove previous
        for f in glob.glob(str(Path(save_dir) / 'test_batch*.jpg')):
            os.remove(f)

        # Load model
        model = attempt_load(weights, map_location=device)  # load FP32 model
        imgsz = check_img_size(imgsz, s=model.stride.max())  # check img_size

        # Multi-GPU disabled, incompatible with .half() https://github.com/ultralytics/yolov5/issues/99
        # if device.type != 'cpu' and torch.cuda.device_count() > 1:
        #     model = nn.DataParallel(model)

    # Half
    half = device.type != 'cpu'  # half precision only supported on CUDA
    if half:
        model.half()

    # Configure
    model.eval()
    with open(data) as f:
        data = yaml.load(f, Loader=yaml.FullLoader)  # model dict
    check_dataset(data)  # check
    nc = 1 if single_cls else int(data['nc'])  # number of classes
    iouv = torch.linspace(0.5, 0.95, 10).to(device)  # iou vector for mAP@0.5:0.95
    niou = iouv.numel()
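    # iouv holds the ten COCO thresholds tensor([0.50, 0.55, ..., 0.95]) and niou == 10,
    # so each prediction is later scored against all ten IoU thresholds at once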

    # Dataloader
    if not training:
        img = torch.zeros((1, 3, imgsz, imgsz), device=device)  # init img
        _ = model(img.half() if half else img) if device.type != 'cpu' else None  # run once
        path = data['test'] if opt.task == 'test' else data['val']  # path to val/test images
        dataloader = create_dataloader(path, imgsz, batch_size, model.stride.max(), opt,
                                       hyp=None, augment=False, cache=False, pad=0.5, rect=True)[0]

    seen = 0
    names = model.names if hasattr(model, 'names') else model.module.names
    coco91class = coco80_to_coco91_class()
    s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
    p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0.
    loss = torch.zeros(3, device=device)
    jdict, stats, ap, ap_class = [], [], [], []
    for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
        img = img.to(device, non_blocking=True)
        img = img.half() if half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        targets = targets.to(device)
        nb, _, height, width = img.shape  # batch size, channels, height, width
        whwh = torch.Tensor([width, height, width, height]).to(device)

        # Disable gradients
        with torch.no_grad():
            # Run model
            t = time_synchronized()
            inf_out, train_out = model(img, augment=augment)  # inference and training outputs
            t0 += time_synchronized() - t

            # Compute loss
            if training:  # if model has loss hyperparameters
                loss += compute_loss([x.float() for x in train_out], targets, model)[1][:3]  # GIoU, obj, cls

            # Run NMS
            t = time_synchronized()
            output = non_max_suppression(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, merge=merge)
            t1 += time_synchronized() - t

        # Statistics per image
        for si, pred in enumerate(output):
            labels = targets[targets[:, 0] == si, 1:]
            nl = len(labels)
            tcls = labels[:, 0].tolist() if nl else []  # target class
            seen += 1

            if pred is None:
                if nl:
                    stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
                continue

            # Append to text file
            if save_txt:
                gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]]  # normalization gain whwh
                x = pred.clone()
                x[:, :4] = scale_coords(img[si].shape[1:], x[:, :4], shapes[si][0], shapes[si][1])  # to original
                for *xyxy, conf, cls in x:
                    xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                    with open(str(out / Path(paths[si]).stem) + '.txt', 'a') as f:
                        f.write(('%g ' * 5 + '\n') % (cls, *xywh))  # label format

            # Clip boxes to image bounds
            clip_coords(pred, (height, width))

            # Append to pycocotools JSON dictionary
            if save_json:
                # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
                image_id = Path(paths[si]).stem
                box = pred[:, :4].clone()  # xyxy
                scale_coords(img[si].shape[1:], box, shapes[si][0], shapes[si][1])  # to original shape
                box = xyxy2xywh(box)  # xywh
                box[:, :2] -= box[:, 2:] / 2  # xy center to top-left corner
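                # COCO boxes are [x_min, y_min, w, h]; e.g. xyxy (10, 20, 50, 80) becomes
                # center-xywh (30, 50, 40, 60) and then top-left xywh (10, 20, 40, 60)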
                for p, b in zip(pred.tolist(), box.tolist()):
                    jdict.append({'image_id': int(image_id) if image_id.isnumeric() else image_id,
                                  'category_id': coco91class[int(p[5])],
                                  'bbox': [round(x, 3) for x in b],
                                  'score': round(p[4], 5)})

            # Assign all predictions as incorrect
            correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
            if nl:
                detected = []  # target indices
                tcls_tensor = labels[:, 0]

                # target boxes
                tbox = xywh2xyxy(labels[:, 1:5]) * whwh

                # Per target class
                for cls in torch.unique(tcls_tensor):
                    ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1)  # target indices
                    pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1)  # prediction indices

                    # Search for detections
                    if pi.shape[0]:
                        # Prediction to target ious
                        ious, i = box_iou(pred[pi, :4], tbox[ti]).max(1)  # best ious, indices

                        # Append detections
                        detected_set = set()
                        for j in (ious > iouv[0]).nonzero(as_tuple=False):
                            d = ti[i[j]]  # detected target
                            if d.item() not in detected_set:
                                detected_set.add(d.item())
                                detected.append(d)
                                correct[pi[j]] = ious[j] > iouv  # iou_thres is 1xn
                                if len(detected) == nl:  # all targets already located in image
                                    break

            # Append statistics (correct, conf, pcls, tcls)
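            # Each row of 'correct' is a length-niou boolean vector, True at every IoU
            # threshold its matched target clears, e.g. a best IoU of 0.72 gives True
            # for thresholds 0.50-0.70 and False for 0.75-0.95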
            stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))

        # Plot images
        if batch_i < 1:
            f = Path(save_dir) / ('test_batch%g_gt.jpg' % batch_i)  # filename
            plot_images(img, targets, paths, str(f), names)  # ground truth
            f = Path(save_dir) / ('test_batch%g_pred.jpg' % batch_i)
            plot_images(img, output_to_target(output, width, height), paths, str(f), names)  # predictions

    # Compute statistics
    stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy
    if len(stats) and stats[0].any():
        p, r, ap, f1, ap_class = ap_per_class(*stats)
        p, r, ap50, ap = p[:, 0], r[:, 0], ap[:, 0], ap.mean(1)  # [P, R, AP@0.5, AP@0.5:0.95]
        mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
        nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
    else:
        nt = torch.zeros(1)

    # Print results
    pf = '%20s' + '%12.3g' * 6  # print format
    print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))

    # Print results per class
    if verbose and nc > 1 and len(stats):
        for i, c in enumerate(ap_class):
            print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))

    # Print speeds
    t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size)  # tuple
    if not training:
        print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t)

    # Save JSON
    if save_json and len(jdict):
        f = 'detections_val2017_%s_results.json' % \
            (weights.split(os.sep)[-1].replace('.pt', '') if isinstance(weights, str) else '')  # filename
        print('\nCOCO mAP with pycocotools... saving %s...' % f)
        with open(f, 'w') as file:
            json.dump(jdict, file)

        try:  # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
            from pycocotools.coco import COCO
            from pycocotools.cocoeval import COCOeval

            imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files]
            cocoGt = COCO(glob.glob('../coco/annotations/instances_val*.json')[0])  # initialize COCO ground truth api
            cocoDt = cocoGt.loadRes(f)  # initialize COCO pred api
            cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
            cocoEval.params.imgIds = imgIds  # image IDs to evaluate
            cocoEval.evaluate()
            cocoEval.accumulate()
            cocoEval.summarize()
            map, map50 = cocoEval.stats[:2]  # update results (mAP@0.5:0.95, mAP@0.5)
        except Exception as e:
            print('ERROR: pycocotools unable to run: %s' % e)

    # Return results
    model.float()  # for training
    maps = np.zeros(nc) + map
    for i, c in enumerate(ap_class):
        maps[c] = ap[i]
    return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t


if __name__ == '__main__':
    parser = argparse.ArgumentParser(prog='test.py')
    parser.add_argument('--weights', nargs='+', type=str, default='weights/yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--data', type=str, default='data/DOTA.yaml', help='*.data path')
    parser.add_argument('--batch-size', type=int, default=16, help='size of each image batch')
    parser.add_argument('--img-size', type=int, default=1024, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS')
    parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
    parser.add_argument('--task', default='val', help="'val', 'test', 'study'")
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--merge', action='store_true', help='use Merge NMS')
    parser.add_argument('--verbose', action='store_true', help='report mAP by class')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    opt = parser.parse_args()
    opt.save_json |= opt.data.endswith('DOTA.yaml')
    opt.data = check_file(opt.data)  # check file
    print(opt)

    if opt.task in ['val', 'test']:  # run normally
        test(opt.data,
             opt.weights,
             opt.batch_size,
             opt.img_size,
             opt.conf_thres,
             opt.iou_thres,
             opt.save_json,
             opt.single_cls,
             opt.augment,
             opt.verbose)

    elif opt.task == 'study':  # run over a range of settings and save/plot
        for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']:
            f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem)  # filename to save to
            x = list(range(320, 800, 64))  # x axis
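            # i.e. image sizes [320, 384, 448, 512, 576, 640, 704, 768]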
            y = []  # y axis
            for i in x:  # img-size
                print('\nRunning %s point %s...' % (f, i))
                r, _, t = test(opt.data, weights, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json)
                y.append(r + t)  # results and times
            np.savetxt(f, y, fmt='%10.4g')  # save
        os.system('zip -r study.zip study_*.txt')
        # utils.general.plot_study_txt(f, x)  # plot


================================================
FILE: train.py
================================================
import argparse
import glob
import logging
import math
import os
import random
import shutil
import time
from pathlib import Path

import numpy as np
import torch.distributed as dist
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
import torch.utils.data
import yaml
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm

import test  # import test.py to get mAP after each epoch
from models.yolo import Model
from utils.datasets import create_dataloader
from utils.general import (
    torch_distributed_zero_first, labels_to_class_weights, plot_labels, check_anchors, labels_to_image_weights,
    compute_loss, plot_images, fitness, strip_optimizer, plot_results, get_latest_run, check_dataset, check_file,
    check_git_status, check_img_size, increment_dir, print_mutation, plot_evolution, set_logging)
from utils.google_utils import attempt_download
from utils.torch_utils import init_seeds, ModelEMA, select_device, intersect_dicts

logger = logging.getLogger(__name__)


def train(hyp, opt, device, tb_writer=None):
    logger.info(f'Hyperparameters {hyp}')
    log_dir = Path(tb_writer.log_dir) if tb_writer else Path(opt.logdir) / 'evolve'  # logging directory
    wdir = log_dir / 'weights'  # weights directory
    os.makedirs(wdir, exist_ok=True)
    last = wdir / 'last.pt'
    best = wdir / 'best.pt'
    results_file = str(log_dir / 'results.txt')
    epochs, batch_size, total_batch_size, weights, rank = \
        opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank

    # Save run settings
    with open(log_dir / 'hyp.yaml', 'w') as f:
        yaml.dump(hyp, f, sort_keys=False)
    with open(log_dir / 'opt.yaml', 'w') as f:
        yaml.dump(vars(opt), f, sort_keys=False)

    # Configure
    cuda = device.type != 'cpu'
    init_seeds(2 + rank)
    with open(opt.data) as f:
        data_dict = yaml.load(f, Loader=yaml.FullLoader)  # data dict
    with torch_distributed_zero_first(rank):
        check_dataset(data_dict)  # check
    train_path = data_dict['train']
    test_path = data_dict['val']
    nc, names = (1, ['item']) if opt.single_cls else (int(data_dict['nc']), data_dict['names'])  # number classes, names
    assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data)  # check

    # Model
    pretrained = weights.endswith('.pt')
    if pretrained:
        with torch_distributed_zero_first(rank):
            attempt_download(weights)  # download if not found locally
        ckpt = torch.load(weights, map_location=device)  # load checkpoint
        if 'anchors' in hyp and hyp['anchors']:
            ckpt['model'].yaml['anchors'] = round(hyp['anchors'])  # force autoanchor
        model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc).to(device)  # create
        exclude = ['anchor'] if opt.cfg else []  # exclude keys
        state_dict = ckpt['model'].float().state_dict()  # to FP32
        state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude)  # intersect
        model.load_state_dict(state_dict, strict=False)  # load
        logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights))  # report
    else:
        model = Model(opt.cfg, ch=3, nc=nc).to(device)  # create

    # Freeze
    freeze = ['', ]  # parameter names to freeze (full or partial)
    if any(freeze):
        for k, v in model.named_parameters():
            if any(x in k for x in freeze):
                print('freezing %s' % k)
                v.requires_grad = False

    # Optimizer
    nbs = 64  # nominal batch size
    accumulate = max(round(nbs / total_batch_size), 1)  # accumulate loss before optimizing
    hyp['weight_decay'] *= total_batch_size * accumulate / nbs  # scale weight_decay
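    # e.g. total_batch_size=16 gives accumulate = 64 / 16 = 4: gradients from 4 batches
    # are summed per optimizer step (effective batch 64), and weight_decay is rescaled
    # by 16 * 4 / 64 = 1.0, keeping regularization calibrated to the nominal batch size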

    pg0, pg1, pg2 = [], [], []  # optimizer parameter groups
    for k, v in model.named_parameters():
        v.requires_grad = True
        if '.bias' in k:
            pg2.append(v)  # biases
        elif '.weight' in k and '.bn' not in k:
            pg1.append(v)  # apply weight decay
        else:
            pg0.append(v)  # all else

    if opt.adam:
        optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)

    optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']})  # add pg1 with weight_decay
    optimizer.add_param_group({'params': pg2})  # add pg2 (biases)
    logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
    del pg0, pg1, pg2

    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
    lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - hyp['lrf']) + hyp['lrf']  # cosine
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
    # plot_lr_scheduler(optimizer, scheduler, epochs)
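    # lf(0) = 1.0, so training starts at lr0, and lf(epochs) = hyp['lrf'], so it ends at
    # lr0 * lrf, following a single half-cosine over the whole schedule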

    # Resume
    start_epoch, best_fitness = 0, 0.0
    if pretrained:
        # Optimizer
        if ckpt['optimizer'] is not None:
            optimizer.load_state_dict(ckpt['optimizer'])
            best_fitness = ckpt['best_fitness']

        # Results
        if ckpt.get('training_results') is not None:
            with open(results_file, 'w') as file:
                file.write(ckpt['training_results'])  # write results.txt

        # Epochs
        start_epoch = ckpt['epoch'] + 1
        if opt.resume:
            assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
            shutil.copytree(wdir, wdir.parent / f'weights_backup_epoch{start_epoch - 1}')  # save previous weights
        if epochs < start_epoch:
            logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
                        (weights, ckpt['epoch'], epochs))
            epochs += ckpt['epoch']  # finetune additional epochs

        del ckpt, state_dict

    # Image sizes
    gs = int(max(model.stride))  # grid size (max stride)
    imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size]  # verify imgsz are gs-multiples

    # DP mode
    if cuda and rank == -1 and torch.cuda.device_count() > 1:
        model = torch.nn.DataParallel(model)

    # SyncBatchNorm
    if opt.sync_bn and cuda and rank != -1:
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
        logger.info('Using SyncBatchNorm()')

    # Exponential moving average
    ema = ModelEMA(model) if rank in [-1, 0] else None

    # DDP mode
    if cuda and rank != -1:
        model = DDP(model, device_ids=[opt.local_rank], output_device=(opt.local_rank))

    # Trainloader
    dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
                                            hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
                                            world_size=opt.world_size, workers=opt.workers)
    mlc = np.concatenate(dataset.labels, 0)[:, 0].max()  # max label class
    nb = len(dataloader)  # number of batches
    assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)

    # Testloader
    if rank in [-1, 0]:
        ema.updates = start_epoch * nb // accumulate  # set EMA updates
        testloader = create_dataloader(test_path, imgsz_test, total_batch_size, gs, opt,
                                       hyp=hyp, augment=False, cache=opt.cache_images, rect=True, rank=-1,
                                       world_size=opt.world_size, workers=opt.workers)[0]  # only runs on process 0

    # Model parameters
    hyp['cls'] *= nc / 80.  # scale coco-tuned hyp['cls'] to current dataset
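    # e.g. a 20-class dataset scales the COCO-tuned cls gain by 20 / 80 = 0.25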
    model.nc = nc  # attach number of classes to model
    model.hyp = hyp  # attach hyperparameters to model
    model.gr = 1.0  # giou loss ratio (obj_loss = 1.0 or giou)
    model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device)  # attach class weights
    model.names = names

    # Classes and Anchors
    if rank in [-1, 0] and not opt.resume:
        labels = np.concatenate(dataset.labels, 0)
        c = torch.tensor(labels[:, 0])  # classes
        # cf = torch.bincount(c.long(), minlength=nc) + 1.  # frequency
        # model._initialize_biases(cf.to(device))
        plot_labels(labels, save_dir=log_dir)
        if tb_writer:
            # tb_writer.add_hparams(hyp, {})  # causes duplicate https://github.com/ultralytics/yolov5/pull/384
            tb_writer.add_histogram('classes', c, 0)

        # Anchors
        if not opt.noautoanchor:
            check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)

    # Start training
    t0 = time.time()
    nw = max(3 * nb, 1e3)  # number of warmup iterations, max(3 epochs, 1k iterations)
    # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
    maps = np.zeros(nc)  # mAP per class
    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@0.5, mAP@0.5:0.95, val GIoU, val Objectness, val Classification
    scheduler.last_epoch = start_epoch - 1  # do not move
    scaler = amp.GradScaler(enabled=cuda)
    logger.info('Image sizes %g train, %g test' % (imgsz, imgsz_test))
    logger.info('Using %g dataloader workers' % dataloader.num_workers)
    logger.info('Starting training for %g epochs...' % epochs)
    # torch.autograd.set_detect_anomaly(True)
    for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
        model.train()

        # Update image weights (optional)
        if opt.image_weights:
            # Generate indices
            if rank in [-1, 0]:
                cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2  # class weights
                iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights
                dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx
            # Broadcast if DDP
            if rank != -1:
                indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
                dist.broadcast(indices, 0)
                if rank != 0:
                    dataset.indices = indices.cpu().numpy()

        # Update mosaic border
        # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
        # dataset.mosaic_border = [b - imgsz, -b]  # height, width borders

        mloss = torch.zeros(4, device=device)  # mean losses
        if rank != -1:
            dataloader.sampler.set_epoch(epoch)
        pbar = enumerate(dataloader)
        logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'GIoU', 'obj', 'cls', 'total', 'targets', 'img_size'))
        if rank in [-1, 0]:
            pbar = tqdm(pbar, total=nb)  # progress bar
        optimizer.zero_grad()
        for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
            ni = i + nb * epoch  # number integrated batches (since train start)
            imgs = imgs.to(device, non_blocking=True).float() / 255.0  # uint8 to float32, 0-255 to 0.0-1.0

            # Warmup
            if ni <= nw:
                xi = [0, nw]  # x interp
                # model.gr = np.interp(ni, xi, [0.0, 1.0])  # giou loss ratio (obj_loss = 1.0 or giou)
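                # ramp gradient accumulation linearly from 1 to nbs / total_batch_size
                # (e.g. 1 -> 4 when total_batch_size=16) so early updates step every batch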
                accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
                for j, x in enumerate(optimizer.param_groups):
                    # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
                    x['lr'] = np.interp(ni, xi, [0.1 if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
                    if 'momentum' in x:
                        x['momentum'] = np.interp(ni, xi, [0.9, hyp['momentum']])

            # Multi-scale
            if opt.multi_scale:
                sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs  # size
                sf = sz / max(imgs.shape[2:])  # scale factor
                if sf != 1:
                    ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]]  # new shape (stretched to gs-multiple)
                    imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)

            # Forward
            with amp.autocast(enabled=cuda):
                pred = model(imgs)  # forward
                loss, loss_items = compute_loss(pred, targets.to(device), model)  # loss scaled by batch_size
                if rank != -1:
                    loss *= opt.world_size  # gradient averaged between devices in DDP mode

            # Backward
            scaler.scale(loss).backward()

            # Optimize
            if ni % accumulate == 0:
                scaler.step(optimizer)  # optimizer.step
                scaler.update()
                optimizer.zero_grad()
                if ema:
                    ema.update(model)

            # Print
            if rank in [-1, 0]:
                mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses
                mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0)  # (GB)
                s = ('%10s' * 2 + '%10.4g' * 6) % (
                    '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
                pbar.set_description(s)

                # Plot
                if ni < 3:
                    f = str(log_dir / ('train_batch%g.jpg' % ni))  # filename
                    result = plot_images(images=imgs, targets=targets, paths=paths, fname=f)
                    if tb_writer and result is not None:
                        tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
                        # tb_writer.add_graph(model, imgs)  # add model to tensorboard

            # end batch ------------------------------------------------------------------------------------------------

        # Scheduler
        lr = [x['lr'] for x in optimizer.param_groups]  # for tensorboard
        scheduler.step()

        # DDP process 0 or single-GPU
        if rank in [-1, 0]:
            # mAP
            if ema:
                ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride'])
            final_epoch = epoch + 1 == epochs
            if not opt.notest or final_epoch:  # Calculate mAP
                if final_epoch:  # replot predictions
                    [os.remove(x) for x in glob.glob(str(log_dir / 'test_batch*_pred.jpg')) if os.path.exists(x)]
                results, maps, times = test.test(opt.data,
                                                 batch_size=total_batch_size,
                                                 imgsz=imgsz_test,
                                                 model=ema.ema,
                                                 single_cls=opt.single_cls,
                                                 dataloader=testloader,
                                                 save_dir=log_dir)

            # Write
            with open(results_file, 'a') as f:
                f.write(s + '%10.4g' * 7 % results + '\n')  # P, R, mAP@0.5, mAP@0.5:0.95, val_losses=(GIoU, obj, cls)
            if len(opt.name) and opt.bucket:
                os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))

            # Tensorboard
            if tb_writer:
                tags = ['train/giou_loss', 'train/obj_loss', 'train/cls_loss',  # train loss
                        'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
                        'val/giou_loss', 'val/obj_loss', 'val/cls_loss',  # val loss
                        'x/lr0', 'x/lr1', 'x/lr2']  # params
                for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
                    tb_writer.add_scalar(tag, x, epoch)

            # Update best mAP
            fi = fitness(np.array(results).reshape(1, -1))  # fitness_i = weighted combination of [P, R, mAP@0.5, mAP@0.5:0.95]
            if fi > best_fitness:
                best_fitness = fi

            # Save model
            save = (not opt.nosave) or (final_epoch and not opt.evolve)
            if save:
                with open(results_file, 'r') as f:  # create checkpoint
                    ckpt = {'epoch': epoch,
                            'best_fitness': best_fitness,
                            'training_results': f.read(),
                            'model': ema.ema,
                            'optimizer': None if final_epoch else optimizer.state_dict()}

                # Save last, best and delete
                torch.save(ckpt, last)
                if best_fitness == fi:
                    torch.save(ckpt, best)
                del ckpt
        # end epoch ----------------------------------------------------------------------------------------------------
    # end training

    if rank in [-1, 0]:
        # Strip optimizers
        n = opt.name if opt.name.isnumeric() else ''
        fresults, flast, fbest = log_dir / f'results{n}.txt', wdir / f'last{n}.pt', wdir / f'best{n}.pt'
        for f1, f2 in zip([wdir / 'last.pt', wdir / 'best.pt', results_file], [flast, fbest, fresults]):
            if os.path.exists(f1):
                os.rename(f1, f2)  # rename
                if str(f2).endswith('.pt'):  # is *.pt
                    strip_optimizer(f2)  # strip optimizer
                    os.system('gsutil cp %s gs://%s/weights' % (f2, opt.bucket)) if opt.bucket else None  # upload
        # Finish
        if not opt.evolve:
            plot_results(save_dir=log_dir)  # save as results.png
        logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))

    dist.destroy_process_group() if rank not in [-1, 0] else None
    torch.cuda.empty_cache()
    return results


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='weights/yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='models/yolov5s.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default='data/DOTA.yaml', help='data.yaml path')
    parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[1024, 1024], help='[train, test] image sizes')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--notest', action='store_true', help='only test final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--name', default='', help='renames results.txt to results_name.txt if supplied')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    parser.add_argument('--logdir', type=str, default='runs/', help='logging directory')
    parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
    opt = parser.parse_args()

    # Set DDP variables
    opt.total_batch_size = opt.batch_size
    opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
    opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
    set_logging(opt.global_rank)
    if opt.global_rank in [-1, 0]:
        check_git_status()

    # Resume
    if opt.resume:  # resume an interrupted run
        ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run()  # specified or most recent path
        log_dir = Path(ckpt).parent.parent  # runs/exp0
        assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
        with open(log_dir / 'opt.yaml') as f:
            opt = argparse.Namespace(**yaml.load(f, Loader=yaml.FullLoader))  # replace
        opt.cfg, opt.weights, opt.resume = '', ckpt, True
        logger.info('Resuming training from %s' % ckpt)

    else:
        # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
        opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp)  # check files
        assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
        opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size)))  # extend to 2 sizes (train, test)
        log_dir = increment_dir(Path(opt.logdir) / 'exp', opt.name)  # runs/exp1

    device = select_device(opt.device, batch_size=opt.batch_size)

    # DDP mode
    if opt.local_rank != -1:
        assert torch.cuda.device_count() > opt.local_rank
        torch.cuda.set_device(opt.local_rank)
        device = torch.device('cuda', opt.local_rank)
        dist.init_process_group(backend='nccl', init_method='env://')  # distributed backend
        assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
        opt.batch_size = opt.total_batch_size // opt.world_size
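        # e.g. WORLD_SIZE=2 with --batch-size 16: each process trains with batch 8,
        # while total_batch_size stays 16 for loss scaling and the test dataloader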

    logger.info(opt)
    with open(opt.hyp) as f:
        hyp = yaml.load(f, Loader=yaml.FullLoader)  # load hyps

    # Train
    if not opt.evolve:
        tb_writer = None
        if opt.global_rank in [-1, 0]:
            logger.info('Start Tensorboard with "tensorboard --logdir %s", view at http://localhost:6006/' % opt.logdir)
            tb_writer = SummaryWriter(log_dir=log_dir)  # runs/exp0

        train(hyp, opt, device, tb_writer)

    # Evolve hyperparameters (optional)
    else:
        # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
        meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
                'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
                'momentum': (0.1, 0.6, 0.98),  # SGD momentum/Adam beta1
                'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
                'giou': (1, 0.02, 0.2),  # GIoU loss gain
                'cls': (1, 0.2, 4.0),  # cls loss gain
                'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight
                'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)
                'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight
                'iou_t': (0, 0.1, 0.7),  # IoU training threshold
                'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold
                'anchors': (1, 2.0, 10.0),  # anchors per output grid (0 to ignore)
                'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)
                'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)
                'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
                'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)
                'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)
                'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)
                'scale': (1, 0.0, 0.9),  # image scale (+/- gain)
                'shear': (1, 0.0, 10.0),  # image shear (+/- deg)
                'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
                'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)
                'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)
                'mixup': (1, 0.0, 1.0)}  # image mixup (probability)
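        # Each meta value is (gain, lower, upper): a gain of 0 freezes the key during
        # mutation (e.g. 'iou_t', 'fl_gamma', 'perspective' and 'fliplr' above), and
        # the limits clamp every mutated value below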

        assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
        opt.notest, opt.nosave = True, True  # only test/save final epoch
        # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
        yaml_file = Path('runs/evolve/hyp_evolved.yaml')  # save best result here
        if opt.bucket:
            os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket)  # download evolve.txt if exists

        for _ in range(1):  # generations to evolve
            if os.path.exists('evolve.txt'):  # if evolve.txt exists: select best hyps and mutate
                # Select parent(s)
                parent = 'single'  # parent selection method: 'single' or 'weighted'
                x = np.loadtxt('evolve.txt', ndmin=2)
                n = min(5, len(x))  # number of previous results to consider
                x = x[np.argsort(-fitness(x))][:n]  # top n mutations
                w = fitness(x) - fitness(x).min()  # weights
                if parent == 'single' or len(x) == 1:
                    # x = x[random.randint(0, n - 1)]  # random selection
                    x = x[random.choices(range(n), weights=w)[0]]  # weighted selection
                elif parent == 'weighted':
                    x = (x * w.reshape(n, 1)).sum(0) / w.sum()  # weighted combination

                # Mutate
                mp, s = 0.9, 0.2  # mutation probability, sigma
                npr = np.random
                npr.seed(int(time.time()))
                g = np.array([x[0] for x in meta.values()])  # gains 0-1
                ng = len(meta)
                v = np.ones(ng)
                while all(v == 1):  # mutate until a change occurs (prevent duplicates)
                    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
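                # evolve.txt rows store 7 result fields (P, R, mAP@0.5, mAP@0.5:0.95 and
                # 3 val losses) before the hyp values, hence the i + 7 column offset below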
                for i, k in enumerate(hyp.keys()):  # plt.hist(v.ravel(), 300)
                    hyp[k] = float(x[i + 7] * v[i])  # mutate

            # Constrain to limits
            for k, v in meta.items():
                hyp[k] = max(hyp[k], v[1])  # lower limit
                hyp[k] = min(hyp[k], v[2])  # upper limit
                hyp[k] = round(hyp[k], 5)  # significant digits

            # Train mutation
            results = train(hyp.copy(), opt, device)

            # Write mutation results
            print_mutation(hyp.copy(), results, yaml_file, opt.bucket)

        # Plot results
        plot_evolution(yaml_file)
        print('Hyperparameter evolution complete. Best results saved as: %s\nCommand to train a new model with these '
              'hyperparameters: $ python train.py --hyp %s' % (yaml_file, yaml_file))


================================================
FILE: tutorial.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "HvhYZrIZCEyo"
   },
   "source": [
    "<img src=\"https://user-images.githubusercontent.com/26833433/82952157-51b7db00-9f5d-11ea-8f4b-dda1ffecf992.jpg\">\n",
    "\n",
    "This notebook was written by Ultralytics LLC, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). \n",
    "For more information please visit https://github.com/ultralytics/yolov5 and https://www.ultralytics.com."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7mGmQbAO5pQb"
   },
   "source": [
    "# Setup\n",
    "\n",
    "Clone repo, install dependencies, `%cd` into `./yolov5` folder and check GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 53
    },
    "colab_type": "code",
    "id": "wbvMlHd_QwMG",
    "outputId": "669566b2-391f-4596-f290-110e2e177946"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Setup complete. Using torch 1.5.0 CPU\n"
     ]
    }
   ],
   "source": [
    "!git clone https://github.com/ultralytics/yolov5  # clone repo\n",
    "!pip install -qr yolov5/requirements.txt  # install dependencies (ignore errors)\n",
    "%cd yolov5\n",
    "\n",
    "import torch\n",
    "from IPython.display import Image, clear_output  # to display images\n",
    "from utils.google_utils import gdrive_download  # to download models/datasets\n",
    "\n",
    "clear_output()\n",
    "print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "N3qM6T0W53gh"
   },
   "source": [
    "# 1. Inference\n",
    "\n",
    "Run inference with a pretrained checkpoint on contents of `/inference/images` folder. Models are auto-downloaded from [Google Drive](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 488
    },
    "colab_type": "code",
    "id": "zR9ZbuQCH7FX",
    "outputId": "528fcc04-2393-437a-84d2-092becbaefbe"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.4, device='', fourcc='mp4v', half=False, img_size=416, iou_thres=0.5, output='inference/output', save_txt=False, source='./inference/images/', view_img=False, weights='yolov5s.pt')\n",
      "Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)\n",
      "\n",
      "image 1/2 inference/images/bus.jpg: 416x352 3 persons, 1 buss, Done. (0.009s)\n",
      "image 2/2 inference/images/zidane.jpg: 288x416 2 persons, 2 ties, Done. (0.009s)\n",
      "Results saved to /content/yolov5/inference/output\n",
      "Done. (0.100s)\n"
     ]
    },
    {
     "data": {
      "image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCALQBQADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8347F5pkSP5t38P3ttaFjZzR2rzOMjfs+/wDNVi10+5kh877Gqv8AwfP96tOz0+2b99sw0e1drfxV87HY+wjHm94z4bOZ2WZ4dgV9vzN81Tx6a8jHvu+bd/DV+HT51uHd0Up95Pl21bhtfIkH2ncqfN8q/e21NS0dUbU4/ZMf7Oi52OzMu1UVU+an/wBjlW3w7l2t8y/3q3pNPRl2I+1tn/AqZZ280cXk3Nrub+7v+6tefKtLl5onZGm48qMqbQ3k/wBJeb5lb5PMf5l/2aZcaW6tshhyzffZn3ba3biHzI5USFfmX7tQyWc3zTXltuWPb+8jT+LbXJWxVWO534XDxkchrmm/KZt+d3yvurBm0maHLvu2su1G/vV3OsWsMe5xyWTd5bVh3VikkLJ5Pyqu7b/easaNacX7x6nsYyicrJYws3nom1m/vf3qWC3uYW32zr8v95v/AEGtK6s5I9iJuDMu51aq62827502Nt3Jur6zAylKUTlqREj+0wsiI7OzNuRW/wBr+7ViSPy4/wBzud9+1vm+Wq0aurIJtxdf4qtLayeX8nyusu5mb+KvqMPSlKJ58qnvco65uHaNpvlTdt2fJ8y0kjSbER3Vtq7tzJtqbyPtDLDNtx96nTKjR/Ii7t38X3a9D2fKebUkoy5SHyXjnP75l/i/3amSSVm+0v5joqbfv/Ky/wB6i3/fRrv+9911j+6rUsMMuxvJufu/fXZXPKXLE4OaUuaxPBv3b9n+r/hjl3LVqH9zJ/qV2t823/eqtbwpHGkP+qVn+dY/l/4FVuzZLqRI5plV13b12fdX+GvLxHvF04825p2cm1Ucopdvl+V9taVvDcSSK6fd+ZXrN0+GGS637F+V1aXd/d/hq7b75mX51Db9zMr/AC/7Py14WIqSNadHuaVjNLJCsP2pmTfuddvzNU8jO3yQ7X2/e/iaq8IeGNPLRW+bbu2fdq95n2OZXhhV2b5V3V4dap7+h6VOnHqWob792yI6o6orfLVCZJpPnudrBf4v97+KpmuIWmDzTKsrfdXft+7VCS5dpmR5o3/vq392uJSjztQOlx928hzbIZXSFFLs7fMqf6yopmubzY63jIVb7qrU32OGSP8AhRPveXHSyKluy/J975VXf/FWkqnNqLk5fdEntdy/3vl2eZs/76pU3yQyJsYeX8if3lqwsE0iy2zzfuvl/d/7VVr6O6WTf8yfe/d7/u1n71TRSMK0R8d1cxwrvRQv3dzfdWoprp75hNc3cjtHtSLzG+61OaGaS3RJnV1+88bVVkkRlKWtthlf+GspRhKRjH3Y8rKuoXtvHteN8qy7X/vVga9cXisrpcthkVfm/u1pXk00zAu+R/d/utWDq14+5n342/6rav3a78PFRj8JyVqhj6lM/wC8+8f/AB3dXManN82/fjd/CtdBqW+4bM0/Gzc1Yd48Pls/Vm+Xb/FXsUYy5NDxsVLmiYF9avt+07F21QVXmuNmzb/utW9cWbyR56hVqnHp7rMJvJ8xK9CnKMeU82T5hljlWZE3fN9//ZrodI3x7ntn+Rk2srfM1V9N03bGOdu7/wAdrVhs4I5BGiMk0f8ADJ8tEqhrToz+I1NLtUinR9+fLf5F/wDsa7bQZnjwibU2/N+7X5VrjdH/AHKxBE3f367TRZE+x7E2/wB1dv3mqo1PfOj2fuWOu0W4k+ziF5sOzfxfw11ui6uNyu6Mrqu1/Mfb8v8As1wWk3KOuy28xVVvnb+7W/puqQxsU3/eiVmj+9XZGpzmMoyj8R3Wn6kQN8Myh1f/AEfb93/eatXT9am8ve+1vvbmrgrHWd0iXOcFfl3L/F/wGtCHxB5K+d8wSR9qKq/M3/Aa6OYw9+J2q69C3zpZttX5Ub+9/vUybV4IYd+//WbtzL/CtcqutbYf3fmHc+1/mqvcawk3ybJCu/b9/wC9U/DAfunT/wBtusCv0/2d/wDDWbqGuosbO8jEt91tvystYN9q226ldH2xtt8qNX3f8B3VVvtUm2l3TLsnzLu/i/hqJRjI25vslPxRNDdZm85iv3fLb+GuMvJ3dXR/uK23/erW1PVHuomQXLFpJfkZvur/ALNZGqQ/aFb5G+V/3sa1x1I8x0UeaOjOa1SG2ml85Pv/AMO5vlWqtvbupYOmPLf5d3yturcbTkjdt6Mxb/lm38NQXWnpJcM8iSO38Un8K1nKn7p2RqQ5tTPWFJpD5czIn97726mTWVzIHfez+Z/yz/vVZa1eSTZDCqqqNu+fbSLYwzRuXhxufd9/71cNSnI0lUM2SN1CwpMuyT5tv/stJbxurI/nL+8ba0cn92tXybaOSHyYfuxbtrN8v3qq3Eltu+0+T86tt+VK5q1P3tCoVOXWRbtWdcoltv2tu2t8u6uj01na3TZuAVt27+61YNu7s0jzbWlb5U/hrQ0+aGObzo3bzl+X7/y7q+Ox1GXNKTPewtT4ZI7LT2T/AFM03mt8q7v4a0WuvLUI+6H5v9Wvzbv+BVzVnfTeSH/55q25d/3m/wBmp/7UdpI+Nqt8rbWr5DEYeUqp9DRrfDzG5cXySsN9zuVot6qybvu1m3mpRrD5iO0KSRbvlf5aqSal8zbNuPm2/J8q1Uk1QSM73KKrrF8nlr8u6tKOHUZe8dvtOhPeahD5yc7v3X975t1Zs0zrsfo2/wCZW/h/4FS3F4jKkEyMXX5X3fdaqzLBNJ
scrsZNqqv8NexhcPGPuozqVOWHKJe+c0hf7Tv3fL8tVri3DSPD9pUyr/F91d1aEljH/wAvMylG+4yp91aktdPeRc+Tv+f5fk3V9XluH5dTwcdiIx+0YLK6tvfcKry6bN5ezZ+7b/lpG+35q7BfDiNa+XNC37xtq7m27qdY+DXuN0m/hX/1f8NfY4ej7lz5XGYjm+E5C10e/Ece+2+fdtXb81XF8P7bqPztwkVGV9vyrt/2a7ux8KzRyJCkLM6/Nt3/ACtU7eDXkmj811Ty2+f91ub5q1lTjGZwRrcp5wuihpJIPmZGf/v2tQDwrMzHyXbZ93aqV6ovg/y5FT7zL99VT7y0kngvM3nfZmQbWZFWuKpR5vdN6dbl+0eUyeG7mO4Dp0Zf/Hqfp+jzQtLNczZK/wAP92vS28HmaOL/AEXa21n/AOA1m3HhWaxmm32fySIv+1uX/drxsVR+yejh63N7xysmnwxqrwp5rtztV/4f/iqJLRLVVT7HIo2bd27+Kuqj8Nos29BiKRdySN/d/u1UvrN/MhhmtmH/AE0rzJRl9hnbGpLm1Obmt5LfPkoxdvmdqpGzTzks33MrRbvL37WrevtPmkuNk3zLI27958tZd1bJZ3mz94Xk/vN8taxl9kr4vhM9YUt2SFJtq/8AXX5vlqb7PNdTPNM6r5iLsVf4f9qnzW8KM72yKpX+KrDWf7vYJtoXb95vmrS8fi5iPe5iCGSZrdYfObYvy7v7zLUNxcFVaNHaM/Mu3/ZqzInkxhGm+79xf7tZN1I7L9/HzfPu/irejTlUkYyqcseWRDM0Plu8kzfc+6v8VZ0cszN87qPm+fy/m2rVm6Z7iTyfl2xpt8yNdu6qk0nlqXh2hG+4y161GmeZWqSjL3SNpEZfJjhXb/D/ALVIq/ut83zf3fmpkbIrDftC7P4fvbqVVTCPHBtH8MbN/FXV7P7RjGt7xGq3O48Z2/N8vy7qfIszRq6Pj+9u+9VhbXbJs3/MqfP8u75qVbVMt5j/ADfe2rTfvfEbxqe5ykSXj/Y3DzSBv4Kt2zIsa70y+/dtb/0KmW8aW6tcvM21fl3bPutWlHYO1vvmhYf3JF/irel8ISrT5CssYM/7l2Rm/vfLUNxpsysNm4fLtfd92tVdI+UvezbXZP71egfs8/sq/GD9qfxfd+Cfgh4Ti1jULHT/ALddw3GoQ2yxwK6IWLSsoPzOowMnnOMAmujEY3C4DDSxGJqKEIq7lJpJLzb0Rwzqq9keQSaS+1jvZn3fL833ayL6xeS6mTYw2/Ltr7Wm/wCCL37e7lSvwfsCR1J8U2GP/R1Ub7/gih/wUEO37N8I7Bvlw3/FU2A/9rV4seO+Cf8AoZUP/BsP/kjiqUpy+y/uPhvVNJdbXe7NvX5kZa5rVNNLf7f3lr71vP8Agh9/wUPnBRPg7YYYZb/irNP6/wDf6sa9/wCCE3/BR2TdHb/BbTdh6Z8Xad/8epx474K+1mVD/wAGw/8AkjhqYfES+w/uPz51S1ubeRkdPlX+KqXkzSD+Jt3zOq1966p/wQK/4KY3EOy2+CGlZxj/AJHDTv8A4/WJJ/wb5/8ABUU/IPgdpePVfGem/wDx+qlx5wTy3/tKh/4Nh/8AJGX1bEbcj+4+Jo4fLO/e3+7Vy1tppLjY8zMrf3q+0Yv+Dff/AIKgqwL/AAN0047/APCaaZ/8fq/af8ECP+CnMY3zfAnSd+Mf8jhpv/x+uOrx5wY46ZjQ/wDBsP8A5I6Y4at/K/uPja3s/JZX/wDHVatO1t3mVUf5Wb7nzba+xrf/AIIKf8FMI2Un4KaaoA5C+MNO6/8Af+r8X/BCD/gpHGrn/hS2m7j93/irdO/+P1yS454N/wChhR/8GQ/zOuGHm90fHccMkbbEfdWhaxO3753Zd38O77tfXkH/AAQr/wCCkTuRc/BfT8KuFP8Awl+nHP8A5Hqe3/4IZf8ABR+Mbm+DGmhgGC/8Vbp/T/v9XNLjjhD7OYUf/BkP8zrp0HE+R1hfa02G/wBrdR5m6RH3so+XZ5lfYS/8EO/+CjAQhPg1p6tsblvFun43f9/qD/wQ5/4KMuFEnwe007Vx/wAjVp//AMerJca8IOWuY0f/AAZD/M7owS2PkaFbmaQiHa23+GSoZFmVdgh27U+9/ErV9fN/wQ+/4KM8BPgtYDHQ/wDCX6f/APHqmsf+CE//AAUx1u4Fjo/wFtbp8ZMUHimwYn8BNXRT404TqzUIY6k29kqkW3+JrBQ5rtnxpNHC3yfMWX5kZvvbv71NmkmjZX2K7qm379fZGvf8EEP+CnOgHZqXwCitHlOQtx4osE/LM3NZrf8ABC//AIKTb/MT4J6eN3Uf8Jfp3H/keqnxnwrRk6dXHUoyW6dSKa+TY3Z7NHyO0zsQjT7FX+Fant5kbCfdf+L5q+rpP+CFv/BSrIZPgnpuduD/AMVbpvH/AJHrU0X/AIIAf8FWNXT7fpX7N0dxGDhZovE9gyt+Imrpw3F3C2LfLRx1KUuyqRb+5Mr2kYu7Z8mQ6hNZzFHdXZkx/e+WtK31B/LDu7FvvLX07e/8EIf+CoGhXxi1j4FWttN/zyuPFOnocevM1WtD/wCCEP8AwVC1ySS10X4BW94yDfst/FVg/lj1OJuK3pcYcKqv7JY6lz7cvtI3v2te5zyqO976HyzJqSQ/6l1VmbczVA2rPMrec+Tv+ZY3r6/b/g38/wCCtTBQ37M5AC8sPEFjux/d/wBdXF/EL/gjV/wUP+Gng/WPiD4p+DdkmnaDptxf6myeJ7B3hghjaSVggm3MQqsdoBJxgAnAroq8T8MUJxVXGUouTsk5xV35Xeu5zzqOWx8yahqCTK2z5V2/xPVC41grbBOo/vVnzXqeYONv/A6q3WqJl03/AC7Pm2pX0UY8u559bFS6FqTUHaNXCMwas261J2kOeBs3Lu/iaq8l58pmhfb8vytWXdawFjb58t/dpyOeNbl0Ld1fTbt4mVFZfn2vWfNdJI3zuwH8DVTuNSuJOqLt/u1Va82/Oh/75rnqc0T0KOK940JL54X3xozBf4qHvtzLO833qzTM/mfPNx/dqWO4Rpv3P8NcVaJ9LhcRzcqUjQe480bEf5m+9uqS1neNtjvkL91qz/OSSRnT7zJ8itVq1t7mSZE/u/xVx1I8sT38PiOUvxu8h2TNv/3v4ateSjR/I+NtUoflben975quRqixsyOzM38P92uWUeU96jiOYeq+Tt2J/v7v4qkkkm85N/8AwHdUTRuI9kz7t33amVXjiCTP91vm3VhKJ3xrR2BfmZ4H6K//AI9UrzP5Imd8u393+GoNrx8oeGahm2q3dt21KUuY2+tFtW24CTfL/wCzVGJk/jT5o3qFpJ2jZPOyy/NtX71NaRFz8ir/AAs396nGjzB9a5tCSe4dVZ3dXVv/AB2oPMeQeYr/AMX3aTa7s0Py/wDxVV2byZN6JtK/K3z1v7PliclXGcurLM0yLh0h3fwtTFk2q2x2D/3d9UmukVj5W7/bWo/tyFedybv4mreMZHnYjGRsXJtQm+V/JVWb5mqrcTeYp3zcV
G0ybm2fMv8ABVRr5/M2bFUN99a6qcZHz+KxXNAtrP50bIHYK38NNjkDN5EzqrfNVKOYwJvR12K1SrdPcNvR/mX/AMerrjE+bxVaMjRt5HVld5sVoW7oqq/nfL/H/tVj6efMZ0f+/t+/WrZRwyLjZlV/hq4+98R5Mqxp2cjt/eVW2/Mta2nq9xI29938L7m+8tZ2mwx+Zvhh4bb80lbWl2f8c0a4X7u2iJ5sqnMa9lZzSRN86hJP4V+9XT6Xa7Y43/eSstZeh2L/ACOiKdq7fm/hrpPDtmluuy5hXbGvzfP92qMZSN/S7NFVN6Y3JuRf7ta9laPIux4cszbt0dM0Ozk8uNIbbbuTcv8AtV0ljYoy7Id29VbfuSuj4jn9oYk1ik0Lby2Nq/L/AHWqZ7GFo1h37fl3OrfwtWtHo8022GaHbu/i/hqKbT3WRnfcn8Hyv822ly/aOmjL3zFis5mkFz8zlvl3b/u7aelj/pBmmm2CTbv/AL22r62aQt5Nt5n7z+GT7y1FdWO2FfLfJVPustTKMeXmPewsvdM/ULO2kZZkRnX7RtRm/h/2q5vWtPmWRtk38X3lauwuI4f40k2/wMv96uY1SL7Ll9i/MzfKrbvmpxjCXvHvYaPwqR57r1i80LzQv5yM3yM3/oNcT4k099zJvY7vl+X71eoeIIdyt8jL8/7pv7tcZrln50bokbfL8yNXJWl/MerHC83vHtWnw20Ku8ybx5v3l+8rVLbxPcM6eTH5SuzRMvysrVWguIFZjZupSNvvMv3m/wDiant77/SPJ+zM+1V3V40Y8sD572nKX7G1eNv9JRX+Xcn8VaMLQyKfJf8Ag2orL8y/8CrPjuPJbY8n7pn3LGqfd/4FV6Fu1y+EVdyN/tV59eT/AJTupVOaVxLqOFZCj7WPlKrrG3zfN/FUUdq8ciu7sGWp7iRPtDpIil9m/wCVPlkamNbIqufJV/4/lb5VriqVJR3PQpx5vhG2qwzNNvhkbdLt8lv/AEKh7Xa4he58pG/1qs33aSOPd++dNyKjM6r8u2pooYJIzvhkd/vr8v8As159SpyzPQox9zmMKSzS8mm8l1+V9sUjferOuLeSa4NzsyVXbu+X71dFfQzKpuUmhXbKvy7KzJreGNXTyV+aqo83tTo5onNXivDIzuq4/gbZ92sjyUuJNjzSbYfufPXVala/u96bvu/MrL/DWDcaanyv5ap8vyf3mr7DLeaMtTGpy/ZKK26T7n87d5bsj/w7qswxvZwh9jbd/wB5V3VFMrzRlN/zrtX5f73+1UkapH5MCJt/76bdX11GR4+KqRjFklrN5jfPuX+6zL96o52e6hdEfbuf733asSK+6Nxu/vbWquzCFdjuu7f86/3a6vaHz1StLnFS1favz5b+Bd9SQzPtL/w/7NRF3jwmzCsnybf4lqONpp5vOebbt+VFrKpIiMpfCX4WeSYul4r7futs2/8AAatwyQQw/wCk9W3b4/722smFYW/vOyv83zVqQtN8ifLu+99/btWvHxko83xHdRjL4jZtV2skyJvSTa37v733f4q0re3s5o3d0807flZflrEhZLRnfZu3LtUx1t294tvCj7FRVTZtX5q+exFT3uY9CjT/AJi7C0k0bbyzOsX71tm1f+A06G427vszthk27W/h/wB6qrXTqvko+5/4Y2/u1Fcag4Z3uYVXcy/6v5VWvAxEpI9KnTj9ouf2ju/0Z/L2r91v9mkVbO4ZbmaFn8v5f3afNtqGCRFklSWaGT+L94v3V/u0QyPFIIYQ3lbvm/hb/wCyrGn+7M6nvF2zt0uo02Oyxfwt91qnj8mRZUfbtjb+L5mpLdU4+0mNB99FkqSSOZYV/wBGydu5mZt235vu1VSpfoZRUvdIzHNGDCk0K7v4t/3VqNo7mSRrmb5kb+HdurQt/tMeEmhjRdvyKq/eqvNazLIyQ3OWb7qttXbU09Nncmp8JnyRpcTGFN2Wi3bv4V/2arXTTW6/Om2Vk27VWtVYXZQ+xkLP80e373+1RNZ2aoIdjbm+VP71KVTlkc0uaMTl9SsUhUyJudv4lVqwtStwtqLaZMvJ/Ev3mrsNSs4biLMN4xLfK67P7tYOrWvkSM83ysqqvmKv3lr0sPzT5W/hPJrOcuY4y+hSNPJ2N8vy/M1ZkNjDcZ+RQ6ttX/arpNQhhmMkL7V3fP5f8NUZrWGeZUh2oqv86168JScJHlVIyMX7A81wXTcn8PltTrfS5M/Pu2bdy/7NdF5flyb0ttwZqdHYo0beTMqf7Mifdpus4xt0EqcYy5jFh0tI4fMSHe275d3y0s0aQzeTMMmRPm+f5q19Qtdsmz5t3ysvl/xLVK8tvJm3zJ95PkaqjL3vI6o0x2n3EPmJBsXzfu/K/wB1a6DTbhoY/wCHG7duX+7WDZ27+WzvDGzfeRlatjT7yT7Os0yZbf8AeWiMuaXumvL/ADHTaXfTLuT725NyM33ttasd0kluj75C6puSSN9u6ubsofIuPtMKN9z52V61Vmga3/fQbg277z7f4f4a7qdT+Y46kO5sWOuPDIqJHhG2qzMv8X+zVxfEEMLLD9p37X+b5q5r7YmYrbfNvWL7rfd/3qinmdpC7uw2/N8tdkahxy906tfFCSSMU3Ax/Lu2/L81Jb60l18m9WZXb95G3y1zEeqIsaiZNrSfM0b/AMNTW+pQxxqSn8X3t3y7a35vcIjKETo21RNoR+i/w/3qoz7PMKQw5SZ/nXdu21m2t1DN8m9ju+5H/DVmMPIrSW25Vb+61RU+Enm5pjJriaRT9pTZ5LbE/wBpf4amhteDJ5K5/hjX7tXYbN5oVd5FZmT7zVb+yr8hdNjfKi/7396sPZ8xtHETiYt1pbxv5j2yt/Cm6sy40e5WFnSHD7vvSGu2k00XVwJktv4fkk/h+Wq0mgzNMftKMyb921Xqox5fdKjiPfOMk0dFt5HRMBfml+X+Kqf2G5+QTPHub5v3ddVeabcr5ttDDyrbn3fLuX+7VS40f7PbnfCu/Z8nlrWEqZvHEcsjmriGGO3i+T+PcjLVO4s0+V3Rm3Nu/wB6uhmsIY1bZDyvzbv7tZuoRpHM0aTMnmfNu2feriqUy41ihDeOsjfe+9t+b+GrljsMn7l1Cr/DI9Z7RzRyMjovzfKqs1S29n5alNjGVvuN95a+ezDC9z2cPiuXU37K6dZV3opZX3f8BqaW8hjl/fceZP8AKv8As1T0mFFyruzbfl+Z60YbGZpP3acLtZGk+9XxdTB/vdT6PD4rmheQjW6LJ87+UG3bPnqvNNu3b7bJ2bU3f+hNWotq837kWe/y03/N93dSrpE98sWyyVpNnz7vlX/vqoo4OcavPI9SNb3DKgjNxMkPzLu/vfdrQj0va3nQou3cvzf3q1NP0HzJGf5ZW3/wv8v+7Wja6DDDIIfsbIY5dv3vu17+DwvtZXUTjxGMhT3MePS02y+d8+77y/3a07Hw3eXEccM1huMO19yp/wCPV0Fn4XRpF2Q7f3v3m/irf0/wvDDH8+5WV/k2t/47X1uBw7jGK5T5jHYpVOZnNWHh/wA+
Pe8Kld22Jm+ZVq/b+FZm+dPnRW+9H92up0/S0jhhjRGil37ty/Mvy/3qvWeg7l+eZYl+Y7f9rdX0mHj7p89Uqcuhztn4d8z50sG2/wDPT+81X7Pw3NKrw/ZvKMb10+l+GUYyQi32bWbyvLb5f+BVp2Ph2G1hRH3Ku7ev/wBlXTKmckq0pS0OQk8LwrCn2ZGZY5d0v7qm3Gg20P8ApkKN5X8PyfxV3kejzXSr5KKvz/vWX+Jf4abeeGZlkdPleL7v93atcNSiHtDzG48LzSK3yYC/NuX+Jf8AarMuvD72sm9/MkfZ8jN/dr1HVNFhjUokLbI0/wCWP3W/3q5rUtJEjCHf88n3FkX5V/3q8ytR5jvwtY4S80eZ3EyWzIv8CybfmasTUtH8mRnufM3Mm3a3y+W1dzrMfl3Tw71+X7k38Fc3eW7zXCO822FmZt0z7tzbf71eDLCyi5Hv0cVE4680+5aQO8Kt5abXZfm21z+tQvb73hhZ0+Vkl212euw7rgJsZd3zblf+GsHWIXZfk+ZV+5WLh1OpS7mAsqQs8w67Nu1v4v8AdqWT/SLg/Iu3bu3NT7izeGZ2CRsG/ib+FqjmkeSPfHtHmJt2t/DWtOnBy9wy9p7tnIz7m6ha3/c2zbWdvm/9mrJu9833IWHy/LV7UGePaiuxVk3bvusq1UuA7/cZS/8AD8n3q9PD04Hl4iXvXM+Oa2kj3puDqu7d/eqnLN5i7H+RV/8AHqt6hsZWdEZAv39tUWm8uMIIVmDfc2v8y13xjy+8ckqnN7pZjt3mVT98qm7bt2rRDIG2o6eW/wDeWo9ibjIj/uv7qt81Tw3X7zzvLb5n27V+bdVx94y9CzbxozMHhjZdvysv/LT/AHqvw2LyRt+5+WT79QWsO1i6Jsb+7WzY2/mwoj/Lu/hb+KrcpRiXGUviKlnp8EP39rqz/NHWjZ6fHvVPO37v+Wa/dq3DY20iokwXMn8W2tCw02GJhDCjMsa/e/2v71WX7Tmj7pW/s3eF2Y3K/wAirX2//wAEJLCSH9o3xXst2UyeCWULjlyLy2Ga+Q9N0eeRlTZ8zfxf3a+2P+CHOm/2f+0j4klc7ynhVkKt0OL22NfF+I8FLgbHJ/yL/wBKiYKd6yR+q1l8GPiXfStCnhiWMqisWmkRAQwyACTyfUdR0OKyfEnhHxH4RuxZ+IdJltmb7jMMo/8AusOG6joeK9N/aC8c+KfD+r2WjaFq0tpE9r50jQHaztuK8t1wAOg9ee1N8MatefFX4Tavp/iZUuLvTVLW93JFlshSyngfe4IJHJB568/zNjuEuGXmmIyfBVKv1qnGUk58jhJxjzOOiTTts9rp6bGkatTlU5JWZ5domga14kvl03QtNlupjzsiXO0Zxknoo5HJ4rY134R/EHw5YnUtS8POYVBMjwSLLsAGSSFJIHv0r034T6LZaB8LF1OLV7TTbrUgzNqU0S/J8xCj5yAcAHA6ZJPPe/4Sa38OXstxq3xnttUgkQhoLqaP5T2YNvOO/HT9K7sr8Ocvq5dh54yc1OtFT5oypKNNSV43jJqc9N+W3lcmWIkpO3T1PDdB8Na94ouns9A0yW6kjiMjrGPuqO/P6DqTwOa2rT4NfEm8046nF4YlVNpYRyuqSED/AGGIb8MZNdl8E10f/hamvtoMytZ+TJ9l25wUMq4xwOP889a5Lxz8V/GGva3erba9cW9kZHihtreQovl5I5xgkkdc+vpxXzqyLhrLchhjswqVJznOpCMabhZ8jtzXaen33urGnPUlU5Y/iZ/hz4a+N/FcZn0XQJniBI86QiNCQcEAsQDgjHFJ4l+G/jXwjD9q1zQpY4RjM6EOi5OBllJA59a9N8M+ItI8afD+w8OeHvG6+H7+0iVJoUIUtgEcbiCQT82VJPPNV/E8fxN8GeCNStNde38Rafcw7ftjzNvtgTgll6sORjB4IznFexPgnIv7H+s05VZr2fO6sHCdNS5b8sqcb1Ek9G2tN3ZJ2n20+ezt6df8jyrRNA1rxJfDTdC02W6mIzsiXO0Zxknoo5HJ4rY174R/EHw5YnUtS8PuYUBMjwSLJsAGSSFJIHv0rtfD93J8Nvgb/wAJTo9vENR1GXAufKyUBYhc5HOADgHjJ755yfhP8U/F8vjO10fWtXmvrW/k8qSK4O/aT0Ydxz1HTGfqPNocOcNYb6phMxq1FiMTGMk4KPJTVT4OZPWX96zVlsN1Kju4pWRw2i6JqviLUo9I0Wye4uJThI0x+ZJ4A9zxW9pfwa+JOrRPNB4ZliCOVIuXWIkj0DEEj36V2Hh/QLLw1+0Y2nWUCJC8cksEcS7VjDQlsAY6dRgcflisf4p/FfxofGV5pel6xLY21jcNDFHbNtLFTgsx6nJ7dAPxJUOGsgynK6uKzidRzhXnR5afLq4pO95LTr66abh7SpOVoW2vqcVreg6x4cv20vXNOltp15Mcq4yM4yD0I4PI4qpXp/x/aPUvD/hrxDPH/pN1aEyMDxgojY6erGvMK+Z4myilkedVMJSk5QXK4t72lFSV/NJ2ZrTm5wuw69K9f8R+IG+CXgLS9B8MQoL+/UzXFxMgJB2jc2PXJAGcgBe9eSWbpFdxSSn5VkUt8oPGfQ9a9H/aWUya1pN5EcwyWBEeBx97P8iK93hivVy3h7Msxwz5a0FShGS3ipyfM12bslfddDOolKpGL21NT4bePLn4s29/4C8dRxzefal4Z4owjcEZ4HG4Egg47HOa8n1KyfTdRuNOkJLQTNGxIxypI/pXYfs+QzSfEiB4ydsdrK0nHbbj+ZFYPxDuILrx1q9xbEFG1CXaQoH8R9KrPMViM34QwePxkuatGpUp8z+KUElJXe75W2lfuEEoVXFbEPgy20y88W6ba6yVFrJexrPuOAVLDg8Hj/PHWvWfit4t+LHh/wAQLaeEtLk/s/yV8uW3sfO3N3BODtx6eleWfD/wqPGni208PSTGOOZiZnVgGCKCWxnvgcda9L8dfHV/BurHwp4Z0tLn7CFimubyZnywHK9ckjuxPXPHevY4Pr0cDwpiquJxEsLCVSKjUp355SUW3Cy15Une91q9yaqcqqSV9BfF9xrGu/A6bUfiLYLb38cwa13R7HLbgFO3HykgsMenp2qfAiHWLb4e67qOgxF72SbZaLxzIqcdeOrd6sa3qNh8cvhtdapEktpf6OWla2FxmMkKTk5wCCobB4IIPOM54Dwz8VPF3hHQJvD2h3EMUU0m8SmAGSMnrtPTn3Bx2xXsZlnGX5XxNhMzrVJzoPDOMKsbOpUlaUXJ35bTTbWuztfraIwlKm4pa32O31XVP2l9JtTdT2okUdRbW8ErD/gKgn9K+ZP2z727vv2Vfivf6hO8s8vw9115ZJDlmY2E+Sa9y8G/F/xzpniG2a9165vLeSdEnt7ht+5ScHGckHnt+tcF/wAFSPD9poX7PfxSvbC3jjS++F+tzNHGm0bxY3AY+mTgH6nmvFxTpZ7hqOY4XFV5xo1qcZQry5mud6Si1pZ2s1a/y3tfu24tLVdD+TeS8RbfY8jF1/iqpNqE6t/Ds2f
L/vVTmvJmZ/n3J/BVC6meSNcPX9z8p81KsXJtW/dtsdj/AMCrNuNQeSNt6Z2/dZaSaR13IvIZfm/hqs00f+0PLX+Ks5blxlzA0jxyfxLSNI6udj/e/hWoZjt2l9xKp/D92ozNlQ4Rt1YS2Oqn7siZppkZnLqP9rZUsMjlldPvVWXe8gR+f92p4V3Zd/lP8G2uOoexha0omjDG7D54dir92tGxby1SHZlt27zFqlZ/LCEwx+ati1jRcP8Axfx1xyj/ADH0+FxHNylqGFPL3w/xffqxHbbVMybiip92ktUeSMPJu+//ALtX7G16TxzfL/dril7srnu0cQVYY2+/N8vy53VIlvtb7jP8/wB5q0XsUaNXfy2/v/7NJ/Z8K7pidy7N336x5uY7adacfiM+S1fcQ8O7d83y1XmiRV+RMH+7WnNb/KuU+WP5qrzWs32hpt+35NtEolyrlNpBuaZ32f7K/eqJm2NvL8rLT7hUkm2TcbV/76qpcEQo3kyfMvzfN92tIxlzGFTHRiLNeBWe2RGDt/E1U7ieETBLlGO35d2+oZpn85HhfLfxVD9odd2/qu7/AIFXV7Pm9482pmHQfcXDrMon+Rfu7qr3Ujrl/OqO4vPOXyXRqga9EcLbHyd3ybq6KceU8+tjuaRLNeeVCu9Nq7qryX26Rnfb/s/PVS61Dzj/AHg33laq1xN5bfc3K38VdcaPuni4rHfZRpfai52dlqeym34/ut/drJjm+ZkR2+b7rVp6avmN9/a38Nb/AAnhVsR7T4Tcso33b04WtnS4XW4RJH3BlrF05Jm28fL/AHa6XTLd2kT512q3/AqfLA5uY2NLhcxj5Ffa/wD47WzYwwttPZvl+b5dtUdNjSFi7phW/irpNLtYVVXeH/crL4Z3OeUuaNi/oP7mZI4fubPvNXX6DDbMu9LbfL/z03bl/wC+a57SbVFuN8yL8v3P9mut0KPy5PuZfZubalaGJ1Ph/T7y4Yo825WiVl2rt27a6TS4UkZZHRWRl3S/PtrndFmRE2O8yzMy+Uu75drL93/Zro7CaGNl86Nfu/JtT7rVUfd+EOXlAbLeP+EbfmSFaqXkPnXB2bW/us1XpLqby1mkmXe27erJVOO5TctzZuzKzfJIyfLV+5I0p1OUryQx27Km/wC8v+saqdxvZvJ2ZRV3eZ/C1W/tJf8A0ZIcf7TfNuqnIyyTFJTtjX5t33fmrGUvsxPcwMpylEz739+vkpDl/wDnpu+7WJqluk2xIXUD5vupW3eQpHGk2z5t+3bWVqrTRr5Nsnytu2fxVlH2kdj7HB0+aOpyGrLDGzTJ8rr/AA/w1yGsLtmJdM/e+7/DXZatZvu/fOq/3lVf4a5nxBahVbyQuxn+dq5akv5j3aNOUtTt7fUEkh/cv80a7kjar1rqAmmMiIyD5fuvXGx6hHGip520fddo60bPVLaHajTcb933vu150eeJ8BKUDuIZt0Zffxt2tT7a6SPdwysz/M38O2ubh1j9z83G77rL95ttW21SGS3fyf3jMn3VevNrOqtEdNOpTjI34dQto9zwuySyfKjKvy/99UyOea3t12Op2/LuZ/vVjW944jaDYu1X3fNVyK6eSMpMi7GT/wAe/hrya7kqnKpHr4OrzxvI2Ydk0azXKYZX27W+61PurwW8LC5dfmXaq7tu2s2O6Qwqjt88fzJ/dX+GpLW8gvrHf+7l8x9yL/d21x1JR5z2acuaA66k8yNZNm9o/wC8n8NRXlrDbsvnfeb5k2/NVmT/AEhlh+Vn2bdy/Lu/3qSRbby9+9Qrffb+7XVQ1mOp8JiapNc+W6WzruX5kVovu/7K1jXkLyZd/mMe1d2zaqtXQ6g0LW7J/Av31X5WrE1C6RQvztv2/wC98tfYZd8BxVJe7zGbIv2iZ7W2TLfw/J8zU+GHzFVHmw2z5WVN3zVG0XzO6TN8v3NtXYdkLDznZH/uqlfS0/gPAxVacfiK627rbiZ3Ulf/AEGqeobFXznm84Mn3VX5t1aki+UqeTDsXa3zb/vf71ZmpRyeY0MjrtX76r95a6uc8aX7yZSmmSNUR0ZW27d2+mMyLI8PnK25N25qc2yGFk3xqu/5Wb+H/ZqhJBNt3ojfdrCtU5YHXSo8vuotWcwkkCO/ys23atbVmqRt+8fcNnzrXPwxuqxI/wDfrWtZEWYI6Mi/3m+b5a+dxlRfFE9TD03/ANum7p7eWqb7jhf+We2tOGR/M/1yrt/iZvu/7tY9vMPJU+cqtu/i/u1a+0bG+ebKSc/7W3/ZrwMRU5j2qNOlGNjQb99GyO6/e/3mp819NJM8KJGqfK37z5vl21lecPsY8mZkfzdqsy/w/wC7Wgt4625RHV3ZFXdJXnc3unR7P3S5byblEPyu33V+SrFtcLNIr3L58yLYv95dtUbdUkuPubdr/Lt+Vt2371aMNr+8ed7n5PlV2X7tP4jjlGcfeNLTY3a3WDep2/N81WNs0dyJEdl3N91f4t1V7fyYl8nyVR9ytuZ//Zasrav5n7mRt7I3y0SlL5GMYe0nK4+K38tnh37tr7d0jbmVqFWCSNZppst975U+VqmUpKqvMmz5V3Kv8VRtb3iszvZqjff+Z/vVnKEY7DtLl5SBrmGa1e5eFlVU3fL96rP2WFlP7mQPs+7/ABU6OSY70dMfKvyqnzNTLhLm3kSbZt3bf3ivuatIx55cpy1o+7qYmoSWelw70Rssnz7k3bWrntQH2yRv9Gbcv8TfdZa6XX43uLg3kO11V9r/AD1zWoRwx7oV+V9n3Wr08PT5o6RPCxEuaXumDJGjTP5afd+42yq81n++aaHa/wDfj/i/3q1rrZJl0eP7nzMq1SVXhZ/n+b7uP4lr1OX7J59SRB5c0nyQ/wB/bL5iVN5YmtQ7ws6bv4Vp6xu2PLG5l+//ALVWbWL7RP5M14wC/wB35tv/AAGj2ful0Zc0Cp5Y2/voVO75WVvl2/7tV2sUjjbejPub/e3VtSKkzB4U3qrbdrfxNVeOx/el5LZUP3nkjeol7p1x+Io6fZ/eQw87fkVv4auwR+VG6Sxrhmot5k+5nH+0v+992o1me3byfmfy23fN/drCMpR+E6vspMst5zbpk+ZF+/t+WrOn3UNvHsR8Kv8Ayzk/hrNhkmuZGd0y7N8nz/L/AN81XmvfJbzp5sL935q66NSRyVI/ym3JfQxqHS5/1i/MrL83/fVL9uhZB8kZ2v8APufb8tY9veJ5Kp5jFP8AZpJ72OOTej4/vq1dkZe8eZW90172eGe4RAi/d/hf/wBCqx532fbD2/gZV3VjrdQsrunySK33dn3f92pvtP2hYmhuVV/9qumnLm+I4pe7I2ftS7k2Pu2p87bPu/7NbeixbZd6TLiRflVovmb+9XPWKvMqQzTbEkf/AC1dh4f03zm8lEVfnVmaSujl9oTzmjp+lPdbd/Xf8kbJ8q/71bsPh3czMjq80iK3mR/d+X+7Uul6ftBezRQ+z+J/4q6O10b7RGEQbWVtm1v4quMeWPvC5omFp+h/Z1abyYyrJtfa+7bT5PDkKxF/tK7F+b93/C1dXa
6DulXybNfN+9tZPlqxZ6L5KvClmz7n2/Kn8VRyw+IfOecat4dmt5P3yM67dzR+V95v96sS/wBFtrcD5edjMrN91f8AZr1HWPD6SzOj+dv/AIGVP4qxdS0WYWfyQqyK+7bto5eYIyPMNQ0l4f8ASXSNk2f8s/4d396sO68P3jLLDA7OrJuTdXp974Z+0RtBMjfvH3vuT5VrOuPC9+qvsRSfvP8A7K1yyo8upXtTzCbQUVZfORn2/MjLF81TWOmorf6lsR/Nt2/NXcXnhfy5BMiSOu/+H+KiHwztVvJhkUSbt7N95VrzMZh4VNGdtHESOc0/TUa62PCuzbteORPvVvx+HXWEJ9maV9jbl/8AZatW+k/ZFKXNsu77jrs+Zf8Aaq1Zs6zNDI8mGbb80Xzba+WxWBjGreMT6LB4r3eVlKHS0khHk7k3f8u7feWprfS9yyWH7xU2rtbd/FVqeO2Rlm/u/Nu/utVizL+Vs8lmdV+T+7/wKs44eMqfwnqSxkoy5IkNjpL+TJC6Lv2Lsjjfb81b9jYu0ccybss6rLGqfLHVWFXkt4oUt2/d7f3lbml2sMkaRTJgb1b/AIFXu4LDW1PNxWI6GnpOhhmEM0O1ofm+X+Ktaz0mG4jMMLsjsv8AwKo9LmTef3zeVu2+YqfNW/pqvHPsTblvldpF+8v96vpMPR948CtW5jMOg/6O/wBmf+Otaz0d3VY5tzbV3O396tS1s/l8xNv7z7nmfdrUsdJS8mSeaPYPvN5f8VetGj2PLqVDPsdBhlVZnmb5m3Mq/Lt/2a2LPwrDdI5mh3S7N2373/fNb+j+GUaMQlfkZ/8AWL81dLpfhW/t8eXMv+xJGn8NdEqPumEqxw8Ph1J49727PtX5fl+VaLzw/NcWoRN21olTbt+aSvQ4fDKJGqbGY72+9/DWfdaKi2cMN1uaNV/3a5JUxxqHl1z4ZeFt/k+U0nyorfe21y2uaL5N1Kk1tt2pXq2vaN++lT5W+f5fM+XbXG+JLXyZHTyZFWR9u7fu3LXDWonXTrcp5Z4g0yFvkd2Tc+5Fb7rVyuqW80e/Ztba+75V+Vf4flr0fXtPhuG3i23tG/8AF/yzX/ZrkdYsY4SLneu35t6/3a82pRpctj0KVa/2jz7VrN5pxJM/+rXbE396uc1Rkjj+Sbd/D5jJ8zf7tdv4iExh3/xsv3d67dtcTqkc0cM3kvtVX3fL/DXl1KMvsxPSo4jQ57UdQjmk2I7Mmza0jfdqjNJD5jQ75HXP3V+WpdSkjkuBD5LOi/Nt2fK3+1VO41COaTYkmE2/N/dVq2hS6BKtGQl1cQsiPMkifLs3K/zKtUbiZ1XYUb5X+63/AI7U9w7xxjeik/e2r/dqr526ZbZI9vy58yuuMeU5Kkub4iOTYsO/Zj/nq1Z80NtC2Uh2s38VWrpnm+RJmRP4WWmqqSKiTcKvyrJs3V0cvNHmOfm/mK0dvCsiOj7ttT2Nm+4oiSb2bav8LU3ckLb3RmZvl/fVes4ZvJbY7Hc+7dS92mXTjzFiz8lZkhn2/Ku35v71dPo9jHLO3kozeWqqrN92sCxsRjz5rbcit93f81dl4fO2NHRMn7u1fvLWfNaHKdX2C3Y6K8ckSwzLKq/eWT+JWrb0/S4I1HnIrtJ9/b8u2rOi6fbOqfJG0rfL8qNuX/ere03Rd7Dem9o1/u/Ltq6fvSMKlP8AlKVnp9vPZqkPVvnRY/vL/vV9tf8ABFf4eeJLH4r+I/iO+iTf2KNITTxfyIwjmuGnhk8pW6MQiZYA5UMufvCvkqOxs/tGxLZUfdvlaP7v3furX6Rf8Ek4oLH9miZ1kLR/8JdcucDJUeTb8fWvzrxazKpl3BVZQSftJQg79E3dv8DOlBOol2P0S+K/gnwV4x1K3TWfFkWl38NvlDLIoEkRY9mIzgg8g8Z57VyninxP4I+Hvge4+H3gi/GoXV7uW+vFfIXIGTuAwePlCg8c5Oev5xf8FJ/+Cu2oeLfi9pK/sd/ECYaVaeHli1Yah4eiGL3zHd1UzKWO0MEbAC7kJUuCGPzTe/8ABU/9tSDcY/ifYZVNwH/CNWvP/kKvn844B4pzDFYjFZZRw9OdZOPtZSqe05GrP3eRxjJrRyV3bz2wjiaMElNvToftT8NfGvhHUvCE3wx8eSG3tpGLWt5uOFJbOM4Owg8gnjrn30bDwZ8Hvh20niDXvFltrRVSLayASQMcd0UtuPbJwozz2x+Gcv8AwVm/bcjYGT4lWSIVyD/wjtnn/wBFVm3H/BXT9uVHkCfFnTtsfyhh4asiGb/v1XDhPC/janhqMMTRwlarRXLTnKVS8YrZSiqfLPl+zfb1Jlj8PzO3Mr+n+Z+5fwU8V+GNI8canq2oSwaVaz2j/Z4XkJVPnVtgY9TgH3PYdq4C8dJLuV423K0jFWA6jNfjDqH/AAWC/b8td6Q/FDTWZTjJ8N2OFP8A36rG1H/gs7/wUGtyij4q6dG5PzIfC1iwx9fKrwMf4M8eYvLKOCqTw6VKU5JqU1d1Gm9PZ2SVtEki4ZhhuZtJ6/13P3/0jRPhH4/8JafYjVrXQ9UtYttwSApkPcsXI35xkfNkZx7VpT6r4I+FfgPUvDdl4uTWrq9jYR24IdAWXb0UkKMcnJ5xX86d9/wWz/4KMwu4t/i/pZA9fClhlf8AyFVVf+C33/BRtSDP8ZNMCleWXwlp/wAp/wC/Ne1R8PuMMLS9pRw+EjiOT2ftFOqvd5eW/IocnNbra1+ltBrEUZyUbu3bQ/od+GvjXwjqXhCb4Y+PJDb20jFrW83HCktnGcHYQeQTx1z762h+GvhV8Krs+Lb7xtFqc8Kt9it4WRiGwf4ULZPYE4Az9MfzoWv/AAW6/wCCjEgw/wAYtOLqm5wPCVhj/wBE1q6d/wAFpv8AgofMFN38XNMH97HhWxP8oq5MLwLxjg8PQ+sUcLVrUElSqSlUvFL4U0oJS5fs329Ts9kpt2bSe5+/PgXxxZ6j8Zl8Y6/cxWUVw8uTLIdseYyqqWI+gycD6Vzfjy+s9U8aapqOn3CywT30jxSKDhlLEg81+H+m/wDBZv8Ab6uf9Z8ULA4OG3eGLFf/AGlV+3/4LC/t5PH57/F7TWXGVCeF7L5jnG3/AFVfOYzw442xeXfUq06LvVlVcuad3KSSf2LW0vsd1LATm+aLW1j92/i34k0DWvBvhix0nV4bia2s8Txxk5jOxF544OVPB579DXn9fjV/w+D/AG8kX5vibpx+Td/yLll0/wC/VR3X/BYr9u5WCQ/FXTRj7xbw1Zf/ABqubOvDTi/O8weKqyoxk1FWUp292Kj1g97GkMtq04WTX9fI/ZmvVNN17wP8WPBdl4b8Za3HpmqWHyQXDAKGAAAOT8uCMZXIOVyMCvwKk/4LH/t7om7/AIWvpgbOAp8M2X/xqqdx/wAFlP8AgoIiPInxa0zb2I8L2J2/+Qq3yPw74tyapUh+4q0qseWcJSnaSvdbQTTT2a2HLK61SKldK39dj+gjT7n4d/BPS7290fxLFq+s3EOyAJhgOeB8mQozgnJydvFcZ8MrPwXrvix/+
FhX4jhdGdfMk8qOSQnozgjaOp7c9+x/BjUP+C0n/BQ20Uunxg04hU3c+E7D5v8AyFWHf/8ABb7/AIKS20Qli+L2ltkZIHhOw4/8g17GJ8P+LMXicK40cMsPQbcaPNUcHd3k5NwvJvTV6aLTe/M8DUpp3lq+p/QZDr3hb4ffFcat4Tne60y3lKtjDHay4cIT94DJwe+Op6nqde8D/Cr4h6pN4o0f4iwWTXL77iGUqPn7kK5Vlz1Oc8mv5utQ/wCC5v8AwUogmMafGbTVz2PhDT/l/wDINZ5/4Lv/APBTKOVo5fjNpeF/i/4RDTvm/wDINdOC8N+K4UauFxFDC1aE5uooc9WPJJ6e44xulbS2uxz1KLg076n9KN/rnw/+FXgnUPDPhbXhqep6ghSWVAHUZBXJI+UAAnC5JyfTpR8Ca78O/E3gIfD3xncx6fPFcF7a7EYTcScht2MBsfKd3UY59P5t5v8AgvJ/wU0iDbvjTpYK/wAP/CH6d/8AGagm/wCC9n/BTtCCPjJpYB/6k/Tv/jNdn/EPuO3jIShDCqhGDpKlzVHDkk7u75L8zdnzXvdet+aU6UVZt33uf0u6V8PvhL4Hv4vEWvfEOC+W3kDw28RU5YdMqhZm5wcDHTnivHf2wbbVP2kfhv448FaE8VjP4j8I6ho2mSXjtsiaa2liR5CoJA3SbjtBwM4z3/n6uv8Agvt/wU8ijPl/GzSt/wDCD4N07/4zVWT/AIL+f8FSViU/8Lq0oNs+bHg3Tvvf9+ajG+FPG1bCwwuAhhcPSU1NqM6knKS2cpSg20ui0XroYvGUIO8rt/L/ADPWJv8Ag1//AGxJHyvx4+GePQ3Gof8AyLUL/wDBrr+2MSPL+Pvw0AVcL/pGocf+StcBo3/BdD/gq/rkqrZfGTTCH27R/wAIZpv/AMYr2L4X/wDBQ7/gtT8SZzbaX49jmmdN9nDD4EsD9qHqv+j9PevtJYDx2itcXhfuf/yo8ypXymE9Yyv/AF5nLn/g1t/bH3Fx+0F8NMnr+/1D/wCRai/4hZf2xiuxvj98MjzkHz9Q/wDkWvtv9k/xD/wVi8c65Af2pP2i9C8BaZcqSGv/AAnYm4XPT90kO4fjXp+r6X+2BF8QNP0rwp+3pFq2lS6j5d7dL8OLFUSHPVf3e7NY/VPHHm/3zC/c/wD5UNYnLIaqnL8P8z80pv8Ag1k/bLlO4/H/AOGJb1NxqP8A8iUw/wDBq/8AtnnA/wCGg/hiADnifUf/AJEr7q+Kugf8Fh/CfiTUYNC/a/8ACcVktw39mxaj4V0yO4aHOVZkMX92us8KeJf2pND8Kpf/ABY/b7+1amyhpLTwv8KbeaOPIzt81odu7FZ1MH43R+LGYX7n/wDKjSGNy2W0Zf18z86k/wCDWH9s1Rj/AIaA+GGfafUcf+klTRf8Gs/7YyEF/j/8Mzjp/pGo/wDyLX254g+OX7ZXiFhb/DT9qfVLZkdgjav8PNMJn5+UELF8lePfHP4wf8F5vhTby69ofxV0nVtKjzslXwdpqSyZGR8pg+Xipjl/jdPbF4X7n/8AKzohmGAhLqv69TxKD/g16/a9iKs/x3+GhK+lxqH/AMi1dT/g2Q/a+UKP+F7fDYbVxkXGof8AyLXBa9/wWo/4K3+FdUk0nxB8W9MhmiH7yP8A4Q3Tdwb+7/qKpf8AD9r/AIKi/Ju+M2lr/e/4ozTv/jNctXAeNVPSWJw33P8A+VntYbFKX8OR6vb/APBtF+1zCgRvjt8OTg5B8+/4/wDJardt/wAG2X7W1tF8vxz+HRk3ZL+bff8AyNXlkP8AwXV/4KXSDd/wubTTn7o/4Q/Tv/jNWbb/AILn/wDBSaYbR8ZNL3e/hHT/AP4zXJPBeMnXE4b7n/8AKz1KdXGacrR6af8Ag21/azc4k+Ofw7ZQ2cGW+5+v+jVKP+Dbf9qVI/3fxo+HYccL+/vsAf8AgNXmSf8ABcv/AIKQttx8ZNLZi2GX/hEtP4/8g1Iv/BcT/gpDJGzxfGbTtxXKqfB+n8f+QayeD8YY74nD/c//AJWdDxOYxesl/XyPRX/4NtP2rJZC8nxt+HfPYT3/AP8AI1RSf8G1v7WMz8/HD4dqPae/P/ttXnJ/4Lmf8FJo4z5nxg0zAOGk/wCES0//AOM1Bdf8F1v+CkUXyp8Z9MDFsDPhHT//AIzV/UfGL/oJw/3P/wCVkfWMdvzL+vkehSf8Gz37WrqV/wCF6/DognODPf8A/wAi1Uu/+DY39rm4m3p8ePhyq+n2m/8A/kWvP7z/AILtf8FMIJWVPjPpexf4v+EP0/8A+M1nXP8AwXr/AOCnCTbE+NGlgc8nwbp2P/RNbU8F4zR0jicN9z/+VmFXEYt/FJHpEn/BsH+2Cz70+PHw1B9ftGof/ItVH/4Nc/2xHXZ/wv34a8nLf6RqHP8A5K15he/8F/f+Codqpb/hduk5Azj/AIQ3Tun/AH5rMk/4OEP+CpgLMvxy0kgfdA8F6b/8Yrohl/jVLRYrDfc//lZwVMTKMveZ65J/wa3ftmuMD9oD4Zn3a51HP/pJVeT/AINYf2zZHLn9oH4Zc/8ATzqP/wAiV5Hc/wDBwx/wVORN0fx20oHGcHwTpn/xiqR/4OJf+CrIBP8AwvTR+fuf8UVpv/xitVl/jZbTFYX7n/8AKzneKi9z2Y/8Grn7ZxUp/wAL/wDhfg/9POo//IlQv/warftqtwn7QnwuUbcYFxqP/wAiV42f+Din/gq55hjHx00fhc7v+EK0z/4xXqv7B/8AwXX/AOCk/wAc/wBs34W/B74j/GfSrzQfE/jrTdN1m0j8I6fC01tNcIkih0hDISpPIII7VOKpeNuCwlTETxWG5YRcnZO9kru37vfQ5HUw05WaZ8m/t5/8E6v2iP8AgnX8ULP4e/GuytLu11K0+0aH4n0SKd9N1EADzI45JY0JljLKHjIyu5TyrKT5Bp6ou1XdW/2a/Y7/AIOujjwT8ETx/wAhfXev/XKyr8d9Jt/Mbh+f46+84Az7G8T8H4bMcWl7SfMnbRNxnKN7dL2v67HDiKcaVVxWxs6OqMw+983/AH1XW6Xbusib4fmb5m2/drn9Bs0uWR8shV/++q7LQ7efcv8Adr7CUfdOf3jW0e38xlhSH7zV0tjZpJMYY0ysfzfd+9/s1l6PbuPkf7yv8u2umt9PkWPe6f7SKzbd1TzcpEpchd0nTfLk8maHczJt3L8u2t3S4nZkhd22/wB7+9UFjDDJC1s9sxT+FfvVsWdv/G9ttH3V/utT5uYiUeb4TRshJCsUOzeP4ZG/iat7S9Q8tNs02fk3Kqpu+asO3WbH2mDdEd25G+6sf+ytalhbW1vbo00Pk7n2/Mn3f9qrgTItXM0KrFsRU+XdKq/3qp6hcJJvh2MP7irSz3U1uzWc3kzLG25fl+WqV5dTMq/6verbttHNylxJFk2zbERt6ptZaz5JpLqb5Jmx/d2f7VMkuEjzM7sEZ/vf+zUz7VDdKJHmaJPmWKRU+XdUylynt5dHuF1JDJI9zs3vH8qKybttY+sybZk+RlZnbasbfLWjcSJJ
bojvsk2/I275W/2ax7qRJJlhuX+X727+JWrklLm+0fd4F8qimjD1BUjLQecrM3zbV/hrkdZj/cpHC6su9vl/vV12tN5cLzJyrL/wKuU1yHcyeWjbdu5GX5a55S5vePepRjHXmK8twisiIi/vPlerdrfbZF851Rdn3W/irEa82sj787f7yVXk1LdIsj8Ls/1bfxVzR5z8wlU5jutF1iFl/fzcbm+X+Ja1tNura3be7s275vmrzvSdWSFUTezN975q11151z/pK7fu+X/erxsT7X3uxtTly+9I7Zb5Lq1Lu7Yb+Jf4l/8AZad/aj+Wgs4WdPmZmWX5mrkB4kdIWRJN235X3Vfs9Y8yM2vnbFV1bdXgVKLlPmR7WFlzfaOtsdYktnSF92d+xfl3Ntb5quNceZGfu7/vbvu1zdvfSXRQedtZfv8Az/w1oWciKxR/MVl+aLcn3qihRnznuU6ns48sja8x5Jnm8ljJu2rtb5f96n3VxDHummuVVtny7fu7qqW9xBcKn/LMN8rzeb92iS3jmVE3/wAH3pE+9/u17eDp83xGWIqfyyI7jZJtROHkX7y/LWbcW95JcfO/C/I235dtbiwfaF/1Lbo6jurWO8h2OPup97+8tfT4Nez908ytiIx90wpNP8yNXTjb8zsz/wANLb7GvH+dmaT7jfe/75q9JboZAiQsIm+X5vu/dp8dpuVLqFNm6L5V3/xV79OUep4tap7ScnIr6hbwrGHhh3ps3bt+2su6kM0z+YjK391U21fvLd93l3I37U3Juf7tZd00vnb3udp2bdzfdreUuX3jnp/3ShdQpGp86ZlCtuRdu5WqpM23Lh/mb5WX+7WrNb+ZHH50zeZ91WqhJbv9sbemx/8AnotcGIqe6ejTjIgjV5tj/u0/uVbt/OVQ+9nmZtv3qjWDzJk8l/3Sruf5PvNVuGDGU85m+b5GWvmsZU9657GFj7pPZy/aLiV34fe3y/w1bhvHkZ0uU+VfuNs/hqqrIVKJ9+rS/eCBGI2fL/FXmVHze8enRo8vvFmNraaTf520qm19yVYgm85l+2feX5V3P8u2q0dvNCybPnHlbnX+LdUluvmK/wC5Yfw7vvfLXP7kjSp7sDXtftN5cJNA+N3yv/tVfj8lZobaa5UFXZpdyferEs97MifvMx/M/wAvy/7tbOlyPNMf9G3tH92plT97Q45VOaOpsW4dm2faVQr/AMs9n3lrStVT/XQorq3zbmb5lrLsVvGCO78fN9371acey4h86FI4z5S/w7fu1zVIzlHlCPJKRYa3m8tprNFZ1+9uT5VWkt7TzJG85/u/K38W7/apsLvJD5DvI4+67bKljj3RrsmXbu2oqpt2rUxhOJnKUeb3dhFjRZERIctI+x283d/u02aF1X/U7S3y7v7tSLGlvl0PyyP91arXl9DMzoiNG8f/AAL5a66VSXPojire9EwNQj3SfPudvmVF37aw76bdl4YFz/GrJ8y10OoSJCyzeczPH827+L5qw9Qhm+0P5zsUZNzsvy17+F908HERMW8VJleESful+bzG/hqC+s4Y1R38xx95F2/xVoTQ+TG0r7WTf91v4qgvFdpC7uwVk+aNW3V3Rj7Q4akYxj7xnJNDbtJ87I7fxM+6rNjcXN4qb0VWhTnb/F/vUxo08w/Jx/GrJu3VZtVRdiW3luzf3f4q0lH7JzR5o+9EuLLcwRlNm/cjMrN8u2qN1cJc7PkZf73/ANlV24kntWa2RFO5f+A/7tVWbcp+fa33kX+9/s1zypxO6nUnHUjs03QO7ou7dtRv71RW8kNunnfKPvKis9Tx2bq331XzPmTa/wB2qtxau0Ledt37/wDV7du6uKR3RkRtebVMyW0g/vL93b/tbqzrmbzmbZM25n3Ou3dV5rdPJ/fQ/eT5Y9/3qz7iN1bej7V37dtXT933kZ1veKkmoOqyeT5n91Pm21csbz7Yivv/ANWv9zdWZdWrtIvnR71b+Jvuq1aGkW72sfk+S3y/Lu37q76coyPLxHul1ZLm4uP9JRdvysjR/e3Vr6fYvJ5WdpX+6y/Nuqvpdj5jfOjFl+VFX+Fq3be1druL7x/6Zqv8X+1XZT3PLqSNHSdNdVWZHWRv7sn/ALLXf+F9BkmhSF4WRbj/AFW5/u/7TVz/AIW0OFZFO/HyblkZl+9Xc+G9NRvJ2XUcvlpuRW+6v+zXdGPuGUpcpu+G9DSNfuLlfldl+bdXRWOipHJ51m+dz7kZvlb/AID/AHqXw/bJ8syQsvkpt2t95t33ttdfpen2a4mtofmVd0X8X+9Wsf7xhKoY9nobxsk37xmX7+5tvy1aj8PPHbtt3fK7N5kb109jpaTKN8O9ZPl27/mq5a6F9nt33p8+/wDiT5an2cZC9ocFqHhfbMHhvG+bc7NI38TLWDcaG9qq+Wn+r+4u7dur0vUvD80k3zw/K21XrK1Dw/5f757aN1j+4yp822nyyiiPrHN7qPN7rQfLhb9zvi+b93J95d392qFxocM275Mqv8Mife/3q9FvNHeSEJMiqyo33k/75Wsi+8OvHGJt671RW2rUypwH7T3uVHn11ofnKrwyMiKv92ql9oqTR/uX3Mv313V2t1pb+TJa/vFb73zfdWsi4s/Lhkhs9zP/AHmX+H+9XnYimd1GRyl1D5IaHyW3Sf61v+Wjf7tQT2bxyKZnZJI0+T+FttbU2mo2Lx/lfftCyfLuqpfI7bpn+U79i7X3LXjYijLl909vD1OUof2bDJGyPNuEifwt8rMtIi/ZWitoY2HmfNu83atSah9naP5IdjSNudV+7Ql8jSbJR/1yVv7q1jSw/Nudntv5S7b2b7kR/lVflX5/mrbs/OW42Q8GNfvNWLpc0Eiqk53D73mbt33f71aGm3jqxSaZmVnXymb+LdXqYWn7tjixFT2h1mjtDAq7HVnk+Z1X/wBCrodP/eN591P5p/gXb91a5XTIf3weHcjr9xf7y10+jhGjimvN29m+6v3f92vZpRsebUlLmOk0u3j8tJtkmF+aJdu5f++a7bS9JSFQ80Hz/eRf/ZawPDtv5ePveUzfdrvdFsZvJSZ7bc/3tsn3lWvWjLlgedUlyy+I0dB0aH7OgSHbuTczf3a6jS/Du6GN9m1G+VGqHR7PzrdZprVtkbbEWP8Airo4YflRMYaN925l+b/gVOXvHB7b3/eM06T9n3vZvDjymXc38VZmoaPbRj7UiLtX+H71dZcRQx437f3ibkk/hrB1y3SSFvJTcscu5lhbb97+KuapH+U1jI4LxJY+dc7PszOFRfvf3q4TxBavI33MPI7I275WjavSvEVrNayS/ufnV1Z23fw1xPim1eaR4d7M0ibtuyuSpHmOynU5jzLWLGaVZY3RUaP5P9n/AL6rifEGk/KyTP5qL99v4a9M1uzht5tm/wCdkZmVv7tcR4mtftDP50zbFTYy/wDLNl/vK1cVSmdlOfNqeb+ItPtprd0S2w6/c/hauA8TWd8sif6vP8a7/wDx6vT9YtXknR4fmC/JFI1ed+KLt/OmuXXYZPlRY1/8erklT5Trp1u5wOqNMpZ
PJbfGjfNWbHLDNJsT7jPtfcvzM1aurRzLfTHf8uzcjVhySJu2ONh+8y7/ALtYcsTs5ubSJK139lbKDKL/AA7KpXExt2/fJuWT+Gp2mRlPk/xfc3feqvcXG1vkdSNu1mVfm/3a1jHm94mUo/aIjJDJuROv3VVYqj84s0qTOo2/dWP7q/8A2VK0jxqzvDtl/vb9vy1XaSHc3z79v+xV8vKc3NGXulkSQts86ZX/AIW8xa0bVg0LvsUFlXZtrKsV8vYnk/J97a3zbquRsm4um3YyfdX+GoqR6m1Hnjubmnt5cxSZNrbf4k3ba6zQ5PJji3+WX2/e/wDsa4rT7r/SNkjq3y/Pu/irY03UpFXzJtqfN/47XPKPKdsZfZPU/Dd5DHGzzTLj7vy/K3+zXUaLeTLai2+VTt+eRfu15r4f1ZAqPv3xx/6rzH+9XT6XrzyOiJ5abX+fd/FVUfdCpI7WOZIdjwwq5VdsrRp8y/3q/Rb/AIJTO0n7M127Rhc+K7rgDH/LG3r80V1BJIkR5maRfm+VvlX/AIDX6T/8Elpzcfsv3jsCGHi27DZ9fJt6/LfGx34Hf/XyH6mNOUZVND82fFyxL4m1AWiKR9vlYt91vvmuY1aTczwpNuX+838VdJ47mhbxBeP9pZNt7Kqtt/2zXN3iiSPehX938qqzfer9nw8v3MfRHjVInP30n2e8TyfkZv733ayppIbiR0eFVDPuZf71bGuKlw7onKKu6KSNN3/Aawry3TzA6W2z+Hdu/vV6MZRkccvd+IydXiTzmfzP726ue1Rvv+ZwPK+7t/irodQhSFpk+V/n+9XN3yzSRzP8u9m+ZVrKUvcKp83OYOoqjRrMg3t/Dt/u1QRFaP5E+Vv4mq7cRwsrQzQsu3/a2qtUr1oY1CRuqovy7f4a4K38qO6nyc4+1/cybn/v7V/2q07e9eGP+983zf7NZDSJHbpJ0C/w/wB2pI7ry5Dvfn/ZSvNqU4bHq4eodRpupQsm/wA5tv8Adata31CGaHY02U3/ALpY/lZq5KxvE2qk3yH/AH/vVcs9R86bzpvk/hirzamHj0PYw+Il8J1KyHyXkSZcqmx/MpVuvLUJcopZk/0hVrCk1ab5387G37n8VTpM7fOny7k+833t1c0qfKdMa3NLlLVzdJhfNhwu/wCTb/DUU0t5JLshhXCoy/M6ru/4DTFm+ZE+1Kz/AHtv8NR+TdSM291+b5kVfutUcsfiOiNSXwxM7VIQyrM6MpZ/97b/ALNYeoafCrfO+FrpLi3eP/l53f3qyp7dGmaHyV2yfLuauqn7sDnrcn2jm76yeNTv2rt+7WfcWqSf3WP95q3byO2ZWdHZh93/AIFVBrEyTP8A39v8SfLXpYfmlqeTiDGmsYXDbnZtvy1nXVq43b/ur93dXRTWflx/cZv93+KtLQ/hf4h8XahbW2m6bM/2j/VKsW6vQp+R4uI92JwkOh3moXwtrOGRnkfaixpur6t/YF/4Jb/Fr9r7xtDoOm2EkNlC6tqWpNas3kx/e/dr/FJ/s19N/wDBNz/gjzefFjxVpU3jOwupHaf9/b+Q0EUK/e3SSN/eX+7X7b/Dn9nnwZ8J9BT4Y/BCw0vwpp0Ngtkl1ptv/pLN/wAtJt395v71dNTFRpw0PErSnUl/dPze+B//AASd+A/wd1S2TWPDV9qOsQzqmm6LdWH2m5mZfvNJHH8sf/Aq+uLXwD4k+GetSX7+NrHwfdw6WsVloui6XDJdLGq7lVY41ZlZmr1T4qeF4f2bvBaW3gDWLfQra+vP+Ko8d65ceZcxx/xLBu+ZpGrxyL9v/wCFeg+E/EkP7NnhCd/EFlEyweLvEumbvtG370237zLXFUxXNLlZdPD8vvI5fwr8UE+D91qXjb4/fDy81WXUvm03UvHF6ts8jbvurD95v++a7vwX/wAFNP8AgnnovgtbbWjpqazMzLcaboejSSLbyL/C0lfEXi34YfFb9p/xtF4++K3jjUPEOo3tuzT6pNbybY42b5VhjX5VX/dqprn7Er/BbxlpGt23wc8VeKtLjt1nnt5L/wCw/arjd93d/d/9CrncKkpe4+U0TpU9z7A8Uf8ABQ34FXes2vie5+IXgl9NUtDB4dvtJVGjbd8rS3Mq/NXC/FL47eJNcmmh8K+OfCOo6Rq22WLTdDZZFt938LMteM/EDwWnxc8Nv4V1j9jzQfDdtJLHturrxD9p8v8A2fu/erX8K/sY/Ejwr8PLPXdBh8F21nYzt5tro8reb5f8O5qy96UV7xMow+Jm58MdF/aO0fWP+EttvBOn30Cy/wCi3FrKrK393crfxV3mg/Ej4naa9zqvxX/Zj1TxHaXXmNPqEflvJ5f95VX5dq14/pf7Q2sfD9pfCvjPVY8W8u6KO1l3Ku2vdfgD+258HNeuLawh1W8QL/x+QyReWtZ+2UQlRlKMXE8y+K3/AAT1/YA/buhv5ktLjwl4kvLNks75bVoJ7ebb8vmf8Cr8yPj9/wAEr/2t/wBmfxlc+FdV+G8nirSmlb+zvEGnwSNHcQr96Rm2/LX7r+LPiZ+yv8QtaTwro/jDRdM1yN/Numb9w0f8K7pPusy12fw7+GPxI8K+H7m80D4/WfiCHytthb3iLL5i/wB35vl211RxMakbTd0OjWr4efun8ynjj4HPoumpqtnbXFvcxytFe6beOqvHtX7yr97bXAR2cMbfI6vX9Gf7c37HvwW+PHg26vPiL8KNJ0PxDNAyweJtBSOLcyr91lX7zV+Mf7Y37HOm/BHXnv8Awf4kj1K0js/Nlt2XbPG38W5VrKtTpSjzQZ9Bl+cc0+SZ85NZ9Ajrhfv7V+aljtZFWQOjfM23/eq15aSMuy22/wAT7qmhs3j2zfKSv8Neb8Punv8AN7Qz5I3WMJ5PzN/eWq7Wf7nZ5C5V/vba12jmRRNH93fVC+V45nm2b93+d1bRjORn7SEdjEvI3be7w/d/i3ferH1BflE3ksvybdtb2oRpIvlx8Lv27t1YeoRzRspPI/3q6acffOKtWnuc5ffvH8x3xt+Xb/FWJfrM0iIUXDbvu/w1t30b7t6f99VhX3neWybNy/3a9GnE8utWl9oo3XkhTx/wKqUknzEbPm/2at3SptG9/vJ92qsm+OT5ErrjE45S94rsu1N3zV7v/wAEt0j/AOHjfwOGcMPinox2/wDb3HXhEnmN8j7m/i2/3a97/wCCWiSD/go38DzJ1/4Wfo3/AKVx15mfR/4QcX/16qf+ksKcv3i9T9OP+DriNW8F/BFmcjbq+u9P+uVlX49aXDtZdiV+xH/B1r/yJPwTOAcavrvX/rlZV+POkxbfnmRt+3+Fq+E8G438PMJ61P8A07M0xv8AHl8jqdFby2RtmNv92u00GSGaETBG37vkb/Z/2q5DR2RlVP73zP8A3q6zQY9sY8ng7l2s3zV+mfEc3wnV6CiLceSm1tzL97+GumsY48+Sk3DS/J8lcxo7PMzOZlPzfdrp9N/0hTCkLbt/yQ/3anm5dSJROh023hkVHR93+7WtZQzPMPuhI2Zoo1/i/wB6sfS4cSDfMy7fm2
1r2Mn3pt7P8u5Pl+ao5uYy5v5S/b2qRt587/Mzfw/N8tXVuBG2/r8i7I9vy1Vt7qF1Mexj/srT5ZPsrGNUUI3zO2+tB9feG6hef65JkVN0X3l/9B21kXghjt08maQ7fl87+JmqxeJscTbN25PnWs5mud3nR+Wrfd8tX+aiX8o6cRtxcvIzzTplvvP/AHajmuniVX8/d8+1F+9taqWrNJ9yG9Vk2K0qr/D/ALNUpJk8nfvYj7q/PUylCR7eDjCP+I1LrUEZh522ZY5f7n3mrN8xI1xczLuZ/kbZ/DVaS4huMWwfaFl3bWqT7c7WzPcp8jPtRdlcMv7p9nl9ToynqGySTzkfA3fPJ/s/3Wrm9cjjkVURG2K3/Aq1tUuka1LpMyI33t1YGsXzxhl6r/G1YxjM96nL7MjmJLza7J5LEfwrvqpLcpG331z/ABU6TYshSGbO37+3+9Va+kxhP4dv3ttTzR5j809nLoPXUEjZZvmYrV231JJFVHmXfWM11D5gkd9ir/CtCrumWSF5Bt+b5X+9WVSnSkzeMeY6NdUdsIVbLf3fmrXs9QTam99jNtV1/wDsa5C3ldbpXd2T+JdrVs29xNFIv2xN7fd8yuKtgYSlzROvD1PZzOvt9WSGRXmfytvy/L826tzT9Q8xVmjvPm+Vf+A1xWn3CSTN8jOv/LJq3bOY7kf+8/8Au7Vrn+pwjt8R308T/MdVYyIqmaaFiZJdqM33flrUs7h1ZP4d3zbvvbV/irm9OWaSN0hufvS/LIv3a39Nkm2In8O3a6qtdmHpx6mntJcvul+ORIWV4oWO5tvmR/8As1WZLWRo2fyVDr8zSL/dqKxYKpm+Xb/dZ6nmWaOR3Td93+J/l217mHjynBOpLdle6V45Bsdfl+Z91VLi4S3he5m/dJ/FI33aLq4zC1zM8avGzIix/erJurrGUmm2p93a1epE82pL3veI5LgSb385n8z/AMeqltmZn5jy3ypQ29F/cvH/ALy/3acZbZsbH2t/erWXLyijIAsPlpsfZ/Dub7rNVfUIYY2Te6qzJufa9R7YJJGTyfmZ/nVn/wDHqsyW9tcfONr+X8u7+7Xk1viPSoVJSjqU43RX2b/9zb92nrHbRyJ8+xmX/Vs9NhVPOVNjMP4Pl+9SzbGkKbMp/dWvBxEff909vD1Pdvylq3Z/suxBj73m7n+Zqt+T5axPD5n3drL/AA/99VUt4MbEe2Zvl27lrVjhSS4W2+ZNsTfe/wDQa8ytKPKenSl/MJb7FZX3t975WV6nFuiyfvrpiq/ekb+KnW6+XH/pPmfL/wCPU4Q/ZlDu6n+H7n8Nc/NH7I5SiTtHItuqW25H+Vnk/hbbVuGRGISdFDb9ztH91lqCxhmmiTH+p+8u7+GrUcaGPe7sG+78q/K1S5cxxycox2NTT1DM8zu2zerr/d3f3q17G5mmkZ3O3y/l3bKxrdkjaGKBMP8A3d/3lrYhuPMkeFPmZvutu+ZamTnzbCj7sbl2OZJo/tTwbRH8v+9/u1BJH9gszDePuH3k8tNvy/3auxqkalEmkTbKreXJ/DVS4XdOYZpJGH3vmeojLm90nl9pqE0iLGHSbYFXe6/xbqz9UmhkZEgTYm35mhX5t1SboW82bZuTdtXbVS4s5FU+TuaRl2r5f3WrpwseWXMY1/ep8sSjfbLlftLOpVdu9mqpdQzOskybf7r/AMXy1ptHMqtbQoodYt37z7rbv4qimt5rdSfOU+Z8vy17+H5payPDxEeWVjGa3hkUSOi7f7u+qclrD5L7E2fPu3fwtWvcQpOvnfxf3v7u2s+8uvtEKw23z+X8zKy16EThqUeaJnLa+W0SeT80n/AlpbVYZJN8Lqw3bPMb5fLpWVPMZ0fCt8v/AAKooJn3CFyqtu3OzVv9g5pU+XUtTW8Cr5KOx3f+PUn2NPLR0m3f7LVHDJNHceTszt+ZWZPlb/ZqxYq7I0Mm3LfcrGpGXIVR92VmOaxS4hKeTtX7yrVC7h/5bb13/wB5mrVhZ4Yy8fKM+35qo3UaKwg+bNebzSjN3O7l+EzJNPuURH3r/Eu1vmZf9qobjT03eT91W/hati3t38xfOTb833VXdup4s3kd/wBypjj+Vfk+XbU/asayj7vMYkeh7lCW3l7NzNtk/h/3a0tN0GFZVQPkrtaL5PmatRbFFbe6f65fk3Lu8utG1sPL2bIW/hXbXVHmjseXiI8xW0vQ3tcXUyM7feXb8u1q2NH0lFm/fbl8z7/yfdqXTdPTcUhg+ddzfM3y7a2NNs0kMW+FmRfl271+b/er0qPunj1o8u5e0iytmmSZEjdI/l3fw7a7fw/bIsavb/KrfN8vzbax9NsYY9qW3zwrEvzbN3zf3a7Dw/Y23mwvB8oZfm/2q9OPwHmVpf3jqNDWbzFMyM6+b8m6u10W3hijMKQsr7P975a5vw6qRyQwvNtZm+Rfvf8AfX92u50+3SCb/j5k3TRbfMVd1
SYMBOL INDEX (213 symbols across 13 files)

FILE: detect.py
  function detect (line 21) | def detect(save_img=False):
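
A usage sketch, not part of the extraction: detect() reads all of its options from argparse, so detect.py is driven from the shell. A typical invocation, assuming the YOLOv5 v3-era flag names and an illustrative checkpoint path:

    python detect.py --weights weights/best.pt --source data/samples --img-size 640 --conf-thres 0.4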

FILE: hubconf.py
  function create (line 17) | def create(name, pretrained, channels, classes):
  function yolov5s (line 46) | def yolov5s(pretrained=False, channels=3, classes=80):
  function yolov5m (line 60) | def yolov5m(pretrained=False, channels=3, classes=80):
  function yolov5l (line 74) | def yolov5l(pretrained=False, channels=3, classes=80):
  function yolov5x (line 88) | def yolov5x(pretrained=False, channels=3, classes=80):
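
A usage sketch, not part of the extraction: hubconf.py's own docstring points at PyTorch Hub, so the constructors above can be loaded remotely or called directly. The repo string follows the usual user/repo Hub convention, and classes=16 matches the nc: 16 in this repo's model yamls; both are assumptions about the call site, not code from the repo.

    import torch

    # calls the yolov5s() entry point defined in hubconf.py
    model = torch.hub.load('KevinMuyaoGuo/yolov5s_for_satellite_imagery', 'yolov5s',
                           pretrained=False, channels=3, classes=16)
    model.eval()

    # equivalent local call, bypassing the Hub machinery:
    # from hubconf import yolov5s
    # model = yolov5s(pretrained=False, channels=3, classes=16)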

FILE: models/common.py
  function autopad (line 8) | def autopad(k, p=None):  # kernel, padding
  function DWConv (line 15) | def DWConv(c1, c2, k=1, s=1, act=True):
  class Conv (line 20) | class Conv(nn.Module):
    method __init__ (line 22) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in,...
    method forward (line 28) | def forward(self, x):
    method fuseforward (line 31) | def fuseforward(self, x):
  class Bottleneck (line 35) | class Bottleneck(nn.Module):
    method __init__ (line 37) | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_ou...
    method forward (line 44) | def forward(self, x):
  class BottleneckCSP (line 48) | class BottleneckCSP(nn.Module):
    method __init__ (line 50) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ...
    method forward (line 61) | def forward(self, x):
  class SPP (line 67) | class SPP(nn.Module):
    method __init__ (line 69) | def __init__(self, c1, c2, k=(5, 9, 13)):
    method forward (line 76) | def forward(self, x):
  class Focus (line 81) | class Focus(nn.Module):
    method __init__ (line 83) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in,...
    method forward (line 87) | def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
  class Concat (line 91) | class Concat(nn.Module):
    method __init__ (line 93) | def __init__(self, dimension=1):
    method forward (line 97) | def forward(self, x):
  class Flatten (line 101) | class Flatten(nn.Module):
    method forward (line 104) | def forward(x):
  class Classify (line 108) | class Classify(nn.Module):
    method __init__ (line 110) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, k...
    method forward (line 116) | def forward(self, x):
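
A usage sketch, not part of the extraction (assumes the repo root is on PYTHONPATH; shapes are illustrative): these are the building blocks that parse_model in models/yolo.py assembles into a network. Focus is a space-to-depth step, so its internal Conv sees 4x the input channels at half the resolution; autopad picks the 'same' padding for a given kernel size.

    import torch
    from models.common import autopad, Conv, Focus

    assert autopad(3) == 1                 # k=3 -> p=1 ('same' padding)

    x = torch.zeros(1, 3, 64, 64)
    focus = Focus(3, 32, k=3)              # ch_in=3, ch_out=32
    y = focus(x)                           # -> (1, 32, 32, 32)
    conv = Conv(32, 64, k=3, s=2)          # Conv2d + BatchNorm2d + activation
    z = conv(y)                            # -> (1, 64, 16, 16)
    print(y.shape, z.shape)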

FILE: models/experimental.py
  class CrossConv (line 11) | class CrossConv(nn.Module):
    method __init__ (line 13) | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
    method forward (line 21) | def forward(self, x):
  class C3 (line 25) | class C3(nn.Module):
    method __init__ (line 27) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ...
    method forward (line 38) | def forward(self, x):
  class Sum (line 44) | class Sum(nn.Module):
    method __init__ (line 46) | def __init__(self, n, weight=False):  # n: number of inputs
    method forward (line 53) | def forward(self, x):
  class GhostConv (line 65) | class GhostConv(nn.Module):
    method __init__ (line 67) | def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out,...
    method forward (line 73) | def forward(self, x):
  class GhostBottleneck (line 78) | class GhostBottleneck(nn.Module):
    method __init__ (line 80) | def __init__(self, c1, c2, k, s):
    method forward (line 89) | def forward(self, x):
  class MixConv2d (line 93) | class MixConv2d(nn.Module):
    method __init__ (line 95) | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
    method forward (line 113) | def forward(self, x):
  class Ensemble (line 117) | class Ensemble(nn.ModuleList):
    method __init__ (line 119) | def __init__(self):
    method forward (line 122) | def forward(self, x, augment=False):
  function attempt_load (line 132) | def attempt_load(weights, map_location=None):
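
A usage sketch, not part of the extraction (the checkpoint paths are placeholders): attempt_load is the loader used by detect.py and test.py. Given one weights path it returns a single model; given a list it wraps the models in Ensemble, whose forward() merges the per-model predictions.

    import torch
    from models.experimental import attempt_load

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = attempt_load('weights/best.pt', map_location=device)   # single checkpoint
    # ensemble = attempt_load(['weights/best.pt', 'weights/last.pt'], map_location=device)
    model.eval()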

FILE: models/yolo.py
  class Detect (line 19) | class Detect(nn.Module):
    method __init__ (line 23) | def __init__(self, nc=80, anchors=(), ch=()):  # detection layer
    method forward (line 35) | def forward(self, x):
    method _make_grid (line 56) | def _make_grid(nx=20, ny=20):
  class Model (line 61) | class Model(nn.Module):
    method __init__ (line 62) | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None):  # model, input...
    method forward (line 95) | def forward(self, x, augment=False, profile=False):
    method forward_once (line 115) | def forward_once(self, x, profile=False):
    method _initialize_biases (line 140) | def _initialize_biases(self, cf=None):  # initialize biases into Detec...
    method _print_biases (line 149) | def _print_biases(self):
    method fuse (line 160) | def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
    method info (line 171) | def info(self, verbose=False):  # print model information
  function parse_model (line 175) | def parse_model(d, ch):  # model_dict, input_channels(3)
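
A usage sketch, not part of the extraction (run from the repo root so the yaml path resolves): Model builds the network by handing the parsed yaml to parse_model, with Detect appended as the final layer. In eval mode Detect returns a tuple of (decoded predictions, raw feature maps).

    import torch
    from models.yolo import Model

    model = Model(cfg='models/yolov5s.yaml', ch=3, nc=16)  # nc=16 matches this repo's yamls
    model.eval()
    img = torch.zeros(1, 3, 640, 640)
    with torch.no_grad():
        pred = model(img)
    print(pred[0].shape)                                   # (1, n_anchors_total, 5 + nc)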

FILE: sotabench.py
  function test (line 27) | def test(data,

FILE: test.py
  function test (line 21) | def test(data,

FILE: train.py
  function train (line 36) | def train(hyp, opt, device, tb_writer=None):
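
A usage sketch, not part of the extraction: like detect.py, train.py is argparse-driven, and train(hyp, opt, device) receives the parsed options plus a hyperparameter dict. The extraction contains no dataset yaml, so the --data path below is a placeholder you must supply; the flag names assume the YOLOv5 v3-era CLI.

    python train.py --img-size 640 --batch-size 16 --epochs 50 --data path/to/dataset.yaml --cfg models/yolov5s.yaml --weights '' --device 0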

FILE: utils/activations.py
  class Swish (line 7) | class Swish(nn.Module):  #
    method forward (line 9) | def forward(x):
  class Hardswish (line 13) | class Hardswish(nn.Module):  # export-friendly version of nn.Hardswish()
    method forward (line 15) | def forward(x):
  class MemoryEfficientSwish (line 20) | class MemoryEfficientSwish(nn.Module):
    class F (line 21) | class F(torch.autograd.Function):
      method forward (line 23) | def forward(ctx, x):
      method backward (line 28) | def backward(ctx, grad_output):
    method forward (line 33) | def forward(self, x):
  class Mish (line 38) | class Mish(nn.Module):
    method forward (line 40) | def forward(x):
  class MemoryEfficientMish (line 44) | class MemoryEfficientMish(nn.Module):
    class F (line 45) | class F(torch.autograd.Function):
      method forward (line 47) | def forward(ctx, x):
      method backward (line 52) | def backward(ctx, grad_output):
    method forward (line 58) | def forward(self, x):
  class FReLU (line 63) | class FReLU(nn.Module):
    method __init__ (line 64) | def __init__(self, c1, k=3):  # ch_in, kernel
    method forward (line 69) | def forward(self, x):
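
The classes above are thin nn.Module wrappers around point-wise functions, with the MemoryEfficient* variants recomputing the forward pass inside backward() to save activation memory. A functional restatement of the two main definitions (standard formulas, shown here for reference only):

    import torch
    import torch.nn.functional as F

    def swish(x):                # x * sigmoid(x)
        return x * torch.sigmoid(x)

    def mish(x):                 # x * tanh(softplus(x))
        return x * torch.tanh(F.softplus(x))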

FILE: utils/datasets.py
  function get_hash (line 29) | def get_hash(files):
  function exif_size (line 34) | def exif_size(img):
  function create_dataloader (line 49) | def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, au...
  class InfiniteDataLoader (line 75) | class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
    method __init__ (line 81) | def __init__(self, *args, **kwargs):
    method __len__ (line 86) | def __len__(self):
    method __iter__ (line 89) | def __iter__(self):
    class _RepeatSampler (line 93) | class _RepeatSampler(object):
      method __init__ (line 100) | def __init__(self, sampler):
      method __iter__ (line 103) | def __iter__(self):
  class LoadImages (line 108) | class LoadImages:  # for inference
    method __init__ (line 109) | def __init__(self, path, img_size=640):
    method __iter__ (line 137) | def __iter__(self):
    method __next__ (line 141) | def __next__(self):
    method new_video (line 180) | def new_video(self, path):
    method __len__ (line 185) | def __len__(self):
  class LoadWebcam (line 189) | class LoadWebcam:  # for inference
    method __init__ (line 190) | def __init__(self, pipe=0, img_size=640):
    method __iter__ (line 211) | def __iter__(self):
    method __next__ (line 215) | def __next__(self):
    method __len__ (line 250) | def __len__(self):
  class LoadStreams (line 254) | class LoadStreams:  # multiple IP or RTSP cameras
    method __init__ (line 255) | def __init__(self, sources='streams.txt', img_size=640):
    method update (line 288) | def update(self, index, cap):
    method __iter__ (line 300) | def __iter__(self):
    method __next__ (line 304) | def __next__(self):
    method __len__ (line 323) | def __len__(self):
  class LoadImagesAndLabels (line 327) | class LoadImagesAndLabels(Dataset):  # for training/testing
    method __init__ (line 328) | def __init__(self, path, img_size=640, batch_size=16, augment=False, h...
    method cache_labels (line 478) | def cache_labels(self, path='labels.cache'):
    method __len__ (line 504) | def __len__(self):
    method __getitem__ (line 513) | def __getitem__(self, index):
    method collate_fn (line 597) | def collate_fn(batch):
  function load_image (line 605) | def load_image(self, index):
  function augment_hsv (line 622) | def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5):
  function load_mosaic (line 641) | def load_mosaic(self, index):
  function replicate (line 699) | def replicate(img, labels):
  function letterbox (line 716) | def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=Tru...
  function random_perspective (line 749) | def random_perspective(img, targets=(), degrees=10, translate=.1, scale=...
  function box_candidates (line 836) | def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1):  # bo...
  function cutout (line 844) | def cutout(image, labels):
  function reduce_img_size (line 890) | def reduce_img_size(path='path/images', img_size=1024):  # from utils.da...
  function recursive_dataset2bmp (line 907) | def recursive_dataset2bmp(dataset='path/dataset_bmp'):  # from utils.dat...
  function imagelist2folder (line 927) | def imagelist2folder(path='path/images.txt'):  # from utils.datasets imp...
  function create_folder (line 936) | def create_folder(path='./new'):
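
A usage sketch, not part of the extraction (assumes opencv-python and an illustrative image path): letterbox is the preprocessing step used before inference. It resizes while preserving aspect ratio, then pads the borders with grey (114, 114, 114) so the result matches the target shape.

    import cv2
    import numpy as np
    from utils.datasets import letterbox

    img0 = cv2.imread('data/samples/example.jpg')      # BGR, HxWx3
    img, ratio, (dw, dh) = letterbox(img0, new_shape=(640, 640))

    # to a normalized CHW float array, as detect.py does before torch.from_numpy
    img = img[:, :, ::-1].transpose(2, 0, 1)           # BGR -> RGB, HWC -> CHW
    img = np.ascontiguousarray(img, dtype=np.float32) / 255.0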

FILE: utils/general.py
  function torch_distributed_zero_first (line 39) | def torch_distributed_zero_first(local_rank: int):
  function set_logging (line 50) | def set_logging(rank=-1):
  function init_seeds (line 56) | def init_seeds(seed=0):
  function get_latest_run (line 62) | def get_latest_run(search_dir='./runs'):
  function check_git_status (line 68) | def check_git_status():
  function check_img_size (line 76) | def check_img_size(img_size, s=32):
  function check_anchors (line 84) | def check_anchors(dataset, model, thr=4.0, imgsz=640):
  function check_anchor_order (line 118) | def check_anchor_order(m):
  function check_file (line 129) | def check_file(file):
  function check_dataset (line 139) | def check_dataset(dict):
  function make_divisible (line 159) | def make_divisible(x, divisor):
  function labels_to_class_weights (line 164) | def labels_to_class_weights(labels, nc=80):
  function labels_to_image_weights (line 183) | def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
  function coco80_to_coco91_class (line 192) | def coco80_to_coco91_class():  # converts 80-index (val2014) to 91-index...
  function xyxy2xywh (line 204) | def xyxy2xywh(x):
  function xywh2xyxy (line 214) | def xywh2xyxy(x):
  function scale_coords (line 224) | def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
  function clip_coords (line 240) | def clip_coords(boxes, img_shape):
  function ap_per_class (line 248) | def ap_per_class(tp, conf, pred_cls, target_cls):
  function compute_ap (line 311) | def compute_ap(recall, precision):
  function bbox_iou (line 340) | def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=Fal...
  function box_iou (line 385) | def box_iou(box1, box2):
  function wh_iou (line 410) | def wh_iou(wh1, wh2):
  class FocalLoss (line 418) | class FocalLoss(nn.Module):
    method __init__ (line 420) | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
    method forward (line 428) | def forward(self, pred, true):
  function smooth_BCE (line 448) | def smooth_BCE(eps=0.1):  # https://github.com/ultralytics/yolov3/issues...
  class BCEBlurWithLogitsLoss (line 453) | class BCEBlurWithLogitsLoss(nn.Module):
    method __init__ (line 455) | def __init__(self, alpha=0.05):
    method forward (line 460) | def forward(self, pred, true):
  function compute_loss (line 470) | def compute_loss(p, targets, model):  # predictions, targets, model
  function build_targets (line 533) | def build_targets(p, targets, model):
  function non_max_suppression (line 590) | def non_max_suppression(prediction, conf_thres=0.1, iou_thres=0.6, merge...
  function strip_optimizer (line 672) | def strip_optimizer(f='weights/best.pt', s=''):  # from utils.general im...
  function coco_class_count (line 686) | def coco_class_count(path='../coco/labels/train2014/'):
  function coco_only_people (line 697) | def coco_only_people(path='../coco/labels/train2017/'):  # from utils.ge...
  function crop_images_random (line 706) | def crop_images_random(path='../images/', scale=0.50):  # from utils.gen...
  function coco_single_class_labels (line 729) | def coco_single_class_labels(path='../coco/labels/train2014/', label_cla...
  function kmean_anchors (line 751) | def kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640, thr=4.0...
  function print_mutation (line 850) | def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
  function apply_classifier (line 881) | def apply_classifier(x, model, img, im0):
  function fitness (line 916) | def fitness(x):
  function output_to_target (line 922) | def output_to_target(output, width, height):
  function increment_dir (line 944) | def increment_dir(dir, comment=''):
  function hist2d (line 955) | def hist2d(x, y, n=100):
  function butter_lowpass_filtfilt (line 964) | def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
  function plot_one_box (line 976) | def plot_one_box(x, img, color=None, label=None, line_thickness=None):
  function plot_wh_methods (line 990) | def plot_wh_methods():  # from utils.general import *; plot_wh_methods()
  function plot_images (line 1011) | def plot_images(images, targets, paths=None, fname='images.jpg', names=N...
  function plot_lr_scheduler (line 1094) | def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
  function plot_test_txt (line 1111) | def plot_test_txt():  # from utils.general import *; plot_test()
  function plot_targets_txt (line 1128) | def plot_targets_txt():  # from utils.general import *; plot_targets_txt()
  function plot_study_txt (line 1141) | def plot_study_txt(f='study.txt', x=None):  # from utils.general import ...
  function plot_labels (line 1173) | def plot_labels(labels, save_dir=''):
  function plot_evolution (line 1205) | def plot_evolution(yaml_file='data/hyp.finetune.yaml'):  # from utils.ge...
  function plot_results_overlay (line 1229) | def plot_results_overlay(start=0, stop=0):  # from utils.general import ...
  function plot_results (line 1252) | def plot_results(start=0, stop=0, bucket='', id=(), labels=(),
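
A usage sketch, not part of the extraction (values are illustrative): the box-format converters and bbox_iou are among the most frequently used helpers here. xyxy2xywh maps corner coordinates to center/width/height, and the GIoU/DIoU/CIoU flags on bbox_iou select the generalized variants that compute_loss uses.

    import torch
    from utils.general import xyxy2xywh, xywh2xyxy, bbox_iou

    b = torch.tensor([[10., 20., 50., 80.]])           # x1, y1, x2, y2
    assert torch.equal(xyxy2xywh(b), torch.tensor([[30., 50., 40., 60.]]))
    assert torch.equal(xywh2xyxy(xyxy2xywh(b)), b)     # exact round trip here

    iou = bbox_iou(b[0], torch.tensor([[20., 30., 60., 90.]]), x1y1x2y2=True)
    print(float(iou))                                  # plain IoU of the two boxes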

FILE: utils/google_utils.py
  function gsutil_getsize (line 14) | def gsutil_getsize(url=''):
  function attempt_download (line 20) | def attempt_download(weights):
  function gdrive_download (line 56) | def gdrive_download(id='1n_oKgR81BJtqk75b00eAjdv03qVCQn2f', name='coco12...
  function get_token (line 90) | def get_token(cookie="./cookie"):
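
attempt_download fetches a known checkpoint when the file is missing locally; weights/download_weights.sh (previewed below) uses it exactly this way. A minimal call:

    from utils.google_utils import attempt_download

    attempt_download('weights/yolov5s.pt')   # no-op if the file already exists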

FILE: utils/torch_utils.py
  function init_seeds (line 16) | def init_seeds(seed=0):
  function select_device (line 28) | def select_device(device='', batch_size=None):
  function time_synchronized (line 55) | def time_synchronized():
  function is_parallel (line 60) | def is_parallel(model):
  function intersect_dicts (line 64) | def intersect_dicts(da, db, exclude=()):
  function initialize_weights (line 69) | def initialize_weights(model):
  function find_modules (line 81) | def find_modules(model, mclass=nn.Conv2d):
  function sparsity (line 86) | def sparsity(model):
  function prune (line 95) | def prune(model, amount=0.3):
  function fuse_conv_and_bn (line 106) | def fuse_conv_and_bn(conv, bn):
  function model_info (line 131) | def model_info(model, verbose=False):
  function load_classifier (line 153) | def load_classifier(name='resnet101', n=2):
  function scale_img (line 174) | def scale_img(img, ratio=1.0, same_shape=False):  # img(16,3,256,416), r...
  function copy_attr (line 188) | def copy_attr(a, b, include=(), exclude=()):
  class ModelEMA (line 197) | class ModelEMA:
    method __init__ (line 207) | def __init__(self, model, decay=0.9999, updates=0):
    method update (line 217) | def update(self, model):
    method update_attr (line 229) | def update_attr(self, model, include=(), exclude=('process_group', 're...
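
A usage sketch, not part of the extraction (the model and loop are stand-ins): ModelEMA keeps an exponential moving average of the weights, which train.py updates after each optimizer step and then evaluates and checkpoints; fuse_conv_and_bn folds BatchNorm into the preceding convolution, which is what Model.fuse() applies layer by layer.

    import torch.nn as nn
    from utils.torch_utils import ModelEMA

    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
    ema = ModelEMA(model, decay=0.9999)
    for step in range(3):
        # ... forward / backward / optimizer.step() would happen here ...
        ema.update(model)        # blend current weights into the running average
    ema_model = ema.ema          # the averaged copy used for evaluation
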
Condensed preview — 33 files, each showing path, character count, and a content snippet (the full structured content is 3,558K chars).
[
  {
    "path": ".dockerignore",
    "chars": 3602,
    "preview": "# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------"
  },
  {
    "path": ".gitattributes",
    "chars": 75,
    "preview": "# this drop notebooks from GitHub language stats\n*.ipynb linguist-vendored\n"
  },
  {
    "path": ".gitignore",
    "chars": 3825,
    "preview": "# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------"
  },
  {
    "path": "Dockerfile",
    "chars": 1804,
    "preview": "# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch\nFROM nvcr.io/nvidia/pytorch:2"
  },
  {
    "path": "LICENSE",
    "chars": 35126,
    "preview": "GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation,"
  },
  {
    "path": "README.md",
    "chars": 12400,
    "preview": "# 基于 YOLOv5 的卫星图像目标检测 \n\n\n\n## 1 数据准备\n\n\n\n### 1.1 数据集采集\n\n本项目使用 DOTA 数据集,数据集链接:[下载地址](https://captain-whu.github.io/DOTA/dat"
  },
  {
    "path": "detect.py",
    "chars": 7642,
    "preview": "import argparse\nimport os\nimport platform\nimport shutil\nimport time\nfrom pathlib import Path\n\nimport cv2\nimport torch\nim"
  },
  {
    "path": "hubconf.py",
    "chars": 3358,
    "preview": "\"\"\"File for accessing YOLOv5 via PyTorch Hub https://pytorch.org/hub/\n\nUsage:\n    import torch\n    model = torch.hub.loa"
  },
  {
    "path": "models/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "models/common.py",
    "chars": 4281,
    "preview": "# This file contains modules common to various models\nimport math\n\nimport torch\nimport torch.nn as nn\n\n\ndef autopad(k, p"
  },
  {
    "path": "models/experimental.py",
    "chars": 5553,
    "preview": "# This file contains experimental modules\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom models.common imp"
  },
  {
    "path": "models/export.py",
    "chars": 3374,
    "preview": "\"\"\"Exports a YOLOv5 *.pt model to ONNX and TorchScript formats\n\nUsage:\n    $ export PYTHONPATH=\"$PWD\" && python models/e"
  },
  {
    "path": "models/hub/yolov3-spp.yaml",
    "chars": 1530,
    "preview": "# parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channe"
  },
  {
    "path": "models/hub/yolov5-fpn.yaml",
    "chars": 1244,
    "preview": "# parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channe"
  },
  {
    "path": "models/hub/yolov5-panet.yaml",
    "chars": 1458,
    "preview": "# parameters\nnc: 80  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channe"
  },
  {
    "path": "models/yolo.py",
    "chars": 11535,
    "preview": "import argparse\nimport logging\nimport math\nfrom copy import deepcopy\nfrom pathlib import Path\n\nimport torch\nimport torch"
  },
  {
    "path": "models/yolov5l.yaml",
    "chars": 1453,
    "preview": "# parameters\nnc: 16  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channe"
  },
  {
    "path": "models/yolov5m.yaml",
    "chars": 1455,
    "preview": "# parameters\nnc: 16  # number of classes\ndepth_multiple: 0.67  # model depth multiple\nwidth_multiple: 0.75  # layer chan"
  },
  {
    "path": "models/yolov5s.yaml",
    "chars": 1455,
    "preview": "# parameters\nnc: 16  # number of classes\ndepth_multiple: 0.33  # model depth multiple\nwidth_multiple: 0.50  # layer chan"
  },
  {
    "path": "models/yolov5x.yaml",
    "chars": 1455,
    "preview": "# parameters\nnc: 16  # number of classes\ndepth_multiple: 1.33  # model depth multiple\nwidth_multiple: 1.25  # layer chan"
  },
  {
    "path": "requirements.txt",
    "chars": 569,
    "preview": "# pip install -r requirements.txt\n\n# base ----------------------------------------\nCython\nmatplotlib>=3.2.2\nnumpy>=1.18."
  },
  {
    "path": "sotabench.py",
    "chars": 14354,
    "preview": "import argparse\nimport glob\nimport json\nimport os\nimport shutil\nfrom pathlib import Path\n\nimport numpy as np\nimport torc"
  },
  {
    "path": "test.py",
    "chars": 13620,
    "preview": "import argparse\nimport glob\nimport json\nimport os\nimport shutil\nfrom pathlib import Path\n\nimport numpy as np\nimport torc"
  },
  {
    "path": "train.py",
    "chars": 27722,
    "preview": "import argparse\nimport glob\nimport logging\nimport math\nimport os\nimport random\nimport shutil\nimport time\nfrom pathlib im"
  },
  {
    "path": "tutorial.ipynb",
    "chars": 3273734,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"colab_type\": \"text\",\n    \"id\": \"view-in-github\"\n   }"
  },
  {
    "path": "utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "utils/activations.py",
    "chars": 2176,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\n# Swish https://arxiv.org/pdf/1905.02244.pdf ------"
  },
  {
    "path": "utils/datasets.py",
    "chars": 38988,
    "preview": "import glob\nimport math\nimport os\nimport random\nimport shutil\nimport time\nfrom pathlib import Path\nfrom threading import"
  },
  {
    "path": "utils/evolve.sh",
    "chars": 747,
    "preview": "#!/bin/bash\n# Hyperparameter evolution commands (avoids CUDA memory leakage issues)\n# Replaces train.py python generatio"
  },
  {
    "path": "utils/general.py",
    "chars": 53641,
    "preview": "import glob\nimport logging\nimport math\nimport os\nimport platform\nimport random\nimport shutil\nimport subprocess\nimport ti"
  },
  {
    "path": "utils/google_utils.py",
    "chars": 4971,
    "preview": "# This file contains google utils: https://cloud.google.com/storage/docs/reference/libraries\n# pip install --upgrade goo"
  },
  {
    "path": "utils/torch_utils.py",
    "chars": 8991,
    "preview": "import logging\nimport math\nimport os\nimport time\nfrom copy import deepcopy\n\nimport torch\nimport torch.backends.cudnn as "
  },
  {
    "path": "weights/download_weights.sh",
    "chars": 245,
    "preview": "#!/bin/bash\n# Download common models\n\npython -c \"\nfrom utils.google_utils import *;\nattempt_download('weights/yolov5s.pt"
  }
]

About this extraction

This page contains the full source code of the KevinMuyaoGuo/yolov5s_for_satellite_imagery GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 33 files (3.4 MB), approximately 887.0k tokens, and a symbol index with 213 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
