Repository: dlut-dimt/TarDAL
Branch: main
Commit: 6a9edd744b44
Files: 121
Total size: 578.2 KB
Directory structure:
gitextract_5xl4bhv7/
├── .github/
│   └── workflows/
│       └── sync.yml
├── .gitignore
├── CITATION.cff
├── LICENSE
├── README.md
├── assets/
│   └── sample/
│       └── s1/
│           └── meta/
│               └── pred.txt
├── config/
│   ├── __init__.py
│   ├── default.yaml
│   ├── exp/
│   │   ├── i-tardal-dt.yaml
│   │   └── t-tardal-ct.yaml
│   └── official/
│       ├── colab.yaml
│       ├── infer/
│       │   ├── tardal-ct.yaml
│       │   ├── tardal-dt.yaml
│       │   └── tardal-tt.yaml
│       └── train/
│           ├── tardal-ct.yaml
│           ├── tardal-dt.yaml
│           └── tardal-tt.yaml
├── data/
│   └── README.md
├── functions/
│   ├── __init__.py
│   ├── div_loss.py
│   └── get_param_groups.py
├── infer.py
├── loader/
│   ├── __init__.py
│   ├── m3fd.py
│   ├── roadscene.py
│   ├── tno.py
│   └── utils/
│       ├── __init__.py
│       ├── checker.py
│       └── reader.py
├── module/
│   ├── __init__.py
│   ├── detect/
│   │   ├── README.md
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── common.py
│   │   │   ├── experimental.py
│   │   │   ├── hub/
│   │   │   │   ├── anchors.yaml
│   │   │   │   ├── yolov3-spp.yaml
│   │   │   │   ├── yolov3-tiny.yaml
│   │   │   │   ├── yolov3.yaml
│   │   │   │   ├── yolov5-bifpn.yaml
│   │   │   │   ├── yolov5-fpn.yaml
│   │   │   │   ├── yolov5-p2.yaml
│   │   │   │   ├── yolov5-p34.yaml
│   │   │   │   ├── yolov5-p6.yaml
│   │   │   │   ├── yolov5-p7.yaml
│   │   │   │   ├── yolov5-panet.yaml
│   │   │   │   ├── yolov5l6.yaml
│   │   │   │   ├── yolov5m6.yaml
│   │   │   │   ├── yolov5n6.yaml
│   │   │   │   ├── yolov5s-ghost.yaml
│   │   │   │   ├── yolov5s-transformer.yaml
│   │   │   │   ├── yolov5s6.yaml
│   │   │   │   └── yolov5x6.yaml
│   │   │   ├── tf.py
│   │   │   ├── yolo.py
│   │   │   ├── yolov5l.yaml
│   │   │   ├── yolov5m.yaml
│   │   │   ├── yolov5n.yaml
│   │   │   ├── yolov5s.yaml
│   │   │   └── yolov5x.yaml
│   │   ├── requirements.txt
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── activations.py
│   │       ├── augmentations.py
│   │       ├── autoanchor.py
│   │       ├── autobatch.py
│   │       ├── aws/
│   │       │   ├── __init__.py
│   │       │   ├── mime.sh
│   │       │   ├── resume.py
│   │       │   └── userdata.sh
│   │       ├── benchmarks.py
│   │       ├── callbacks.py
│   │       ├── dataloaders.py
│   │       ├── docker/
│   │       │   ├── Dockerfile
│   │       │   ├── Dockerfile-arm64
│   │       │   └── Dockerfile-cpu
│   │       ├── downloads.py
│   │       ├── flask_rest_api/
│   │       │   ├── README.md
│   │       │   ├── example_request.py
│   │       │   └── restapi.py
│   │       ├── general.py
│   │       ├── google_app_engine/
│   │       │   ├── Dockerfile
│   │       │   ├── additional_requirements.txt
│   │       │   └── app.yaml
│   │       ├── loggers/
│   │       │   ├── __init__.py
│   │       │   └── wandb/
│   │       │       ├── README.md
│   │       │       ├── __init__.py
│   │       │       ├── log_dataset.py
│   │       │       ├── sweep.py
│   │       │       ├── sweep.yaml
│   │       │       └── wandb_utils.py
│   │       ├── loss.py
│   │       ├── metrics.py
│   │       ├── plots.py
│   │       └── torch_utils.py
│   ├── fuse/
│   │   ├── __init__.py
│   │   ├── discriminator.py
│   │   └── generator.py
│   └── saliency/
│       ├── __init__.py
│       └── u2net.py
├── pipeline/
│   ├── __init__.py
│   ├── detect.py
│   ├── fuse.py
│   ├── iqa.py
│   ├── saliency.py
│   └── train.py
├── requirements.txt
├── scripts/
│   ├── __init__.py
│   ├── infer_f.py
│   ├── infer_fd.py
│   ├── train_f.py
│   ├── train_fd.py
│   └── utils/
│       └── smart_optimizer.py
├── tools/
│   ├── choose_images.py
│   ├── convert_to_png.py
│   ├── data_preview.py
│   ├── dict_to_device.py
│   ├── environment_probe.py
│   ├── generate_mask.py
│   └── scenario_reader.py
├── train.py
└── tutorial.ipynb
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/sync.yml
================================================
name: Mirror to DUT DIMT

on: [ push, delete, create ]

jobs:
  git-mirror:
    runs-on: ubuntu-latest
    steps:
      - name: Configure Private Key
        env:
          SSH_PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          echo "StrictHostKeyChecking no" >> ~/.ssh/config
      - name: Push Mirror
        env:
          SOURCE_REPO: 'https://github.com/JinyuanLiu-CV/TarDAL.git'
          DESTINATION_REPO: 'git@github.com:dlut-dimt/TarDAL.git'
        run: |
          git clone --mirror "$SOURCE_REPO" && cd `basename "$SOURCE_REPO"`
          git remote set-url --push origin "$DESTINATION_REPO"
          git fetch -p origin
          git for-each-ref --format 'delete %(refname)' refs/pull | git update-ref --stdin
          git push --mirror
================================================
FILE: .gitignore
================================================
# project config file (contain sensitive: server information)
.idea/*
# fuse results (contain images that can be reproduced by given model parameters)
runs/*
# macOS finder file (contain sensitive: local username)
**/.DS_Store
# python cache
**/__pycache__
# experimental data
data/*
!data/README.md
# weights (update by release)
weights/*
# test files
**/test/*
# wandb
wandb/*
================================================
FILE: CITATION.cff
================================================
@inproceedings{liu2022target,
title={Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection},
author={Liu, Jinyuan and Fan, Xin and Huang, Zhanbo and Wu, Guanyao and Liu, Risheng and Zhong, Wei and Luo, Zhongxuan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5802--5811},
year={2022}
}
================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
================================================
FILE: README.md
================================================
# TarDAL
[Open in Colab](https://colab.research.google.com/github/JinyuanLiu-CV/TarDAL/blob/main/tutorial.ipynb)
Jinyuan Liu, Xin Fan*, Zhanbo Huang, Guanyao Wu, Risheng Liu, Wei Zhong, Zhongxuan Luo, **“Target-aware Dual
Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection”**,
IEEE/CVF Conference on Computer Vision and Pattern Recognition **(CVPR)**, 2022. **(Oral)**
- [*[ArXiv]*](https://arxiv.org/abs/2203.16220v1)
- [*[CVPR]*](https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Target-Aware_Dual_Adversarial_Learning_and_a_Multi-Scenario_Multi-Modality_Benchmark_To_CVPR_2022_paper.pdf)
---
<h2> <p align="center"> M3FD Dataset </p> </h2>
### Preview
The preview of our dataset is as follows.
---
### Details
- **Sensor**: A synchronized system containing one binocular optical camera and one binocular infrared sensor. More
details are available in the paper.
- **Main scene**:
  - Campus of Dalian University of Technology.
  - State Tourism Holiday Resort at the Golden Stone Beach in Dalian, China.
  - Main roads in Jinzhou District, Dalian, China.
- **Total number of images**:
  - **8400** (for fusion, detection and fusion-based detection)
  - **600** (independent scene for fusion)
- **Total number of image pairs**:
  - **4200** (for fusion, detection and fusion-based detection)
  - **300** (independent scene for fusion)
- **Format of images**:
  - [Infrared] 24-bit grayscale bitmap
  - [Visible] 24-bit color bitmap
- **Image size**: **1024 x 768** pixels (mostly)
- **Registration**: **All image pairs are registered.** The visible images are calibrated using the internal
  parameters of our synchronized system, and the infrared images are warped with a homography matrix.
- **Labeling**: **34407 labels** have been manually annotated, covering 6 kinds of targets: **{People, Car, Bus,
  Motorcycle, Lamp, Truck}**. (Limited by manpower, some targets may be mislabeled or missed. We would appreciate it
  if you pointed out wrong or missing labels to help us improve the dataset.)
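The label files are plain-text YOLO annotations, one target per line (see Data Preparation below). A minimal, hedged
parsing sketch follows; the class-id order is only an assumption based on the list above, not an official mapping.
```python
# Hypothetical sketch: parse one line of an M3FD YOLO-format label file.
# The class-id order below merely follows the list above (an assumption);
# verify it against the official dataset before relying on it.
M3FD_CLASSES = ['People', 'Car', 'Bus', 'Motorcycle', 'Lamp', 'Truck']

def parse_label_line(line: str):
    cls_id, xc, yc, w, h = line.split()
    # YOLO format: class id, then box center x/y and width/height,
    # all normalized to [0, 1] by the image size
    return M3FD_CLASSES[int(cls_id)], float(xc), float(yc), float(w), float(h)
```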
### Download
- [Google Drive](https://drive.google.com/drive/folders/1H-oO7bgRuVFYDcMGvxstT1nmy0WF_Y_6?usp=sharing)
- [Baidu Yun](https://pan.baidu.com/s/1GoJrrl_mn2HNQVDSUdPCrw?pwd=M3FD)
If you have any questions or suggestions about the dataset, please email [Guanyao Wu](mailto:rollingplainko@gmail.com)
or [Jinyuan Liu](mailto:atlantis918@hotmail.com).
<h2> <p align="center"> TarDAL Fusion </p> </h2>
### Baselines
In our experiments, we used the following **outstanding** works as baselines.
*Note: sorted alphabetically.*
- [AUIF](https://ieeexplore.ieee.org/document/9416456) (IEEE TCSVT 2021)
- [DDcGAN](https://github.com/hanna-xu/DDcGAN) (IJCAI 2019)
- [Densefuse](https://github.com/hli1221/imagefusion_densefuse) (IEEE TIP 2019)
- [DIDFuse](https://github.com/Zhaozixiang1228/IVIF-DIDFuse) (IJCAI 2020)
- [FusionGAN](https://github.com/jiayi-ma/FusionGAN) (Information Fusion 2019)
- [GANMcC](https://github.com/HaoZhang1018/GANMcC) (IEEE TIM 2021)
- [MFEIF](https://github.com/JinyuanLiu-CV/MFEIF) (IEEE TCSVT 2021)
- [RFN-Nest](https://github.com/hli1221/imagefusion-rfn-nest) (Information Fusion 2021)
- [SDNet](https://github.com/HaoZhang1018/SDNet) (IJCV 2021)
- [U2Fusion](https://github.com/hanna-xu/U2Fusion) (IEEE TPAMI 2020)
### Quick Start
If you are mainly curious about the results of the fusion task, we have prepared an online demonstration.
Try our free online preview in [Colab](https://colab.research.google.com/github/JinyuanLiu-CV/TarDAL/blob/main/tutorial.ipynb).
### Set Up on Your Own Machine
When you want to dive deeper or apply TarDAL at a larger scale, you can configure it on your own machine by following the steps below.
#### Virtual Environment
We strongly recommend that you use Conda as a package manager.
```shell
# create virtual environment
conda create -n tardal python=3.10
conda activate tardal
# select pytorch version yourself
# install tardal requirements
pip install -r requirements.txt
# install yolov5 requirements
pip install -r module/detect/requirements.txt
```
#### Data Preparation
Place the data in the following structure:
```
TarDAL ROOT
├── data
| ├── m3fd
| | ├── ir # infrared images
| | ├── vi # visible images
| | ├── labels # labels in txt format (yolo format)
| | └── meta # meta data, includes: pred.txt, train.txt, val.txt
| ├── tno
| | ├── ir # infrared images
| | ├── vi # visible images
| | └── meta # meta data, includes: pred.txt, train.txt, val.txt
| ├── roadscene
| └── ...
```
You can directly download the TNO and RoadScene datasets, already organized in this format, from the links below:
- [Google Drive](https://drive.google.com/drive/folders/1H-oO7bgRuVFYDcMGvxstT1nmy0WF_Y_6?usp=sharing)
- [Baidu Yun](https://pan.baidu.com/s/1GoJrrl_mn2HNQVDSUdPCrw?pwd=M3FD)
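For a custom dataset organized the same way, each `meta/*.txt` file is simply a list of image file names, one per line
(see `assets/sample/s1/meta/pred.txt` later in this dump). A minimal sketch for generating `pred.txt`, assuming `ir/`
and `vi/` hold identically named pairs and using an illustrative `data/custom` path:
```python
# Hedged sketch: build meta/pred.txt for a custom dataset laid out as above.
# Assumes ir/ and vi/ contain identically named image pairs; 'data/custom'
# is an illustrative path, not a dataset shipped with this repo.
from pathlib import Path

root = Path('data/custom')
names = sorted(p.name for p in (root / 'ir').iterdir()
               if p.suffix.lower() in {'.png', '.jpg', '.bmp'})
(root / 'meta').mkdir(parents=True, exist_ok=True)
(root / 'meta' / 'pred.txt').write_text('\n'.join(names) + '\n')
```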
#### Fuse or Eval
In this section, we will guide you through generating fused images with our pre-trained models.
As mentioned in our paper, we provide three pre-trained models.
| Name | Description |
|-----------|-----------------------------------------------------------------|
| TarDAL-DT | Optimized for human vision. (Default) |
| TarDAL-TT | Optimized for object detection. |
| TarDAL-CT | Optimal solution for joint human vision and detection accuracy. |
You can find their corresponding configuration files in [config/official/infer](config/official/infer).
Some settings you should pay attention to:
* config.yaml
  * `strategy`: save images (fuse) or save images & labels (fuse & detect)
  * `dataset`: name & root
  * `inference`: each item under inference
* infer.py
  * `--cfg`: config file path, such as `config/official/infer/tardal-dt.yaml`
  * `--save_dir`: result save folder
Under normal circumstances, you don't need to download the model parameters manually; our program will do it for you.
```shell
# TarDAL-DT
# use official tardal-dt infer config and save images to runs/tardal-dt
python infer.py --cfg config/official/infer/tardal-dt.yaml --save_dir runs/tardal-dt

# TarDAL-TT
# use official tardal-tt infer config and save images to runs/tardal-tt
python infer.py --cfg config/official/infer/tardal-tt.yaml --save_dir runs/tardal-tt

# TarDAL-CT
# use official tardal-ct infer config and save images to runs/tardal-ct
python infer.py --cfg config/official/infer/tardal-ct.yaml --save_dir runs/tardal-ct
```
#### Train
We provide training scripts for you to train your own model.
Please note: the training code is only intended to assist in understanding the paper and is not recommended for direct use in
production environments.
Unlike previous code versions, you don't need to preprocess the data; the IQA weights and masks are calculated automatically.
```shell
# TarDAL-DT
python train.py --cfg config/official/train/tardal-dt.yaml --auth $YOUR_WANDB_KEY

# TarDAL-TT
python train.py --cfg config/official/train/tardal-tt.yaml --auth $YOUR_WANDB_KEY

# TarDAL-CT
python train.py --cfg config/official/train/tardal-ct.yaml --auth $YOUR_WANDB_KEY
```
If you want to build on our approach and extend it to a production environment, here are some additional suggestions.
[Suggestion: A better train process for everyone.](assets/train_process.png)
### Any Questions
If you have any other questions about the code, please email [Zhanbo Huang](mailto:zbhuang917@hotmail.com).
Due to a job change, the previous address `zbhuang@mail.dlut.edu.cn` is no longer available.
## Citation
If this work has been helpful to you, please feel free to cite our paper!
```
@inproceedings{liu2022target,
title={Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection},
author={Liu, Jinyuan and Fan, Xin and Huang, Zhanbo and Wu, Guanyao and Liu, Risheng and Zhong, Wei and Luo, Zhongxuan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5802--5811},
year={2022}
}
```
================================================
FILE: assets/sample/s1/meta/pred.txt
================================================
M3FD_00471.png
ROAD_040.jpg
TNO_028.bmp
================================================
FILE: config/__init__.py
================================================
class ConfigDict(dict):
    __setattr__ = dict.__setitem__
    __getattr__ = dict.__getitem__


def from_dict(obj) -> ConfigDict:
    if not isinstance(obj, dict):
        return obj
    d = ConfigDict()
    for k, v in obj.items():
        d[k] = from_dict(v)
    return d
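# Usage sketch (hedged): from_dict recursively wraps a parsed YAML config so
# nested keys become attributes. Assumes PyYAML; the path is illustrative.
#
#   import yaml
#   with open('config/default.yaml') as f:
#       cfg = from_dict(yaml.safe_load(f))
#   cfg.train.batch_size  # same as cfg['train']['batch_size']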
================================================
FILE: config/default.yaml
================================================
# base settings
device: cuda      # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir: 'cache' # folder used for saving the model, logs and results

# debug mode settings
debug:
  log: INFO            # log level
  wandb_mode: 'online' # wandb connection mode
  fast_run: false      # use a small subset of the dataset for debugging code

# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy: fuse & detect

# fuse network settings: core of infrared and visible fusion
fuse:
  dim: 32  # feature base dimensions for generator and discriminator
  depth: 3 # depth of dense architecture
  pretrained: weights/v1/tardal-dt.pth # ~: disable, path or url: load pretrained parameters

# detect network settings: available if the framework is in joint mode (detect, fuse & detect)
detect:
  model: yolov5s # yolo model (yolov5 n,s,m,l,x)
  channels: 3    # input channels (3: rgb or 1: grayscale)
  pretrained: weights/v1/tardal-dt.pth # ~: disable, path or url: load pretrained parameters

# saliency network settings: generating masks for training tardal
saliency:
  url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth

# iqa settings: information measurement
iqa:
  url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth

# dataset settings:
# we provide four built-in representative datasets;
# if you want to use a custom dataset, please refer to the documentation to write your own loader, or open an issue.
dataset:
  name: M3FD      # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
  root: data/m3fd # dataset root path
  # only available for fuse & detect
  detect:
    hsv: [ 0.015, 0.7, 0.4 ] # image HSV augmentation (fraction) [developing]
    degrees: 0               # image rotation (+/- degrees) [developing]
    translate: 0.1           # image translation (+/- fraction) [developing]
    scale: 0.9               # image scale (+/- gain) [developing]
    shear: 0.0               # image shear (+/- degrees) [developing]
    perspective: 0.0         # image perspective (+/- fraction), range 0-0.001 [developing]
    flip_ud: 0.0             # image flip up-down (probability)
    flip_lr: 0.5             # image flip left-right (probability)

# train settings:
train:
  image_size: [ 320, 320 ] # training image size in (h, w)
  batch_size: 16           # batch size used in training
  num_workers: 8           # number of workers used in data loading
  epochs: 300              # number of epochs to train
  eval_interval: 1         # evaluation interval during training
  save_interval: 5         # save interval during training
  freeze: [ ]              # freeze layers (e.g. backbone, head, ...)

# inference settings:
inference:
  batch_size: 8    # batch size used in inference
  num_workers: 8   # number of workers used in data loading
  use_eval: true   # use eval mode in inference; default true, false for v0 weights
  grayscale: false # ignore dataset settings, save as grayscale image
  save_txt: false  # save label file

# loss settings:
loss:
  # fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
  fuse:
    src_fn: v0    # v0: 0.01*ssim + 0.99*l1 | v1: ms-ssim
    src: 1        # src loss gain (v0: 0.8)
    adv: 0        # adv loss gain (v0: 0.2)
    t_adv: 1      # target loss gain (v0: 1)
    d_adv: 1      # detail loss gain (v0: 1)
    d_mask: false # use mask for detail discriminator (v0: true)
    d_warm: 1     # discriminator warmup epochs
  # detect loss: box + cls + obj
  detect:
    box: 0.05     # box loss gain
    cls: 0.3      # cls loss gain
    cls_pw: 1.0   # cls BCELoss positive weight
    obj: 0.7      # obj loss gain (scales with pixels)
    obj_pw: 1.0   # obj BCELoss positive weight
    iou_t: 0.20   # IoU training threshold
    anchor_t: 4.0 # anchor-multiple threshold
    fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
  # bridge
  bridge:
    fuse: 0.5   # fuse loss gain for generator
    detect: 0.5 # detect loss gain for generator
    warm: 2     # bridge warmup epochs (det -> det, fuse -> fuse)

# optimizer settings:
optimizer:
  name: sgd            # optimizer name
  lr_i: 1.0e-2         # initial learning rate
  lr_f: 1.0e-1         # final learning rate fraction (final lr = lr_i * lr_f)
  momentum: 0.937      # sgd momentum / adam beta1
  weight_decay: 5.0e-4 # weight decay used in optimizer
  lr_d: 1.0e-4         # discriminator learning rate

# scheduler settings:
scheduler:
  warmup_epochs: [ 2.0, 3.0 ] # start-[0]: bridge warm (keep const), [0]-[1]: normal warm, [1]-end: normal decay
  warmup_momentum: 0.8        # warmup initial momentum
  warmup_bias_lr: 0.1         # warmup initial bias lr
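The `src_fn` comments above name two source-consistency variants. As a hedged illustration only (TarDAL's actual loss
code lives elsewhere in the repository and may weight or reduce terms differently; note the `exp/` configs describe v0
as `1*ssim + 20*l1`), they correspond roughly to:
```python
# Hedged sketch of the src loss variants named in default.yaml; not the
# repository's implementation. Relies on the third-party pytorch-msssim
# package, which is an assumption rather than a stated dependency.
import torch.nn.functional as F
from pytorch_msssim import ssim, ms_ssim

def src_loss_v0(fused, source):
    # v0: 0.01 * ssim + 0.99 * l1, with ssim turned into a loss as (1 - ssim)
    return 0.01 * (1 - ssim(fused, source, data_range=1.0)) + 0.99 * F.l1_loss(fused, source)

def src_loss_v1(fused, source):
    # v1: ms-ssim as a loss
    return 1 - ms_ssim(fused, source, data_range=1.0)
```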
================================================
FILE: config/exp/i-tardal-dt.yaml
================================================
# base settings
device: cuda      # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir: 'cache' # folder used for saving the model, logs and results

# debug mode settings
debug:
  wandb_mode: 'online' # wandb connection mode
  fast_run: false      # use a small subset of the dataset for debugging code

# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy: fuse & detect

# fuse network settings: core of infrared and visible fusion
fuse:
  dim: 32  # feature base dimensions for generator and discriminator
  depth: 3 # depth of dense architecture
  pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load pretrained parameters

# detect network settings: available if the framework is in joint mode (detect, fuse & detect)
detect:
  model: yolov5s # yolo model (yolov5 n,s,m,l,x)
  channels: 3    # input channels (3: rgb or 1: grayscale)
  pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load pretrained parameters

# saliency network settings: generating masks for training tardal
saliency:
  url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth

# iqa settings: information measurement
iqa:
  url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth

# dataset settings:
# we provide four built-in representative datasets;
# if you want to use a custom dataset, please refer to the documentation to write your own loader, or open an issue.
dataset:
  name: M3FD      # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
  root: data/m3fd # dataset root path
  # only available for fuse & detect
  detect:
    hsv: [ 0.015, 0.7, 0.4 ] # image HSV augmentation (fraction) [developing]
    degrees: 0               # image rotation (+/- degrees) [developing]
    translate: 0.1           # image translation (+/- fraction) [developing]
    scale: 0.9               # image scale (+/- gain) [developing]
    shear: 0.0               # image shear (+/- degrees) [developing]
    perspective: 0.0         # image perspective (+/- fraction), range 0-0.001 [developing]
    flip_ud: 0.0             # image flip up-down (probability)
    flip_lr: 0.5             # image flip left-right (probability)

# train settings:
train:
  image_size: [ 224, 224 ] # training image size in (h, w)
  batch_size: 32           # batch size used in training
  num_workers: 8           # number of workers used in data loading
  epochs: 1000             # number of epochs to train
  eval_interval: 1         # evaluation interval during training
  save_interval: 5         # save interval during training

# inference settings:
inference:
  batch_size: 8    # batch size used in inference
  num_workers: 8   # number of workers used in data loading
  use_eval: ~      # use eval mode in inference; default true, false for v0 weights
  grayscale: false # ignore dataset settings, save as grayscale image
  save_txt: false  # save label file

# loss settings:
loss:
  # fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
  fuse:
    src_fn: v1    # v0: 1*ssim + 20*l1 | v1: ms-ssim
    src: 0.8      # src loss gain (1 during v0)
    adv: 0.2      # adv loss gain (0.1 during v0)
    t_adv: 0      # target loss gain
    d_adv: 0      # detail loss gain
    d_mask: false # use mask for detail discriminator (v0: true)
    d_warm: 10    # discriminator warmup epochs
  # detect loss: box + cls + obj
  detect:
    box: 0.05     # box loss gain
    cls: 0.5      # cls loss gain
    cls_pw: 1.0   # cls BCELoss positive weight
    obj: 1.0      # obj loss gain (scales with pixels)
    obj_pw: 1.0   # obj BCELoss positive weight
    iou_t: 0.20   # IoU training threshold
    anchor_t: 4.0 # anchor-multiple threshold
    fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
  # bridge
  bridge:
    fuse: 0.5   # fuse loss gain for generator
    detect: 0.5 # detect loss gain for generator

# optimizer settings:
optimizer:
  name: adamw          # optimizer name
  lr_i: 1.0e-3         # initial learning rate
  lr_f: 1.0e-3         # final learning rate
  momentum: 0.937      # adam beta1
  weight_decay: 5.0e-4 # weight decay used in optimizer

# scheduler settings:
scheduler:
  warmup_epochs: 3.0   # warmup epochs
  warmup_momentum: 0.8 # warmup initial momentum
  warmup_bias_lr: 0.1  # warmup initial bias lr
================================================
FILE: config/exp/t-tardal-ct.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse & detect
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : M3FD # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/m3fd # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 320, 320 ] # training image size in (h, w)
batch_size : 16 # batch size used to train
num_workers : 8 # number of workers used in data loading
epochs : 1000 # number of epochs to train
eval_interval: 1 # evaluation interval during training
save_interval: 5 # save interval during training
freeze : [ ] # freeze layers (e.g. backbone, head, ...)
# inference settings:
inference:
  batch_size : 8    # batch size used in inference
  num_workers: 8    # number of workers used in data loading
  use_eval   : true # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
save_txt : false # save label file
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0 # target loss gain
d_adv : 0 # detail loss gain
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.5 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 1.0 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
# optimizer settings:
optimizer:
name : adamw # optimizer name
lr_i : 1.0e-3 # initial learning rate
lr_f : 1.0e-3 # final learning rate
momentum : 0.937 # adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
# scheduler settings:
scheduler:
warmup_epochs : 3.0 # warmup epochs
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/colab.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'offline' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-ct.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-ct.pth # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : roadscene # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : assets/sample/s1 # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 224, 224 ] # training image size in (h, w)
batch_size : 32 # batch size used to train
num_workers : 8 # number of workers used in data loading
epochs : 1000 # number of epochs to train
eval_interval: 1 # evaluation interval during training
save_interval: 5 # save interval during training
# inference settings:
inference:
  batch_size : 8 # batch size used in inference
  num_workers: 8 # number of workers used in data loading
  use_eval   : ~ # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
save_txt : false # save label file
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0 # target loss gain
d_adv : 0 # detail loss gain
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.5 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 1.0 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
# optimizer settings:
optimizer:
name : adamw # optimizer name
lr_i : 1.0e-3 # initial learning rate
lr_f : 1.0e-3 # final learning rate
momentum : 0.937 # adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
# scheduler settings:
scheduler:
warmup_epochs : 3.0 # warmup epochs
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/infer/tardal-ct.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-ct.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-ct.pth # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : M3FD # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/m3fd # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 224, 224 ] # training image size in (h, w)
batch_size : 32 # batch size used to train
num_workers : 8 # number of workers used in data loading
epochs : 1000 # number of epochs to train
eval_interval: 1 # evaluation interval during training
save_interval: 5 # save interval during training
# inference settings:
inference:
  batch_size : 8 # batch size used in inference
  num_workers: 8 # number of workers used in data loading
  use_eval   : ~ # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
save_txt : false # save label file
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0 # target loss gain
d_adv : 0 # detail loss gain
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.5 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 1.0 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
# optimizer settings:
optimizer:
name : adamw # optimizer name
lr_i : 1.0e-3 # initial learning rate
lr_f : 1.0e-3 # final learning rate
momentum : 0.937 # adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
# scheduler settings:
scheduler:
warmup_epochs : 3.0 # warmup epochs
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/infer/tardal-dt.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : M3FD # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/m3fd # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 224, 224 ] # training image size in (h, w)
batch_size : 32 # batch size used to train
num_workers : 8 # number of workers used in data loading
epochs : 1000 # number of epochs to train
eval_interval: 1 # evaluation interval during training
save_interval: 5 # save interval during training
# inference settings:
inference:
  batch_size : 8 # batch size used in inference
  num_workers: 8 # number of workers used in data loading
  use_eval   : ~ # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
save_txt : false # save label file
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0 # target loss gain
d_adv : 0 # detail loss gain
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.5 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 1.0 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
# optimizer settings:
optimizer:
name : adamw # optimizer name
lr_i : 1.0e-3 # initial learning rate
lr_f : 1.0e-3 # final learning rate
momentum : 0.937 # adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
# scheduler settings:
scheduler:
warmup_epochs : 3.0 # warmup epochs
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/infer/tardal-tt.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-tt.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-tt.pth # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : M3FD # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/m3fd # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 224, 224 ] # training image size in (h, w)
batch_size : 32 # batch size used to train
num_workers : 12 # number of workers used in data loading
epochs : 1000 # number of epochs to train
eval_interval: 5 # evaluation interval during training
save_interval: 5 # save interval during training
# inference settings:
inference:
  batch_size : 8    # batch size used in inference
  num_workers: 12   # number of workers used in data loading
  use_eval   : true # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0.5 # target loss gain
d_adv : 0.5 # detail loss gain
det : 1.0 # det loss gain (available only for detect or fuse+detect mode)
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.5 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 1.0 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
# optimizer settings:
optimizer:
name : adamw # optimizer name
lr_i : 1.0e-3 # initial learning rate
lr_f : 1.0e-3 # final learning rate
momentum : 0.937 # adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
# scheduler settings:
scheduler:
warmup_epochs : 3.0 # warmup epochs
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/train/tardal-ct.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse & detect
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: ~ # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : M3FD # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/m3fd # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 320, 320 ] # training image size in (h, w)
batch_size : 16 # batch size used to train
num_workers : 8 # number of workers used in data loading
epochs : 300 # number of epochs to train
eval_interval: 1 # evaluation interval during training
save_interval: 5 # save interval during training
freeze : [ ] # freeze layers (e.g. backbone, head, ...)
# inference settings:
inference:
  batch_size : 8    # batch size used in inference
  num_workers: 8    # number of workers used in data loading
  use_eval   : true # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
save_txt : false # save label file
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0.5 # target loss gain
d_adv : 0.5 # detail loss gain
det : 1.0 # det loss gain (available only for detect or fuse+detect mode)
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.3 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 0.7 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
warm : 2 # bridge warm up epochs (det -> det, fuse -> fuse)
# optimizer settings:
optimizer:
name : sgd # optimizer name
lr_i : 1.0e-2 # initial learning rate
lr_f : 1.0e-1 # final learning rate (lr_i * lr_f)
  momentum    : 0.937 # sgd momentum / adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
lr_d : 1.0e-4 # discriminator learning rate
# scheduler settings:
scheduler:
  warmup_epochs  : [ 2.0, 3.0 ] # epochs 0-[0]: bridge warm-up (lr held constant), [0]-[1]: normal warm-up, [1]-end: normal decay
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/train/tardal-dt.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : fuse
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: ~ # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: ~ # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets;
# to use a custom dataset, refer to the documentation to write your own loader, or open an issue.
dataset :
name : RoadScene # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/roadscene # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 224, 224 ] # training image size in (h, w)
batch_size : 32 # batch size used to train
num_workers : 12 # number of workers used in data loading
epochs : 1000 # number of epochs to train
eval_interval: 5 # evaluation interval during training
save_interval: 5 # save interval during training
freeze : [ ] # freeze layers (e.g. backbone, head, ...)
# inference settings:
inference:
  batch_size : 8    # batch size used in inference
  num_workers: 12   # number of workers used in data loading
  use_eval   : true # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0.5 # target loss gain
d_adv : 0.5 # detail loss gain
det : 1.0 # det loss gain (available only for detect or fuse+detect mode)
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.5 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 1.0 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
# optimizer settings:
optimizer:
name : adamw # optimizer name
lr_i : 1.0e-3 # initial learning rate
lr_f : 1.0e-3 # final learning rate
momentum : 0.937 # adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
# scheduler settings:
scheduler:
warmup_epochs : 3.0 # warmup epochs
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: config/official/train/tardal-tt.yaml
================================================
# base settings
device : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)
save_dir : 'cache' # folder used for saving the model, logs, and results
# debug mode settings
debug :
log : INFO # log level
wandb_mode: 'online' # wandb connection mode
fast_run : false # use a small subset of the dataset for debugging code
# framework training strategy:
# backward method: fuse (direct training DT)
# backward method: detect (task-oriented training TT)
# backward method: fuse & detect (cooperative training CT)
strategy : detect
# fuse network settings: core of infrared and visible fusion
fuse :
  dim       : 32 # base feature dimension for generator and discriminator
depth : 3 # depth of dense architecture
pretrained: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth # ~: disable, path or url: load with pretrained parameters
# detect network settings: available if framework in joint mode (detect, fuse + detect)
detect :
model : yolov5s # yolo model (yolov5 n,s,m,l,x)
channels : 3 # input channels (3: rgb or 1: grayscale)
pretrained: ~ # ~: disable, path or url: load with pretrained parameters
# saliency network settings: generating mask for training tardal
saliency :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/mask-u2.pth
# iqa settings: information measurement
iqa :
url: https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/iqa-vgg.pth
# dataset settings:
# we provide four built-in representative datasets,
# if you want to use some custom datasets, please refer to the documentation to write yourself or open an issue.
dataset :
name : M3FD # dataset folder to be trained with (fuse: TNO, RoadScene; fuse & detect: M3FD, MultiSpectral, etc.)
root : data/m3fd # dataset root path
# only available for fuse & detect
detect:
hsv : [ 0.015,0.7,0.4 ] # image HSV augmentation (fraction) [developing]
degrees : 0 # image rotation (+/- degrees) [developing]
translate : 0.1 # image translation (+/- fraction) [developing]
scale : 0.9 # image scale (+/- gain) [developing]
shear : 0.0 # image shear (+/- degrees) [developing]
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 [developing]
flip_ud : 0.0 # image flip up-down (probability)
flip_lr : 0.5 # image flip left-right (probability)
# train settings:
train :
image_size : [ 320, 320 ] # training image size in (h, w)
batch_size : 16 # batch size used to train
num_workers : 8 # number of workers used in data loading
epochs : 300 # number of epochs to train
eval_interval: 1 # evaluation interval during training
save_interval: 5 # save interval during training
freeze : [ ] # freeze layers (e.g. backbone, head, ...)
# inference settings:
inference:
  batch_size : 8    # batch size used in inference
  num_workers: 8    # number of workers used in data loading
  use_eval   : true # use eval mode during inference (default: true; set false for v0 weights)
grayscale : false # ignore dataset settings, save as grayscale image
# loss settings:
loss :
# fuse loss: src(l1+ssim/ms-ssim) + adv(target+detail) + det
fuse :
src_fn: v1 # v0: 1*ssim + 20*l1 | v1: ms-ssim
src : 0.8 # src loss gain (1 during v0)
adv : 0.2 # adv loss gain (0.1 during v0)
t_adv : 0.5 # target loss gain
d_adv : 0.5 # detail loss gain
det : 1.0 # det loss gain (available only for detect or fuse+detect mode)
d_mask: false # use mask for detail discriminator (v0: true)
d_warm: 10 # discriminator warmup epochs
# detect loss: box + cls + obj
detect:
box : 0.05 # box loss gain
cls : 0.3 # cls loss gain
cls_pw : 1.0 # cls BCELoss positive weight
obj : 0.7 # obj loss gain (scale with pixels)
obj_pw : 1.0 # obj BCELoss positive weight
iou_t : 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
# bridge
bridge:
fuse : 0.5 # fuse loss gain for generator
detect: 0.5 # detect loss gain for generator
warm : 2 # bridge warm up epochs (det -> det, fuse -> fuse)
# optimizer settings:
optimizer:
name : sgd # optimizer name
lr_i : 1.0e-2 # initial learning rate
lr_f : 1.0e-1 # final learning rate (lr_i * lr_f)
  momentum    : 0.937 # sgd momentum / adam beta1
weight_decay: 5.0e-4 # decay rate used in optimizer
lr_d : 1.0e-4 # discriminator learning rate
# scheduler settings:
scheduler:
  warmup_epochs  : [ 2.0, 3.0 ] # epochs 0-[0]: bridge warm-up (lr held constant), [0]-[1]: normal warm-up, [1]-end: normal decay
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr : 0.1 # warmup initial bias lr
================================================
FILE: data/README.md
================================================
# Dataset Configuration Reference
## Official Supported Datasets
* TNO: fuse
* RoadScene: fuse
* MultiSpectral: fuse + detect
* M3FD: fuse + detect
## Other Datasets
You can write a loader for your own custom dataset in `loader/{$NAME}.py` (a minimal sketch follows) and, optionally, open a pull request.
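For reference, a minimal `fuse`-type loader might look like the sketch below. It mirrors the pattern of `loader/tno.py`; `MyData` and its fallback prediction size are illustrative placeholders, not a fixed API:

```python
from pathlib import Path
from typing import Literal

import torch
from torch.utils.data import Dataset
from torchvision.transforms import Resize

from loader.utils.reader import gray_read


class MyData(Dataset):
    type = 'fuse'  # 'fuse' or 'fuse & detect'
    color = False  # visible images are grayscale

    def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pred'], config):
        self.root = Path(root)
        self.mode = mode
        # one image name per line, see the structure in the Prepare section
        self.img_list = (self.root / 'meta' / f'{mode}.txt').read_text().splitlines()
        # the built-in loaders compute the dataset's max size for 'pred' mode;
        # the (224, 224) fallback here is only a placeholder
        size = config.train.image_size if mode in ('train', 'val') else (224, 224)
        self.transform_fn = Resize(size=size)

    def __len__(self) -> int:
        return len(self.img_list)

    def __getitem__(self, index: int) -> dict:
        name = self.img_list[index]
        ir = gray_read(self.root / 'ir' / name)  # infrared (1, H, W)
        vi = gray_read(self.root / 'vi' / name)  # visible (1, H, W)
        ir, vi = torch.chunk(self.transform_fn(torch.cat([ir, vi], dim=0)), chunks=2, dim=0)
        return {'name': name, 'ir': ir, 'vi': vi}
```

A full loader would also handle the mask/iqa caches, a `collate_fn`, and a `pred_save` helper, as the built-in loaders do.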
## Prepare
Datasets should have the following structure:
```
data
|__ TNO // name of the dataset
|__ ir // infrared images
|__ vi // visible images
|__ meta // dataset meta information
|__ train.txt // image name for training
|__ val.txt // image name for validation
|__ M3FD // name of the dataset
|__ ir // infrared images
|__ vi // visible images
|__ labels // object labels (ground truth, cxcywh)
|__ meta // dataset meta information
|__ train.txt // image name for training
|__ val.txt // image name for validation
```
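For illustration, `meta/train.txt` lists one image filename per line, and each `labels/{name}.txt` row stores a class index followed by a `cx cy w h` box normalized to `[0, 1]` (the values below are made up):

```
# meta/train.txt
00001.png
00002.png

# labels/00001.txt (cls cx cy w h)
1 0.512 0.634 0.120 0.080
0 0.250 0.410 0.045 0.130
```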
================================================
FILE: functions/__init__.py
================================================
================================================
FILE: functions/div_loss.py
================================================
import logging
import torch
import torch.autograd as autograd
def div_loss(disc, real_x, fake_x, wp: int = 6, eps: float = 1e-6):
logging.debug(f'calculating div: real {real_x.mean():.2f}, fake {fake_x.mean():.2f}')
    alpha = torch.rand((real_x.shape[0], 1, 1, 1), device=real_x.device)  # interpolation coefficient, sampled on the inputs' device
tmp_x = (alpha * real_x + (1 - alpha) * fake_x).requires_grad_(True)
tmp_y = disc(tmp_x)
grad = autograd.grad(
outputs=tmp_y,
inputs=tmp_x,
grad_outputs=torch.ones_like(tmp_y),
create_graph=True,
retain_graph=True,
only_inputs=True,
)[0]
grad = grad.view(tmp_x.shape[0], -1) + eps
div = (grad.norm(2, dim=1) ** wp).mean()
return div
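

if __name__ == '__main__':
    # minimal usage sketch with a throwaway discriminator (illustrative only):
    # div_loss returns the gradient-penalty term added to the discriminator loss;
    # alpha is sampled on the inputs' device, so this runs wherever the tensors live.
    from torch import nn
    disc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
    real, fake = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    print(f'div penalty: {div_loss(disc, real, fake).item():.4f}')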
================================================
FILE: functions/get_param_groups.py
================================================
from typing import List
from torch import nn
def get_param_groups(module) -> tuple[List, List, List]:
    group = [], [], []  # (decay weights, no-decay weights, biases)
    bn = tuple(v for k, v in nn.__dict__.items() if 'Norm' in k)  # normalization layers
    for v in module.modules():
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
            group[2].append(v.bias)  # bias
        if isinstance(v, bn):
            group[1].append(v.weight)  # normalization weight (no decay)
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
            group[0].append(v.weight)  # conv/linear weight (with decay)
    return group
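

if __name__ == '__main__':
    # usage sketch (values are illustrative): conv/linear weights get weight decay,
    # normalization weights and biases do not, mirroring the YOLOv5-style recipe.
    import torch
    net = nn.Sequential(nn.Conv2d(1, 8, 3), nn.BatchNorm2d(8), nn.ReLU(), nn.Flatten(), nn.Linear(8, 2))
    decay, no_decay, biases = get_param_groups(net)
    optimizer = torch.optim.AdamW(no_decay, lr=1.0e-3, weight_decay=0.0)
    optimizer.add_param_group({'params': decay, 'weight_decay': 5.0e-4})
    optimizer.add_param_group({'params': biases, 'weight_decay': 0.0})
    print(len(decay), len(no_decay), len(biases))  # -> 2 1 3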
================================================
FILE: infer.py
================================================
import argparse
import logging
from pathlib import Path
import torch.backends.cudnn
import yaml
import scripts
from config import from_dict
if __name__ == '__main__':
# args parser
parser = argparse.ArgumentParser()
parser.add_argument('--cfg', default='config/default.yaml', help='config file path')
parser.add_argument('--save_dir', default='runs/tmp', help='fusion result save folder')
args = parser.parse_args()
# init config
config = yaml.safe_load(Path(args.cfg).open('r'))
    config = from_dict(config)  # convert dict to object
# init logger
log_f = '%(asctime)s | %(filename)s[line:%(lineno)d] | %(levelname)s | %(message)s'
logging.basicConfig(level=config.debug.log, format=log_f)
# init device & anomaly detector
torch.backends.cudnn.benchmark = True
torch.autograd.set_detect_anomaly(True)
# choose inference script
logging.info(f'enter {config.strategy} inference mode')
match config.strategy:
case 'fuse':
infer_p = getattr(scripts, 'InferF')
# check pretrained weights
if config.fuse.pretrained is None:
logging.warning('no pretrained weights specified, use official pretrained weights')
config.fuse.pretrained = 'https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-dt.pth'
case 'fuse & detect':
infer_p = getattr(scripts, 'InferFD')
# check pretrained weights
if config.fuse.pretrained is None:
logging.warning('no pretrained weights specified, use official pretrained weights')
config.fuse.pretrained = 'https://github.com/JinyuanLiu-CV/TarDAL/releases/download/v1.0.0/tardal-ct.pth'
case 'detect':
raise NotImplementedError('detect mode is useless during inference period, please use fuse & detect mode')
case _:
raise ValueError(f'unknown strategy: {config.strategy}')
# create script instance
infer = infer_p(config, args.save_dir)
infer.run()
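    # usage sketch (paths are illustrative):
    #   python infer.py --cfg config/official/infer/tardal-dt.yaml --save_dir runs/tardal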
================================================
FILE: loader/__init__.py
================================================
from loader.m3fd import M3FD
from loader.roadscene import RoadScene
from loader.tno import TNO
__all__ = ['TNO', 'RoadScene', 'M3FD']
================================================
FILE: loader/m3fd.py
================================================
import logging
import random
from pathlib import Path
from typing import Literal, List, Optional
import torch
from kornia.geometry import vflip, hflip, resize
from torch import Tensor, Size
from torch.utils.data import Dataset
from torchvision.ops import box_convert
from torchvision.transforms import Resize
from torchvision.utils import draw_bounding_boxes
from config import ConfigDict
from loader.utils.checker import check_mask, check_image, check_labels, check_iqa, get_max_size
from loader.utils.reader import gray_read, ycbcr_read, label_read, img_write, label_write
from tools.scenario_reader import scenario_counter, generate_meta
class M3FD(Dataset):
type = 'fuse & detect' # dataset type: 'fuse' or 'fuse & detect'
color = True # dataset visible format: false -> 'gray' or true -> 'color'
classes = ['People', 'Car', 'Bus', 'Lamp', 'Motorcycle', 'Truck']
palette = ['#FF0000', '#C1C337', '#2FA7B4', '#F541C4', '#F84F2C', '#7D2CC8']
generate_meta_lock = False # generate meta once
def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pred'], config: ConfigDict):
super().__init__()
root = Path(root)
self.root = root
self.mode = mode
self.config = config
# check json meta config
if M3FD.generate_meta_lock is False:
if Path(root / 'meta' / 'scenario.json').exists():
logging.info('found scenario.json, generating train & val list.')
scenario_counter(root / 'meta' / 'scenario.json')
generate_meta(root)
M3FD.generate_meta_lock = True
else:
                logging.warning('scenario.json not found, using current train & val list.')
# read corresponding list
img_list = Path(root / 'meta' / f'{mode}.txt').read_text().splitlines()
logging.info(f'load {len(img_list)} images from {root.name}')
self.img_list = img_list
# check images
check_image(root, img_list)
# check labels
self.labels = check_labels(root, img_list)
# more check
match mode:
case 'train' | 'val':
# check mask cache
check_mask(root, img_list, config)
# check iqa cache
check_iqa(root, img_list, config)
case _:
# get max shape
self.max_size = get_max_size(root, img_list)
self.transform_fn = Resize(size=self.max_size)
def __len__(self) -> int:
return len(self.img_list)
def __getitem__(self, index: int) -> dict:
# choose get item method
match self.mode:
case 'train' | 'val':
return self.train_val_item(index)
case _:
return self.pred_item(index)
def train_val_item(self, index: int) -> dict:
# image name, like '028.png'
name = self.img_list[index]
logging.debug(f'train-val mode: loading item {name}')
# load infrared and visible
ir = gray_read(self.root / 'ir' / name)
vi, cbcr = ycbcr_read(self.root / 'vi' / name)
# load mask
mask = gray_read(self.root / 'mask' / name)
# load information measurement
ir_w = gray_read(self.root / 'iqa' / 'ir' / name)
vi_w = gray_read(self.root / 'iqa' / 'vi' / name)
# load label
label_p = Path(name).stem + '.txt'
labels = label_read(self.root / 'labels' / label_p)
# concat images for transform(s)
t = torch.cat([ir, vi, mask, ir_w, vi_w, cbcr], dim=0)
# transform (resize)
resize_fn = Resize(size=self.config.train.image_size)
t = resize_fn(t)
# transform (flip up-down)
if random.random() < self.config.dataset.detect.flip_ud:
t = vflip(t)
if len(labels):
labels[:, 2] = 1 - labels[:, 2]
# transform (flip left-right)
if random.random() < self.config.dataset.detect.flip_lr:
t = hflip(t)
if len(labels):
labels[:, 1] = 1 - labels[:, 1]
# transform labels (cls, x1, y1, x2, y2) -> (0, cls, ...)
labels_o = torch.zeros((len(labels), 6))
if len(labels):
labels_o[:, 1:] = labels
# unpack images
ir, vi, mask, ir_w, vi_w, cbcr = torch.split(t, [1, 1, 1, 1, 1, 2], dim=0)
# merge data
sample = {
'name': name,
'ir': ir, 'vi': vi,
'ir_w': ir_w, 'vi_w': vi_w, 'mask': mask, 'cbcr': cbcr,
'labels': labels_o
}
# return as expected
return sample
def pred_item(self, index: int) -> dict:
# image name, like '028.png'
name = self.img_list[index]
logging.debug(f'pred mode: loading item {name}')
# load infrared and visible
ir = gray_read(self.root / 'ir' / name)
vi, cbcr = ycbcr_read(self.root / 'vi' / name)
# transform (resize)
s = ir.shape[1:]
t = torch.cat([ir, vi, cbcr], dim=0)
ir, vi, cbcr = torch.split(self.transform_fn(t), [1, 1, 2], dim=0)
# merge data
sample = {'name': name, 'ir': ir, 'vi': vi, 'cbcr': cbcr, 'shape': s}
# return as expected
return sample
@staticmethod
def pred_save(fus: Tensor, names: List[str | Path], shape: List[Size], pred: Optional[Tensor] = None, save_txt: bool = False):
if pred is None:
return M3FD.pred_save_no_boxes(fus, names, shape)
return M3FD.pred_save_with_boxes(fus, names, shape, pred, save_txt)
@staticmethod
def pred_save_no_boxes(fus: Tensor, names: List[str | Path], shape: List[Size]):
for img_t, img_p, img_s in zip(fus, names, shape):
img_t = resize(img_t, img_s)
img_write(img_t, img_p)
@staticmethod
def pred_save_with_boxes(fus: Tensor, names: List[str | Path], shape: List[Size], pred: Tensor, save_txt: bool = False):
for img_t, img_p, img_s, pred_i in zip(fus, names, shape, pred):
# reshape target
cur_s = img_t.shape[1:]
scale_x, scale_y = cur_s[1] / img_s[1], cur_s[0] / img_s[0]
pred_i[:, :4] *= Tensor([scale_x, scale_y, scale_x, scale_y]).to(pred_i.device)
# reshape image
img_t = resize(img_t, img_s)
img = (img_t.clamp_(0, 1) * 255).to(torch.uint8)
# draw bounding box
pred_x = list(filter(lambda x: x[4] > 0.6, pred_i))
boxes = [x[:4] for x in pred_x]
cls_idx = [int(x[5].cpu().numpy()) for x in pred_x]
labels = [f'{M3FD.classes[cls]}: {x[4].cpu().numpy():.2f}' for cls, x in zip(cls_idx, pred_x)]
colors = [M3FD.palette[cls] for cls, x in zip(cls_idx, pred_x)]
if len(boxes):
img = draw_bounding_boxes(img, torch.stack(boxes, dim=0), labels, colors, width=2)
img = img.float() / 255
# save labeled images
img_p = Path(img_p.parent) / 'images' / img_p.name
img_write(img, img_p)
# save label txt
if save_txt:
txt_p = Path(str(img_p.parent).replace('images', 'labels')) / (img_p.stem + '.txt')
txt_p.unlink(missing_ok=True)
txt_p.touch()
pred_i[:, :4] /= Tensor([img_s[1], img_s[0], img_s[1], img_s[0]]).to(pred_i.device)
pred_i[:, :4] = box_convert(pred_i[:, :4], 'xyxy', 'cxcywh')
label_write(pred_i, txt_p)
@staticmethod
def collate_fn(data: List[dict]) -> dict:
# keys
keys = data[0].keys()
# merge
new_data = {}
for key in keys:
k_data = [d[key] for d in data]
match key:
case 'name' | 'shape':
# (name, name)
new_data[key] = k_data
case 'labels':
# (labels, image_index)
for i, lb in enumerate(k_data):
lb[:, 0] = i
new_data[key] = torch.cat(k_data, dim=0)
case _:
# (img, img)
new_data[key] = torch.stack(k_data, dim=0)
# return as expected
return new_data
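

if __name__ == '__main__':
    # usage sketch: wire the dataset into a DataLoader via the custom collate_fn.
    # assumes data/m3fd is prepared as described in data/README.md and that
    # config/default.yaml matches your local setup (both are assumptions here).
    import yaml
    from torch.utils.data import DataLoader
    from config import from_dict
    config = from_dict(yaml.safe_load(Path('config/default.yaml').open('r')))
    dataset = M3FD('data/m3fd', mode='train', config=config)
    loader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=M3FD.collate_fn)
    batch = next(iter(loader))
    print(batch['ir'].shape, batch['labels'].shape)  # labels: (n, 6) = (img_idx, cls, x1, y1, x2, y2)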
================================================
FILE: loader/roadscene.py
================================================
import logging
from pathlib import Path
from typing import Literal, List
import torch
from kornia.geometry import resize
from torch import Tensor, Size
from torch.utils.data import Dataset
from torchvision.transforms import Resize
from config import ConfigDict
from loader.utils.checker import check_mask, check_image, check_iqa, get_max_size
from loader.utils.reader import gray_read, ycbcr_read, img_write
class RoadScene(Dataset):
type = 'fuse' # dataset type: 'fuse' or 'fuse & detect'
color = True # dataset visible format: false -> 'gray' or true -> 'color'
def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pred'], config: ConfigDict):
super().__init__()
root = Path(root)
self.root = root
self.mode = mode
# read corresponding list
img_list = Path(root / 'meta' / f'{mode}.txt').read_text().splitlines()
logging.info(f'load {len(img_list)} images from {root.name}')
self.img_list = img_list
# check images
check_image(root, img_list)
# more check
match mode:
case 'train' | 'val':
# check mask cache
check_mask(root, img_list, config)
# check iqa cache
check_iqa(root, img_list, config)
case _:
# get max shape
self.max_size = get_max_size(root, img_list)
# choose transform
match mode:
case 'train' | 'val':
self.transform_fn = Resize(size=config.train.image_size)
case _:
self.transform_fn = Resize(size=self.max_size)
def __len__(self) -> int:
return len(self.img_list)
def __getitem__(self, index: int) -> dict:
# choose get item method
match self.mode:
case 'train' | 'val':
return self.train_val_item(index)
case _:
return self.pred_item(index)
def train_val_item(self, index: int) -> dict:
# image name, like '003.png'
name = self.img_list[index]
logging.debug(f'train-val mode: loading item {name}')
# load infrared and visible
ir = gray_read(self.root / 'ir' / name)
vi, cbcr = ycbcr_read(self.root / 'vi' / name)
# load mask
mask = gray_read(self.root / 'mask' / name)
# load information measurement
ir_w = gray_read(self.root / 'iqa' / 'ir' / name)
vi_w = gray_read(self.root / 'iqa' / 'vi' / name)
# transform (resize)
t = torch.cat([ir, vi, mask, ir_w, vi_w, cbcr], dim=0)
ir, vi, mask, ir_w, vi_w, cbcr = torch.split(self.transform_fn(t), [1, 1, 1, 1, 1, 2], dim=0)
# merge data
sample = {'name': name, 'ir': ir, 'vi': vi, 'ir_w': ir_w, 'vi_w': vi_w, 'mask': mask, 'cbcr': cbcr}
# return as expected
return sample
def pred_item(self, index: int) -> dict:
# image name, like '003.png'
name = self.img_list[index]
logging.debug(f'pred mode: loading item {name}')
# load infrared and visible
ir = gray_read(self.root / 'ir' / name)
vi, cbcr = ycbcr_read(self.root / 'vi' / name)
# transform (resize)
s = ir.shape[1:]
t = torch.cat([ir, vi, cbcr], dim=0)
ir, vi, cbcr = torch.split(self.transform_fn(t), [1, 1, 2], dim=0)
# merge data
sample = {'name': name, 'ir': ir, 'vi': vi, 'cbcr': cbcr, 'shape': s}
# return as expected
return sample
@staticmethod
def pred_save(fus: Tensor, names: List[str | Path], shape: List[Size]):
for img_t, img_p, img_s in zip(fus, names, shape):
img_t = resize(img_t, img_s)
img_write(img_t, img_p)
@staticmethod
def collate_fn(data: List[dict]) -> dict:
# keys
keys = data[0].keys()
# merge
new_data = {}
for key in keys:
k_data = [d[key] for d in data]
new_data[key] = k_data if isinstance(k_data[0], str) or isinstance(k_data[0], Size) else torch.stack(k_data)
# return as expected
return new_data
================================================
FILE: loader/tno.py
================================================
import logging
from pathlib import Path
from typing import Literal, List
import torch
from kornia.geometry import resize
from torch import Tensor, Size
from torch.utils.data import Dataset
from torchvision.transforms import Resize
from config import ConfigDict
from loader.utils.checker import check_mask, check_image, check_iqa, get_max_size
from loader.utils.reader import gray_read, img_write
class TNO(Dataset):
type = 'fuse' # dataset type: 'fuse' or 'fuse & detect'
color = False # dataset visible format: false -> 'gray' or true -> 'color'
def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pred'], config: ConfigDict):
super().__init__()
root = Path(root)
self.root = root
self.mode = mode
# read corresponding list
img_list = Path(root / 'meta' / f'{mode}.txt').read_text().splitlines()
logging.info(f'load {len(img_list)} images from {root.name}')
self.img_list = img_list
# check images
check_image(root, img_list)
# more check
match mode:
case 'train' | 'val':
# check mask cache
check_mask(root, img_list, config)
# check iqa cache
check_iqa(root, img_list, config)
case _:
# get max shape
self.max_size = get_max_size(root, img_list)
# choose transform
match mode:
case 'train' | 'val':
self.transform_fn = Resize(size=config.train.image_size)
case _:
self.transform_fn = Resize(size=self.max_size)
def __len__(self) -> int:
return len(self.img_list)
def __getitem__(self, index: int) -> dict:
# choose get item method
match self.mode:
case 'train' | 'val':
return self.train_val_item(index)
case _:
return self.pred_item(index)
def train_val_item(self, index: int) -> dict:
# image name, like '028.png'
name = self.img_list[index]
logging.debug(f'train-val mode: loading item {name}')
# load infrared and visible
ir = gray_read(self.root / 'ir' / name)
vi = gray_read(self.root / 'vi' / name)
# load mask
mask = gray_read(self.root / 'mask' / name)
# load information measurement
ir_w = gray_read(self.root / 'iqa' / 'ir' / name)
vi_w = gray_read(self.root / 'iqa' / 'vi' / name)
# transform (resize)
t = torch.cat([ir, vi, mask, ir_w, vi_w], dim=0)
ir, vi, mask, ir_w, vi_w = torch.chunk(self.transform_fn(t), chunks=5, dim=0)
# merge data
sample = {'name': name, 'ir': ir, 'vi': vi, 'ir_w': ir_w, 'vi_w': vi_w, 'mask': mask}
# return as expected
return sample
def pred_item(self, index: int) -> dict:
# image name, like '028.png'
name = self.img_list[index]
logging.debug(f'pred mode: loading item {name}')
# load infrared and visible
ir = gray_read(self.root / 'ir' / name)
vi = gray_read(self.root / 'vi' / name)
# transform (resize)
s = ir.shape[1:]
t = torch.cat([ir, vi], dim=0)
ir, vi = torch.chunk(self.transform_fn(t), chunks=2, dim=0)
# merge data
sample = {'name': name, 'ir': ir, 'vi': vi, 'shape': s}
# return as expected
return sample
@staticmethod
def pred_save(fus: Tensor, names: List[str | Path], shape: List[Size]):
for img_t, img_p, img_s in zip(fus, names, shape):
img_t = resize(img_t, img_s)
img_write(img_t, img_p)
@staticmethod
def collate_fn(data: List[dict]) -> dict:
# keys
keys = data[0].keys()
# merge
new_data = {}
for key in keys:
k_data = [d[key] for d in data]
new_data[key] = k_data if isinstance(k_data[0], str) or isinstance(k_data[0], Size) else torch.stack(k_data)
# return as expected
return new_data
================================================
FILE: loader/utils/__init__.py
================================================
================================================
FILE: loader/utils/checker.py
================================================
import logging
import sys
from pathlib import Path
from typing import List
from torch import Tensor, Size
from tqdm import tqdm
from config import ConfigDict
from loader.utils.reader import label_read, gray_read
from pipeline.iqa import IQA
from pipeline.saliency import Saliency
def check_image(root: Path, img_list: List[str]):
assert (root / 'ir').exists() and (root / 'vi').exists(), f'ir and vi folders are required'
for img_name in img_list:
if not (root / 'ir' / img_name).exists() or not (root / 'vi' / img_name).exists():
logging.fatal(f'empty img {img_name} in {root.name}')
sys.exit(1)
    logging.info('found all images on the list')
def check_iqa(root: Path, img_list: List[str], config: ConfigDict):
iqa_cache = True
if (root / 'iqa').exists():
for img_name in img_list:
if not (root / 'iqa' / 'ir' / img_name).exists() or not (root / 'iqa' / 'vi' / img_name).exists():
iqa_cache = False
break
else:
iqa_cache = False
    if iqa_cache:
        logging.info('found iqa cache in folder, skipping information measurement')
    else:
        logging.info('no iqa cache found in folder, starting information measurement')
iqa = IQA(url=config.iqa.url)
iqa.inference(src=root, dst=root / 'iqa')
def check_labels(root: Path, img_list: List[str]) -> List[Tensor]:
assert (root / 'labels').exists(), f'labels folder is required'
labels = []
for img_name in img_list:
label_name = Path(img_name).stem + '.txt'
if not (root / 'labels' / label_name).exists():
logging.fatal(f'empty label {label_name} in {root.name}')
sys.exit(1)
labels.append(label_read(root / 'labels' / label_name))
    logging.info('found all labels on the list')
return labels
def check_mask(root: Path, img_list: List[str], config: ConfigDict):
mask_cache = True
if (root / 'mask').exists():
for img_name in img_list:
if not (root / 'mask' / img_name).exists():
mask_cache = False
break
else:
mask_cache = False
    if mask_cache:
        logging.info('found mask cache in folder, skipping saliency detection')
    else:
        logging.info('no mask cache found in folder, starting saliency detection')
saliency = Saliency(url=config.saliency.url)
saliency.inference(src=root / 'ir', dst=root / 'mask')
def get_max_size(root: Path, img_list: List[str]):
max_h, max_w = -1, -1
    logging.info('finding a suitable size for prediction')
img_l = tqdm(img_list)
for img_name in img_l:
img_l.set_description('finding suitable size')
img = gray_read(root / 'ir' / img_name)
max_h = max(max_h, img.shape[1])
max_w = max(max_w, img.shape[2])
logging.info(f'max size in dataset: H:{max_h} x W:{max_w}')
return Size((max_h, max_w))
================================================
FILE: loader/utils/reader.py
================================================
from pathlib import Path
from typing import Tuple
import cv2
import numpy
import torch
from kornia import image_to_tensor, tensor_to_image
from kornia.color import rgb_to_ycbcr, bgr_to_rgb, rgb_to_bgr
from torch import Tensor
from torchvision.ops import box_convert
def gray_read(img_path: str | Path) -> Tensor:
img_n = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
img_t = image_to_tensor(img_n).float() / 255
return img_t
def ycbcr_read(img_path: str | Path) -> Tuple[Tensor, Tensor]:
img_n = cv2.imread(str(img_path), cv2.IMREAD_COLOR)
img_t = image_to_tensor(img_n).float() / 255
img_t = rgb_to_ycbcr(bgr_to_rgb(img_t))
y, cbcr = torch.split(img_t, [1, 2], dim=0)
return y, cbcr
def label_read(label_path: str | Path) -> Tensor:
target = numpy.loadtxt(str(label_path), dtype=numpy.float32)
labels = torch.from_numpy(target).view(-1, 5) # (cls, cx, cy, w, h)
labels[:, 1:] = box_convert(labels[:, 1:], 'cxcywh', 'xyxy') # (cls, x1, y1, x2, y2)
return labels
def img_write(img_t: Tensor, img_path: str | Path):
if img_t.shape[0] == 3:
img_t = rgb_to_bgr(img_t)
img_n = tensor_to_image(img_t.squeeze().cpu()) * 255
cv2.imwrite(str(img_path), img_n)
def label_write(pred_i: Tensor, txt_path: str | Path):
    txt_path = Path(txt_path)  # accept plain strings as declared in the hint
    with txt_path.open('a') as f:  # open once, append all predictions
        for *pos, conf, cls in pred_i.tolist():
            line = (cls, *pos, conf)
            f.write(('%g ' * len(line)).rstrip() % line + '\n')
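# --- usage sketch (comments only) -------------------------------------------
# Round trip with the helpers above (paths illustrative):
#   ir = gray_read('data/ir/0001.png')        # (1, H, W) float tensor in [0, 1]
#   y, cbcr = ycbcr_read('data/vi/0001.png')  # fuse on Y, keep Cb/Cr for color
#   img_write(ir, 'out/0001.png')             # accepts 1- or 3-channel tensors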
================================================
FILE: module/__init__.py
================================================
================================================
FILE: module/detect/README.md
================================================
# Detect
Detection module based on YOLOv5.
Reference: [YOLOv5 official repository](https://github.com/ultralytics/yolov5)
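A minimal loading sketch (the checkpoint path is illustrative; `attempt_load` lives in `models/experimental.py`):
```python
import torch
from models.experimental import attempt_load

# load a YOLOv5 checkpoint onto CPU (path is illustrative)
model = attempt_load('yolov5s.pt', device=torch.device('cpu'))
model.eval()
```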
================================================
FILE: module/detect/models/__init__.py
================================================
================================================
FILE: module/detect/models/common.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Common modules
"""
import json
import os
import platform
import sys
import warnings
from collections import OrderedDict, namedtuple
from copy import copy
from pathlib import Path
import cv2
import math
import numpy as np
import pandas as pd
import requests
import torch
import torch.nn as nn
import yaml
from PIL import Image
from torch.cuda import amp
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # YOLOv5 root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
if platform.system() != 'Windows':
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
from utils.dataloaders import exif_transpose, letterbox
from utils.general import (LOGGER, check_requirements, check_suffix, check_version, colorstr, increment_path,
make_divisible, non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import copy_attr, time_sync
def autopad(k, p=None): # kernel, padding
# Pad to 'same'
if p is None:
p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
return p
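# Examples: autopad(3) -> 1, autopad(5) -> 2, autopad((1, 3)) -> [0, 1];
# an explicit p is returned unchanged, e.g. autopad(3, 0) -> 0.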
class Conv(nn.Module):
# Standard convolution
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
self.bn = nn.BatchNorm2d(c2)
self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
def forward(self, x):
return self.act(self.bn(self.conv(x)))
def forward_fuse(self, x):
return self.act(self.conv(x))
class DWConv(Conv):
# Depth-wise convolution class
def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
class DWConvTranspose2d(nn.ConvTranspose2d):
# Depth-wise transpose convolution class
def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out
super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
class TransformerLayer(nn.Module):
# Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
def __init__(self, c, num_heads):
super().__init__()
self.q = nn.Linear(c, c, bias=False)
self.k = nn.Linear(c, c, bias=False)
self.v = nn.Linear(c, c, bias=False)
self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
self.fc1 = nn.Linear(c, c, bias=False)
self.fc2 = nn.Linear(c, c, bias=False)
def forward(self, x):
x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
x = self.fc2(self.fc1(x)) + x
return x
class TransformerBlock(nn.Module):
# Vision Transformer https://arxiv.org/abs/2010.11929
def __init__(self, c1, c2, num_heads, num_layers):
super().__init__()
self.conv = None
if c1 != c2:
self.conv = Conv(c1, c2)
self.linear = nn.Linear(c2, c2) # learnable position embedding
self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
self.c2 = c2
def forward(self, x):
if self.conv is not None:
x = self.conv(x)
b, _, w, h = x.shape
p = x.flatten(2).permute(2, 0, 1)
return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)
class Bottleneck(nn.Module):
# Standard bottleneck
def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_, c2, 3, 1, g=g)
self.add = shortcut and c1 == c2
def forward(self, x):
return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
class BottleneckCSP(nn.Module):
# CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
self.cv4 = Conv(2 * c_, c2, 1, 1)
self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
self.act = nn.SiLU()
self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
def forward(self, x):
y1 = self.cv3(self.m(self.cv1(x)))
y2 = self.cv2(x)
return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1))))
class CrossConv(nn.Module):
# Cross Convolution Downsample
def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
# ch_in, ch_out, kernel, stride, groups, expansion, shortcut
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, (1, k), (1, s))
self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
self.add = shortcut and c1 == c2
def forward(self, x):
return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
class C3(nn.Module):
# CSP Bottleneck with 3 convolutions
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c1, c_, 1, 1)
self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2)
self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
def forward(self, x):
return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
class C3x(C3):
# C3 module with cross-convolutions
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e)
self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)))
class C3TR(C3):
# C3 module with TransformerBlock()
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e)
self.m = TransformerBlock(c_, c_, 4, n)
class C3SPP(C3):
# C3 module with SPP()
def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e)
self.m = SPP(c_, c_, k)
class C3Ghost(C3):
# C3 module with GhostBottleneck()
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
super().__init__(c1, c2, n, shortcut, g, e)
c_ = int(c2 * e) # hidden channels
self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))
class SPP(nn.Module):
# Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
def __init__(self, c1, c2, k=(5, 9, 13)):
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
def forward(self, x):
x = self.cv1(x)
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
class SPPF(nn.Module):
# Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_ * 4, c2, 1, 1)
self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
def forward(self, x):
x = self.cv1(x)
with warnings.catch_warnings():
warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
y1 = self.m(x)
y2 = self.m(y1)
return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
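# Note: two chained 5x5 stride-1 max-pools have a 9x9 receptive field and three
# have 13x13, so (x, y1, y2, m(y2)) reproduces SPP(k=(5, 9, 13)) with fewer ops.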
class Focus(nn.Module):
# Focus wh information into c-space
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
# self.contract = Contract(gain=2)
def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1))
# return self.conv(self.contract(x))
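# Example: for x of shape (b, 3, 640, 640) each of the four stride-2 slices is
# (b, 3, 320, 320); their concatenation is (b, 12, 320, 320) before the conv.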
class GhostConv(nn.Module):
# Ghost Convolution https://github.com/huawei-noah/ghostnet
def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
super().__init__()
c_ = c2 // 2 # hidden channels
self.cv1 = Conv(c1, c_, k, s, None, g, act)
self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
def forward(self, x):
y = self.cv1(x)
return torch.cat((y, self.cv2(y)), 1)
class GhostBottleneck(nn.Module):
# Ghost Bottleneck https://github.com/huawei-noah/ghostnet
def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
super().__init__()
c_ = c2 // 2
self.conv = nn.Sequential(
GhostConv(c1, c_, 1, 1), # pw
DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
GhostConv(c_, c2, 1, 1, act=False)
) # pw-linear
self.shortcut = nn.Sequential(
DWConv(c1, c1, k, s, act=False), Conv(
c1, c2, 1, 1,
act=False
)
) if s == 2 else nn.Identity()
def forward(self, x):
return self.conv(x) + self.shortcut(x)
class Contract(nn.Module):
# Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
def __init__(self, gain=2):
super().__init__()
self.gain = gain
def forward(self, x):
        b, c, h, w = x.size()  # assert (h % s == 0) and (w % s == 0), 'Indivisible gain'
s = self.gain
x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
class Expand(nn.Module):
# Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
def __init__(self, gain=2):
super().__init__()
self.gain = gain
def forward(self, x):
        b, c, h, w = x.size()  # assert c % s ** 2 == 0, 'Indivisible gain'
s = self.gain
x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80)
x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160)
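# Note: Expand(gain=s) exactly inverts Contract(gain=s): for any x whose H and W
# are divisible by s, Expand(s)(Contract(s)(x)) reproduces x element-wise.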
class Concat(nn.Module):
# Concatenate a list of tensors along dimension
def __init__(self, dimension=1):
super().__init__()
self.d = dimension
def forward(self, x):
return torch.cat(x, self.d)
class DetectMultiBackend(nn.Module):
# YOLOv5 MultiBackend class for python inference on various backends
def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True):
# Usage:
# PyTorch: weights = *.pt
# TorchScript: *.torchscript
# ONNX Runtime: *.onnx
# ONNX OpenCV DNN: *.onnx with --dnn
# OpenVINO: *.xml
# CoreML: *.mlmodel
# TensorRT: *.engine
# TensorFlow SavedModel: *_saved_model
# TensorFlow GraphDef: *.pb
# TensorFlow Lite: *.tflite
# TensorFlow Edge TPU: *_edgetpu.tflite
from models.experimental import attempt_download, attempt_load # scoped to avoid circular import
super().__init__()
w = str(weights[0] if isinstance(weights, list) else weights)
pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs = self.model_type(w) # get backend
w = attempt_download(w) # download if not local
fp16 &= (pt or jit or onnx or engine) and device.type != 'cpu' # FP16
stride, names = 32, [f'class{i}' for i in range(1000)] # assign defaults
if data: # assign class names (optional)
with open(data, errors='ignore') as f:
names = yaml.safe_load(f)['names']
if pt: # PyTorch
model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
stride = max(int(model.stride.max()), 32) # model stride
names = model.module.names if hasattr(model, 'module') else model.names # get class names
model.half() if fp16 else model.float()
self.model = model # explicitly assign for to(), cpu(), cuda(), half()
elif jit: # TorchScript
LOGGER.info(f'Loading {w} for TorchScript inference...')
extra_files = {'config.txt': ''} # model metadata
model = torch.jit.load(w, _extra_files=extra_files)
model.half() if fp16 else model.float()
if extra_files['config.txt']:
d = json.loads(extra_files['config.txt']) # extra_files dict
stride, names = int(d['stride']), d['names']
elif dnn: # ONNX OpenCV DNN
LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
check_requirements(('opencv-python>=4.5.4',))
net = cv2.dnn.readNetFromONNX(w)
elif onnx: # ONNX Runtime
LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
cuda = torch.cuda.is_available()
check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
import onnxruntime
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
session = onnxruntime.InferenceSession(w, providers=providers)
meta = session.get_modelmeta().custom_metadata_map # metadata
if 'stride' in meta:
stride, names = int(meta['stride']), eval(meta['names'])
elif xml: # OpenVINO
LOGGER.info(f'Loading {w} for OpenVINO inference...')
check_requirements(('openvino',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/
from openvino.runtime import Core, Layout, get_batch
ie = Core()
if not Path(w).is_file(): # if not *.xml
w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
if network.get_parameters()[0].get_layout().empty:
network.get_parameters()[0].set_layout(Layout("NCHW"))
batch_dim = get_batch(network)
if batch_dim.is_static:
batch_size = batch_dim.get_length()
executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2
output_layer = next(iter(executable_network.outputs))
meta = Path(w).with_suffix('.yaml')
if meta.exists():
stride, names = self._load_metadata(meta) # load metadata
elif engine: # TensorRT
LOGGER.info(f'Loading {w} for TensorRT inference...')
import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0
Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
logger = trt.Logger(trt.Logger.INFO)
with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
model = runtime.deserialize_cuda_engine(f.read())
bindings = OrderedDict()
fp16 = False # default updated below
for index in range(model.num_bindings):
name = model.get_binding_name(index)
dtype = trt.nptype(model.get_binding_dtype(index))
shape = tuple(model.get_binding_shape(index))
data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(device)
bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr()))
if model.binding_is_input(index) and dtype == np.float16:
fp16 = True
binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
context = model.create_execution_context()
batch_size = bindings['images'].shape[0]
elif coreml: # CoreML
LOGGER.info(f'Loading {w} for CoreML inference...')
import coremltools as ct
model = ct.models.MLModel(w)
else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
if saved_model: # SavedModel
LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
import tensorflow as tf
keras = False # assume TF1 saved_model
model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w)
elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
import tensorflow as tf
def wrap_frozen_graph(gd, inputs, outputs):
x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped
ge = x.graph.as_graph_element
return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))
gd = tf.Graph().as_graph_def() # graph_def
with open(w, 'rb') as f:
gd.ParseFromString(f.read())
frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs="Identity:0")
elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
from tflite_runtime.interpreter import Interpreter, load_delegate
except ImportError:
import tensorflow as tf
Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate,
if edgetpu: # Edge TPU https://coral.ai/software/#edgetpu-runtime
LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
delegate = {
'Linux': 'libedgetpu.so.1',
'Darwin': 'libedgetpu.1.dylib',
'Windows': 'edgetpu.dll'}[platform.system()]
interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)])
else: # Lite
LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
interpreter = Interpreter(model_path=w) # load TFLite model
interpreter.allocate_tensors() # allocate
input_details = interpreter.get_input_details() # inputs
output_details = interpreter.get_output_details() # outputs
elif tfjs:
raise Exception('ERROR: YOLOv5 TF.js inference is not supported')
else:
raise Exception(f'ERROR: {w} is not a supported format')
self.__dict__.update(locals()) # assign all variables to self
def forward(self, im, augment=False, visualize=False, val=False):
# YOLOv5 MultiBackend inference
b, ch, h, w = im.shape # batch, channel, height, width
if self.fp16 and im.dtype != torch.float16:
im = im.half() # to FP16
if self.pt: # PyTorch
y = self.model(im, augment=augment, visualize=visualize)[0]
elif self.jit: # TorchScript
y = self.model(im)[0]
elif self.dnn: # ONNX OpenCV DNN
im = im.cpu().numpy() # torch to numpy
self.net.setInput(im)
y = self.net.forward()
elif self.onnx: # ONNX Runtime
im = im.cpu().numpy() # torch to numpy
y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
elif self.xml: # OpenVINO
im = im.cpu().numpy() # FP32
y = self.executable_network([im])[self.output_layer]
elif self.engine: # TensorRT
assert im.shape == self.bindings['images'].shape, (im.shape, self.bindings['images'].shape)
self.binding_addrs['images'] = int(im.data_ptr())
self.context.execute_v2(list(self.binding_addrs.values()))
y = self.bindings['output'].data
elif self.coreml: # CoreML
im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
im = Image.fromarray((im[0] * 255).astype('uint8'))
# im = im.resize((192, 320), Image.ANTIALIAS)
y = self.model.predict({'image': im}) # coordinates are xywh normalized
if 'confidence' in y:
box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels
                conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float32)  # np.float removed in NumPy >= 1.24
y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
else:
k = 'var_' + str(sorted(int(k.replace('var_', '')) for k in y)[-1]) # output key
y = y[k] # output
else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
if self.saved_model: # SavedModel
y = (self.model(im, training=False) if self.keras else self.model(im)).numpy()
elif self.pb: # GraphDef
y = self.frozen_func(x=self.tf.constant(im)).numpy()
else: # Lite or Edge TPU
input, output = self.input_details[0], self.output_details[0]
int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model
if int8:
scale, zero_point = input['quantization']
im = (im / scale + zero_point).astype(np.uint8) # de-scale
self.interpreter.set_tensor(input['index'], im)
self.interpreter.invoke()
y = self.interpreter.get_tensor(output['index'])
if int8:
scale, zero_point = output['quantization']
y = (y.astype(np.float32) - zero_point) * scale # re-scale
y[..., :4] *= [w, h, w, h] # xywh normalized to pixels
if isinstance(y, np.ndarray):
y = torch.tensor(y, device=self.device)
return (y, []) if val else y
def warmup(self, imgsz=(1, 3, 640, 640)):
# Warmup model by running inference once
warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb
if any(warmup_types) and self.device.type != 'cpu':
im = torch.zeros(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input
            for _ in range(2 if self.jit else 1):
self.forward(im) # warmup
@staticmethod
def model_type(p='path/to/model.pt'):
# Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
from export import export_formats
suffixes = list(export_formats().Suffix) + ['.xml'] # export suffixes
check_suffix(p, suffixes) # checks
p = Path(p).name # eliminate trailing separators
pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, xml2 = (s in p for s in suffixes)
xml |= xml2 # *_openvino_model or *.xml
tflite &= not edgetpu # *.tflite
return pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs
@staticmethod
def _load_metadata(f='path/to/meta.yaml'):
# Load metadata from meta.yaml if it exists
with open(f, errors='ignore') as f:
d = yaml.safe_load(f)
return d['stride'], d['names'] # assign stride, names
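# --- usage sketch (comments only) -------------------------------------------
# Driving DetectMultiBackend directly (weights path illustrative):
#   model = DetectMultiBackend('yolov5s.pt', device=torch.device('cpu'))
#   model.warmup(imgsz=(1, 3, 640, 640))  # no-op on CPU
#   im = torch.zeros(1, 3, 640, 640)      # BCHW float in [0, 1]
#   pred = model(im)                      # raw predictions, before NMS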
class AutoShape(nn.Module):
# YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
conf = 0.25 # NMS confidence threshold
iou = 0.45 # NMS IoU threshold
agnostic = False # NMS class-agnostic
multi_label = False # NMS multiple labels per box
classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
max_det = 1000 # maximum number of detections per image
amp = False # Automatic Mixed Precision (AMP) inference
def __init__(self, model, verbose=True):
super().__init__()
if verbose:
LOGGER.info('Adding AutoShape... ')
copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes
self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance
self.pt = not self.dmb or model.pt # PyTorch model
self.model = model.eval()
def _apply(self, fn):
# Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
self = super()._apply(fn)
if self.pt:
m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect()
m.stride = fn(m.stride)
m.grid = list(map(fn, m.grid))
if isinstance(m.anchor_grid, list):
m.anchor_grid = list(map(fn, m.anchor_grid))
return self
@torch.no_grad()
def forward(self, imgs, size=640, augment=False, profile=False):
# Inference from various sources. For height=640, width=1280, RGB images example inputs are:
# file: imgs = 'data/images/zidane.jpg' # str or PosixPath
# URI: = 'https://ultralytics.com/images/zidane.jpg'
# OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
# PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
# numpy: = np.zeros((640,1280,3)) # HWC
# torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
# multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
t = [time_sync()]
p = next(self.model.parameters()) if self.pt else torch.zeros(1, device=self.model.device) # for device, type
autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference
if isinstance(imgs, torch.Tensor): # torch
with amp.autocast(autocast):
return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
# Pre-process
n, imgs = (len(imgs), list(imgs)) if isinstance(imgs, (list, tuple)) else (1, [imgs]) # number, list of images
shape0, shape1, files = [], [], [] # image and inference shapes, filenames
for i, im in enumerate(imgs):
f = f'image{i}' # filename
if isinstance(im, (str, Path)): # filename or uri
im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
im = np.asarray(exif_transpose(im))
elif isinstance(im, Image.Image): # PIL Image
im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
files.append(Path(f).with_suffix('.jpg').name)
if im.shape[0] < 5: # image in CHW
im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input
s = im.shape[:2] # HWC
shape0.append(s) # image shape
g = (size / max(s)) # gain
shape1.append([y * g for y in s])
imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
shape1 = [make_divisible(x, self.stride) if self.pt else size for x in np.array(shape1).max(0)] # inf shape
x = [letterbox(im, shape1, auto=False)[0] for im in imgs] # pad
x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW
x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
t.append(time_sync())
with amp.autocast(autocast):
# Inference
y = self.model(x, augment, profile) # forward
t.append(time_sync())
# Post-process
y = non_max_suppression(
y if self.dmb else y[0],
self.conf,
self.iou,
self.classes,
self.agnostic,
self.multi_label,
max_det=self.max_det
) # NMS
for i in range(n):
scale_coords(shape1, y[i][:, :4], shape0[i])
t.append(time_sync())
return Detections(imgs, y, files, t, self.names, x.shape)
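# --- usage sketch (comments only) -------------------------------------------
# AutoShape accepts file paths, URLs, PIL/numpy images, or torch tensors
# (inputs illustrative):
#   model = AutoShape(DetectMultiBackend('yolov5s.pt'))
#   results = model('data/images/zidane.jpg', size=640)
#   results.print()                  # per-image summary
#   df = results.pandas().xyxy[0]    # detections as a DataFrame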
class Detections:
# YOLOv5 detections class for inference results
def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, shape=None):
super().__init__()
d = pred[0].device # device
gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs] # normalizations
self.imgs = imgs # list of images as numpy arrays
self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
self.names = names # class names
self.files = files # image filenames
self.times = times # profiling times
self.xyxy = pred # xyxy pixels
self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
self.n = len(self.pred) # number of images (batch size)
        self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3))  # per-image durations (ms)
self.s = shape # inference BCHW shape
def display(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')):
crops = []
for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string
if pred.shape[0]:
for c in pred[:, -1].unique():
n = (pred[:, -1] == c).sum() # detections per class
s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
if show or save or render or crop:
annotator = Annotator(im, example=str(self.names))
for *box, conf, cls in reversed(pred): # xyxy, confidence, class
label = f'{self.names[int(cls)]} {conf:.2f}'
if crop:
file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
crops.append(
{
'box': box,
'conf': conf,
'cls': cls,
'label': label,
'im': save_one_box(box, im, file=file, save=save)}
)
else: # all others
annotator.box_label(box, label if labels else '', color=colors(cls))
im = annotator.im
else:
s += '(no detections)'
im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
if pprint:
print(s.rstrip(', '))
if show:
im.show(self.files[i]) # show
if save:
f = self.files[i]
im.save(save_dir / f) # save
if i == self.n - 1:
LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
if render:
self.imgs[i] = np.asarray(im)
if crop:
if save:
LOGGER.info(f'Saved results to {save_dir}\n')
return crops
def print(self):
self.display(pprint=True) # print results
print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t)
def show(self, labels=True):
self.display(show=True, labels=labels) # show results
def save(self, labels=True, save_dir='runs/detect/exp'):
save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir
self.display(save=True, labels=labels, save_dir=save_dir) # save results
def crop(self, save=True, save_dir='runs/detect/exp'):
save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
return self.display(crop=True, save=save, save_dir=save_dir) # crop results
def render(self, labels=True):
self.display(render=True, labels=labels) # render results
return self.imgs
def pandas(self):
# return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
new = copy(self) # return copy
ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
return new
def tolist(self):
# return a list of Detections objects, i.e. 'for result in results.tolist():'
r = range(self.n) # iterable
x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
# for d in x:
# for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
# setattr(d, k, getattr(d, k)[0]) # pop out of list
return x
def __len__(self):
return self.n # override len(results)
def __str__(self):
self.print() # override print(results)
return ''
class Classify(nn.Module):
# Classification head, i.e. x(b,c1,20,20) to x(b,c2)
def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1)
self.flat = nn.Flatten()
def forward(self, x):
z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
return self.flat(self.conv(z)) # flatten to x(b,c2)
================================================
FILE: module/detect/models/experimental.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Experimental modules
"""
import math
import numpy as np
import torch
import torch.nn as nn
from models.common import Conv
from utils.downloads import attempt_download
class Sum(nn.Module):
# Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
def __init__(self, n, weight=False): # n: number of inputs
super().__init__()
self.weight = weight # apply weights boolean
self.iter = range(n - 1) # iter object
if weight:
self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True) # layer weights
def forward(self, x):
y = x[0] # no weight
if self.weight:
w = torch.sigmoid(self.w) * 2
for i in self.iter:
y = y + x[i + 1] * w[i]
else:
for i in self.iter:
y = y + x[i + 1]
return y
class MixConv2d(nn.Module):
# Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595
def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): # ch_in, ch_out, kernel, stride, ch_strategy
super().__init__()
n = len(k) # number of convolutions
if equal_ch: # equal c_ per group
i = torch.linspace(0, n - 1E-6, c2).floor() # c2 indices
c_ = [(i == g).sum() for g in range(n)] # intermediate channels
else: # equal weight.numel() per group
b = [c2] + [0] * n
a = np.eye(n + 1, n, k=-1)
a -= np.roll(a, 1, axis=1)
a *= np.array(k) ** 2
a[0] = 1
c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
self.m = nn.ModuleList([
nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
self.bn = nn.BatchNorm2d(c2)
self.act = nn.SiLU()
def forward(self, x):
return self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
class Ensemble(nn.ModuleList):
# Ensemble of models
def __init__(self):
super().__init__()
def forward(self, x, augment=False, profile=False, visualize=False):
y = [module(x, augment, profile, visualize)[0] for module in self]
# y = torch.stack(y).max(0)[0] # max ensemble
# y = torch.stack(y).mean(0) # mean ensemble
y = torch.cat(y, 1) # nms ensemble
return y, None # inference, train output
def attempt_load(weights, device=None, inplace=True, fuse=True):
# Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
from models.yolo import Detect, Model
model = Ensemble()
for w in weights if isinstance(weights, list) else [weights]:
ckpt = torch.load(attempt_download(w), map_location='cpu') # load
ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model
model.append(ckpt.fuse().eval() if fuse else ckpt.eval()) # fused or un-fused model in eval mode
# Compatibility updates
for m in model.modules():
t = type(m)
if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
m.inplace = inplace # torch 1.7.0 compatibility
if t is Detect and not isinstance(m.anchor_grid, list):
delattr(m, 'anchor_grid')
setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
elif t is Conv:
m._non_persistent_buffers_set = set() # torch 1.6.0 compatibility
elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'):
m.recompute_scale_factor = None # torch 1.11.0 compatibility
if len(model) == 1:
return model[-1] # return model
print(f'Ensemble created with {weights}\n')
for k in 'names', 'nc', 'yaml':
setattr(model, k, getattr(model[0], k))
model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
assert all(model[0].nc == m.nc for m in model), f'Models have different class counts: {[m.nc for m in model]}'
return model # return ensemble
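# --- usage sketch (comments only) -------------------------------------------
# attempt_load accepts a single checkpoint or a list, which builds an NMS
# ensemble (paths illustrative):
#   model = attempt_load('yolov5s.pt', device=torch.device('cpu'))
#   ensemble = attempt_load(['yolov5s.pt', 'yolov5m.pt'])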
================================================
FILE: module/detect/models/hub/anchors.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Default anchors for COCO data
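# Each anchor is a width,height pair in pixels; every row lists the three
# anchors assigned to the output layer at the indicated stride (P3/8, P4/16, ...)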
# P5 -------------------------------------------------------------------------------------------------------------------
# P5-640:
anchors_p5_640:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# P6 -------------------------------------------------------------------------------------------------------------------
# P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
anchors_p6_640:
- [9,11, 21,19, 17,41] # P3/8
- [43,32, 39,70, 86,64] # P4/16
- [65,131, 134,130, 120,265] # P5/32
- [282,180, 247,354, 512,387] # P6/64
# P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
anchors_p6_1280:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
anchors_p6_1920:
- [28,41, 67,59, 57,141] # P3/8
- [144,103, 129,227, 270,205] # P4/16
- [209,452, 455,396, 358,812] # P5/32
- [653,922, 1109,570, 1387,1187] # P6/64
# P7 -------------------------------------------------------------------------------------------------------------------
# P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
anchors_p7_640:
- [11,11, 13,30, 29,20] # P3/8
- [30,46, 61,38, 39,92] # P4/16
- [78,80, 146,66, 79,163] # P5/32
- [149,150, 321,143, 157,303] # P6/64
- [257,402, 359,290, 524,372] # P7/128
# P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
anchors_p7_1280:
- [19,22, 54,36, 32,77] # P3/8
- [70,83, 138,71, 75,173] # P4/16
- [165,159, 148,334, 375,151] # P5/32
- [334,317, 251,626, 499,474] # P6/64
- [750,326, 534,814, 1079,818] # P7/128
# P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
anchors_p7_1920:
- [29,34, 81,55, 47,115] # P3/8
- [105,124, 207,107, 113,259] # P4/16
- [247,238, 222,500, 563,227] # P5/32
- [501,476, 376,939, 749,711] # P6/64
- [1126,489, 801,1222, 1618,1227] # P7/128
================================================
FILE: module/detect/models/hub/yolov3-spp.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# darknet53 backbone
backbone:
# [from, number, module, args]
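  # e.g. [-1, 1, Conv, [32, 3, 1]] reads: input from the previous layer (-1),
  # one repeat (scaled by depth_multiple), module Conv, args [channels, kernel, stride]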
[[-1, 1, Conv, [32, 3, 1]], # 0
[-1, 1, Conv, [64, 3, 2]], # 1-P1/2
[-1, 1, Bottleneck, [64]],
[-1, 1, Conv, [128, 3, 2]], # 3-P2/4
[-1, 2, Bottleneck, [128]],
[-1, 1, Conv, [256, 3, 2]], # 5-P3/8
[-1, 8, Bottleneck, [256]],
[-1, 1, Conv, [512, 3, 2]], # 7-P4/16
[-1, 8, Bottleneck, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
[-1, 4, Bottleneck, [1024]], # 10
]
# YOLOv3-SPP head
head:
[[-1, 1, Bottleneck, [1024, False]],
[-1, 1, SPP, [512, [5, 9, 13]]],
[-1, 1, Conv, [1024, 3, 1]],
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P4
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P3
[-1, 1, Bottleneck, [256, False]],
[-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
[[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov3-tiny.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,14, 23,27, 37,58] # P4/16
- [81,82, 135,169, 344,319] # P5/32
# YOLOv3-tiny backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [16, 3, 1]], # 0
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 1-P1/2
[-1, 1, Conv, [32, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 3-P2/4
[-1, 1, Conv, [64, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 5-P3/8
[-1, 1, Conv, [128, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 7-P4/16
[-1, 1, Conv, [256, 3, 1]],
[-1, 1, nn.MaxPool2d, [2, 2, 0]], # 9-P5/32
[-1, 1, Conv, [512, 3, 1]],
[-1, 1, nn.ZeroPad2d, [[0, 1, 0, 1]]], # 11
[-1, 1, nn.MaxPool2d, [2, 1, 0]], # 12
]
# YOLOv3-tiny head
head:
[[-1, 1, Conv, [1024, 3, 1]],
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [512, 3, 1]], # 15 (P5/32-large)
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P4
[-1, 1, Conv, [256, 3, 1]], # 19 (P4/16-medium)
[[19, 15], 1, Detect, [nc, anchors]], # Detect(P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov3.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# darknet53 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [32, 3, 1]], # 0
[-1, 1, Conv, [64, 3, 2]], # 1-P1/2
[-1, 1, Bottleneck, [64]],
[-1, 1, Conv, [128, 3, 2]], # 3-P2/4
[-1, 2, Bottleneck, [128]],
[-1, 1, Conv, [256, 3, 2]], # 5-P3/8
[-1, 8, Bottleneck, [256]],
[-1, 1, Conv, [512, 3, 2]], # 7-P4/16
[-1, 8, Bottleneck, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
[-1, 4, Bottleneck, [1024]], # 10
]
# YOLOv3 head
head:
[[-1, 1, Bottleneck, [1024, False]],
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [1024, 3, 1]],
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
[-2, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P4
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Bottleneck, [512, False]],
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
[-2, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P3
[-1, 1, Bottleneck, [256, False]],
[-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
[[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov5-bifpn.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 BiFPN head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14, 6], 1, Concat, [1]], # cat P4 <--- BiFPN change
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov5-fpn.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 FPN head
head:
[[-1, 3, C3, [1024, False]], # 10 (P5/32-large)
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 1, Conv, [512, 1, 1]],
[-1, 3, C3, [512, False]], # 14 (P4/16-medium)
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 1, Conv, [256, 1, 1]],
[-1, 3, C3, [256, False]], # 18 (P3/8-small)
[[18, 14, 10], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov5-p2.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head with (P2, P3, P4, P5) outputs
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [128, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 2], 1, Concat, [1]], # cat backbone P2
[-1, 1, C3, [128, False]], # 21 (P2/4-xsmall)
[-1, 1, Conv, [128, 3, 2]],
[[-1, 18], 1, Concat, [1]], # cat head P3
[-1, 3, C3, [256, False]], # 24 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 27 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 30 (P5/32-large)
[[21, 24, 27, 30], 1, Detect, [nc, anchors]], # Detect(P2, P3, P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov5-p34.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[ [ -1, 1, Conv, [ 64, 6, 2, 2 ] ], # 0-P1/2
[ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
[ -1, 3, C3, [ 128 ] ],
[ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
[ -1, 6, C3, [ 256 ] ],
[ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
[ -1, 9, C3, [ 512 ] ],
[ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
[ -1, 3, C3, [ 1024 ] ],
[ -1, 1, SPPF, [ 1024, 5 ] ], # 9
]
# YOLOv5 v6.0 head with (P3, P4) outputs
head:
[ [ -1, 1, Conv, [ 512, 1, 1 ] ],
[ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
[ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
[ -1, 3, C3, [ 512, False ] ], # 13
[ -1, 1, Conv, [ 256, 1, 1 ] ],
[ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
[ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
[ -1, 3, C3, [ 256, False ] ], # 17 (P3/8-small)
[ -1, 1, Conv, [ 256, 3, 2 ] ],
[ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
[ -1, 3, C3, [ 512, False ] ], # 20 (P4/16-medium)
[ [ 17, 20 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4)
]
================================================
FILE: module/detect/models/hub/yolov5-p6.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head with (P3, P4, P5, P6) outputs
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]
================================================
FILE: module/detect/models/hub/yolov5-p7.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, Conv, [1280, 3, 2]], # 11-P7/128
[-1, 3, C3, [1280]],
[-1, 1, SPPF, [1280, 5]], # 13
]
# YOLOv5 v6.0 head with (P3, P4, P5, P6, P7) outputs
head:
[[-1, 1, Conv, [1024, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 10], 1, Concat, [1]], # cat backbone P6
[-1, 3, C3, [1024, False]], # 17
[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 21
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 25
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 29 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 26], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 32 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 22], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 35 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 18], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 38 (P6/64-xlarge)
[-1, 1, Conv, [1024, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P7
[-1, 3, C3, [1280, False]], # 41 (P7/128-xxlarge)
[[29, 32, 35, 38, 41], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6, P7)
]
================================================
FILE: module/detect/models/hub/yolov5-panet.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 PANet head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
================================================
FILE: module/detect/models/hub/yolov5l6.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]
================================================
FILE: module/detect/models/hub/yolov5m6.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.67 # model depth multiple
width_multiple: 0.75 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]
================================================
FILE: module/detect/models/hub/yolov5n6.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.25 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]
================================================
FILE: module/detect/models/hub/yolov5s-ghost.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, GhostConv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3Ghost, [128]],
[-1, 1, GhostConv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3Ghost, [256]],
[-1, 1, GhostConv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3Ghost, [512]],
[-1, 1, GhostConv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3Ghost, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, GhostConv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3Ghost, [512, False]], # 13
[-1, 1, GhostConv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3Ghost, [256, False]], # 17 (P3/8-small)
[-1, 1, GhostConv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3Ghost, [512, False]], # 20 (P4/16-medium)
[-1, 1, GhostConv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3Ghost, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
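
Relative to the standard s-model, this config swaps Conv/C3 for GhostConv/C3Ghost from models/common.py: half of each layer's output channels come from an ordinary convolution, and the other half from a cheap depthwise convolution over those same features (the GhostNet trick). A stripped-down sketch of the idea; the repo's GhostConv additionally wraps each conv in BN and an activation:

import torch
import torch.nn as nn

class GhostConvSketch(nn.Module):
    # half the outputs from a regular conv, half from a cheap 5x5 depthwise
    # conv applied to those features, concatenated along the channel axis
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        c_ = c2 // 2
        self.cv1 = nn.Conv2d(c1, c_, k, s, k // 2, bias=False)
        self.cv2 = nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False)

    def forward(self, x):
        y = self.cv1(x)
        return torch.cat((y, self.cv2(y)), 1)

print(GhostConvSketch(64, 128)(torch.randn(1, 64, 32, 32)).shape)  # (1, 128, 32, 32)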
================================================
FILE: module/detect/models/hub/yolov5s-transformer.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3TR, [1024]], # 8 <--- C3TR() Transformer module
[-1, 1, SPPF, [1024, 5]], # 9
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
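
The only change from the plain s-model is layer 8, where C3TR swaps the usual C3 bottlenecks for a TransformerBlock (models/common.py) that runs multi-head self-attention over the flattened P5 feature grid. A self-contained sketch of one constituent layer, mirroring TransformerLayer in models/common.py:

import torch
import torch.nn as nn

class TransformerLayerSketch(nn.Module):
    # multi-head self-attention plus a two-layer linear block, each with a
    # residual connection, applied to the flattened spatial grid (H*W tokens)
    def __init__(self, c, num_heads):
        super().__init__()
        self.q = nn.Linear(c, c, bias=False)
        self.k = nn.Linear(c, c, bias=False)
        self.v = nn.Linear(c, c, bias=False)
        self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
        self.fc1 = nn.Linear(c, c, bias=False)
        self.fc2 = nn.Linear(c, c, bias=False)

    def forward(self, x):  # x: (seq_len, batch, c)
        x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
        return self.fc2(self.fc1(x)) + x

x = torch.randn(8 * 8, 2, 256)  # an 8x8 feature map flattened into 64 tokens
print(TransformerLayerSketch(256, num_heads=4)(x).shape)  # (64, 2, 256)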
================================================
FILE: module/detect/models/hub/yolov5s6.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]
================================================
FILE: module/detect/models/hub/yolov5x6.yaml
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Parameters
nc: 80 # number of classes
depth_multiple: 1.33 # model depth multiple
width_multiple: 1.25 # layer channel multiple
anchors:
- [19,27, 44,40, 38,94] # P3/8
- [96,68, 86,152, 180,137] # P4/16
- [140,301, 303,264, 238,542] # P5/32
- [436,615, 739,380, 925,792] # P6/64
# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [768, 3, 2]], # 7-P5/32
[-1, 3, C3, [768]],
[-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 11
]
# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [768, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 8], 1, Concat, [1]], # cat backbone P5
[-1, 3, C3, [768, False]], # 15
[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 19
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 23 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 20], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 26 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 16], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [768, False]], # 29 (P5/32-large)
[-1, 1, Conv, [768, 3, 2]],
[[-1, 12], 1, Concat, [1]], # cat head P6
[-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
[[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
]
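
That closes the *6 family: the same P3-P6 topology at five depth/width settings, with the extra P6/64 branch served by the fourth row of large anchors. At inference, Detect (models/yolo.py) converts each grid cell's raw outputs into pixel-space boxes using the cell's offset, its anchor, and the level's stride; a minimal sketch of that sigmoid-based decoding:

import torch

def decode_box(t_xywh, grid_xy, anchor_wh, stride):
    # raw xy/wh outputs -> pixel-space center and size, relative to the
    # predicting cell's grid position, anchor box, and feature-map stride
    s = t_xywh.sigmoid()
    xy = (s[..., :2] * 2 - 0.5 + grid_xy) * stride  # box center (pixels)
    wh = (s[..., 2:4] * 2) ** 2 * anchor_wh         # box size (pixels)
    return torch.cat((xy, wh), -1)

# e.g. cell (10, 7) of the P6/64 map with the 436x615 anchor listed above
print(decode_box(torch.zeros(4), torch.tensor([10., 7.]),
                 torch.tensor([436., 615.]), stride=64))  # [672., 480., 436., 615.]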
================================================
FILE: module/detect/models/tf.py
================================================
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
TensorFlow, Keras and TFLite versions of YOLOv5
Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127
Usage:
$ python models/tf.py --weights yolov5s.pt
Export:
$ python path/to/export.py --weights yolov5s.pt --include saved_model pb tflite tfjs
"""
import argparse
import sys
from copy import deepcopy
from pathlib import Path
FILE = Path(__file__).resolve()
ROOT = FILE.parents[1] # YOLOv5 root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
# ROOT = ROOT.relative_to(Path.cwd()) # relative
import numpy as np
import tensorflow as tf
import torch
import torch.nn as nn
from tensorflow import keras
from models.common import (C3, SPP, SPPF, Bottleneck, BottleneckCSP, C3x, Concat, Conv, CrossConv, DWConv,
DWConvTranspose2d, Focus, autopad)
from models.experimental import MixConv2d, attempt_load
from models.yolo import Detect
from utils.activations import SiLU
from utils.general import LOGGER, make_divisible, print_args
class TFBN(keras.layers.Layer):
# TensorFlow BatchNormalization wrapper
def __init__(self, w=None):
super().__init__()
self.bn = keras.layers.BatchNormalization(
beta_initializer=keras.initializers.Constant(w.bias.numpy()),
gamma_initializer=keras.initializers.Constant(w.weight.numpy()),
moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()),
moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()),
epsilon=w.eps)
def call(self, inputs):
return self.bn(inputs)
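# Because the Keras BatchNormalization above is initialized directly from the
# trained PyTorch weight, bias, running statistics and eps, it reproduces
# eval-mode BatchNorm2d exactly at inference time; no re-training is involved.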
class TFPad(keras.layers.Layer):
# Pad inputs in spatial dimensions 1 and 2
def __init__(self, pad):
super().__init__()
if isinstance(pad, int):
self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])
else: # tuple/list
self.pad = tf.constant([[0, 0], [pad[0], pad[0]], [pad[1], pad[1]], [0, 0]])
def call(self, inputs):
return tf.pad(inputs, self.pad, mode='constant', constant_values=0)
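# Note: tensors on the TF side are NHWC, so the pad spec
# [[0, 0], [p, p], [p, p], [0, 0]] pads only the two spatial dimensions
# (axes 1 and 2), matching PyTorch's symmetric Conv2d padding on NCHW input.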
class TFConv(keras.layers.Layer):
# Standard convolution
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
# ch_in, ch_out, weights, kernel, stride, padding, groups
super().__init__()
assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
# TensorFlow convolution padding is inconsistent with PyTorch (e.g. k=3 s=2 'SAME' padding)
# see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch
conv = keras.layers.Conv2D(
filters=c2,
kernel_size=k,
strides=s,
padding='SAME' if s == 1 else 'VALID',
use_bias=not hasattr(w, 'bn'),
kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
self.act = activations(w.act) if act else tf.identity
def call(self, inputs):
return self.act(self.bn(self.conv(inputs)))
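# Note on the kernel copy above: PyTorch Conv2d stores weights as OIHW
# (out_ch, in_ch, kH, kW), while Keras Conv2D expects HWIO
# (kH, kW, in_ch, out_ch) -- hence permute(2, 3, 1, 0), e.g.
#   w.conv.weight.permute(2, 3, 1, 0)  # (64, 32, 3, 3) -> (3, 3, 32, 64)
# And because Keras 'SAME' padding disagrees with PyTorch when s > 1, strided
# convolutions pad explicitly via TFPad and run the conv with 'VALID' instead.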
class TFDWConv(keras.layers.Layer):
# Depthwise convolution
def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None):
# ch_in, ch_out, weights, kernel, stride, padding, groups
super().__init__()
assert c2 % c1 == 0, f'TFDWConv() output={c2} must be a multiple of input={c1} channels'
conv = keras.layers.DepthwiseConv2D(
kernel_size=k,
depth_multiplier=c2 // c1,
strides=s,
padding='SAME' if s == 1 else 'VALID',
use_bias=not hasattr(w, 'bn'),
            depthwise_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 0, 1).numpy()),
            bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
        self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
        self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
        self.act = activations(w.act) if act else tf.identity

    def call(self, inputs):
        return self.act(self.bn(self.conv(inputs)))
================================================
SYMBOL INDEX (609 symbols across 51 files)
================================================
FILE: config/__init__.py
class ConfigDict (line 1) | class ConfigDict(dict):
function from_dict (line 6) | def from_dict(obj) -> ConfigDict:
FILE: functions/div_loss.py
function div_loss (line 7) | def div_loss(disc, real_x, fake_x, wp: int = 6, eps: float = 1e-6):
FILE: functions/get_param_groups.py
function get_param_groups (line 6) | def get_param_groups(module) -> tuple[List, List, List]:
FILE: loader/m3fd.py
class M3FD (line 20) | class M3FD(Dataset):
method __init__ (line 28) | def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pr...
method __len__ (line 68) | def __len__(self) -> int:
method __getitem__ (line 71) | def __getitem__(self, index: int) -> dict:
method train_val_item (line 79) | def train_val_item(self, index: int) -> dict:
method pred_item (line 137) | def pred_item(self, index: int) -> dict:
method pred_save (line 158) | def pred_save(fus: Tensor, names: List[str | Path], shape: List[Size],...
method pred_save_no_boxes (line 164) | def pred_save_no_boxes(fus: Tensor, names: List[str | Path], shape: Li...
method pred_save_with_boxes (line 170) | def pred_save_with_boxes(fus: Tensor, names: List[str | Path], shape: ...
method collate_fn (line 201) | def collate_fn(data: List[dict]) -> dict:
FILE: loader/roadscene.py
class RoadScene (line 16) | class RoadScene(Dataset):
method __init__ (line 20) | def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pr...
method __len__ (line 52) | def __len__(self) -> int:
method __getitem__ (line 55) | def __getitem__(self, index: int) -> dict:
method train_val_item (line 63) | def train_val_item(self, index: int) -> dict:
method pred_item (line 89) | def pred_item(self, index: int) -> dict:
method pred_save (line 110) | def pred_save(fus: Tensor, names: List[str | Path], shape: List[Size]):
method collate_fn (line 116) | def collate_fn(data: List[dict]) -> dict:
FILE: loader/tno.py
class TNO (line 16) | class TNO(Dataset):
method __init__ (line 20) | def __init__(self, root: str | Path, mode: Literal['train', 'val', 'pr...
method __len__ (line 52) | def __len__(self) -> int:
method __getitem__ (line 55) | def __getitem__(self, index: int) -> dict:
method train_val_item (line 63) | def train_val_item(self, index: int) -> dict:
method pred_item (line 89) | def pred_item(self, index: int) -> dict:
method pred_save (line 110) | def pred_save(fus: Tensor, names: List[str | Path], shape: List[Size]):
method collate_fn (line 116) | def collate_fn(data: List[dict]) -> dict:
FILE: loader/utils/checker.py
function check_image (line 15) | def check_image(root: Path, img_list: List[str]):
function check_iqa (line 24) | def check_iqa(root: Path, img_list: List[str], config: ConfigDict):
function check_labels (line 41) | def check_labels(root: Path, img_list: List[str]) -> List[Tensor]:
function check_mask (line 54) | def check_mask(root: Path, img_list: List[str], config: ConfigDict):
function get_max_size (line 71) | def get_max_size(root: Path, img_list: List[str]):
FILE: loader/utils/reader.py
function gray_read (line 13) | def gray_read(img_path: str | Path) -> Tensor:
function ycbcr_read (line 19) | def ycbcr_read(img_path: str | Path) -> Tuple[Tensor, Tensor]:
function label_read (line 27) | def label_read(label_path: str | Path) -> Tensor:
function img_write (line 34) | def img_write(img_t: Tensor, img_path: str | Path):
function label_write (line 41) | def label_write(pred_i: Tensor, txt_path: str | Path):
FILE: module/detect/models/common.py
function autopad (line 40) | def autopad(k, p=None): # kernel, padding
class Conv (line 47) | class Conv(nn.Module):
method __init__ (line 49) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in,...
method forward (line 55) | def forward(self, x):
method forward_fuse (line 58) | def forward_fuse(self, x):
class DWConv (line 62) | class DWConv(Conv):
method __init__ (line 64) | def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kern...
class DWConvTranspose2d (line 68) | class DWConvTranspose2d(nn.ConvTranspose2d):
method __init__ (line 70) | def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, ke...
class TransformerLayer (line 74) | class TransformerLayer(nn.Module):
method __init__ (line 76) | def __init__(self, c, num_heads):
method forward (line 85) | def forward(self, x):
class TransformerBlock (line 91) | class TransformerBlock(nn.Module):
method __init__ (line 93) | def __init__(self, c1, c2, num_heads, num_layers):
method forward (line 102) | def forward(self, x):
class Bottleneck (line 110) | class Bottleneck(nn.Module):
method __init__ (line 112) | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_ou...
method forward (line 119) | def forward(self, x):
class BottleneckCSP (line 123) | class BottleneckCSP(nn.Module):
method __init__ (line 125) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ...
method forward (line 136) | def forward(self, x):
class CrossConv (line 142) | class CrossConv(nn.Module):
method __init__ (line 144) | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
method forward (line 152) | def forward(self, x):
class C3 (line 156) | class C3(nn.Module):
method __init__ (line 158) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ...
method forward (line 166) | def forward(self, x):
class C3x (line 170) | class C3x(C3):
method __init__ (line 172) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
class C3TR (line 178) | class C3TR(C3):
method __init__ (line 180) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
class C3SPP (line 186) | class C3SPP(C3):
method __init__ (line 188) | def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
class C3Ghost (line 194) | class C3Ghost(C3):
method __init__ (line 196) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
class SPP (line 202) | class SPP(nn.Module):
method __init__ (line 204) | def __init__(self, c1, c2, k=(5, 9, 13)):
method forward (line 211) | def forward(self, x):
class SPPF (line 218) | class SPPF(nn.Module):
method __init__ (line 220) | def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
method forward (line 227) | def forward(self, x):
class Focus (line 236) | class Focus(nn.Module):
method __init__ (line 238) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in,...
method forward (line 243) | def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
class GhostConv (line 248) | class GhostConv(nn.Module):
method __init__ (line 250) | def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out,...
method forward (line 256) | def forward(self, x):
class GhostBottleneck (line 261) | class GhostBottleneck(nn.Module):
method __init__ (line 263) | def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
method forward (line 278) | def forward(self, x):
class Contract (line 282) | class Contract(nn.Module):
method __init__ (line 284) | def __init__(self, gain=2):
method forward (line 288) | def forward(self, x):
class Expand (line 296) | class Expand(nn.Module):
method __init__ (line 298) | def __init__(self, gain=2):
method forward (line 302) | def forward(self, x):
class Concat (line 310) | class Concat(nn.Module):
method __init__ (line 312) | def __init__(self, dimension=1):
method forward (line 316) | def forward(self, x):
class DetectMultiBackend (line 320) | class DetectMultiBackend(nn.Module):
method __init__ (line 322) | def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), d...
method forward (line 462) | def forward(self, im, augment=False, visualize=False, val=False):
method warmup (line 523) | def warmup(self, imgsz=(1, 3, 640, 640)):
method model_type (line 532) | def model_type(p='path/to/model.pt'):
method _load_metadata (line 544) | def _load_metadata(f='path/to/meta.yaml'):
class AutoShape (line 551) | class AutoShape(nn.Module):
method __init__ (line 561) | def __init__(self, model, verbose=True):
method _apply (line 570) | def _apply(self, fn):
method forward (line 582) | def forward(self, imgs, size=640, augment=False, profile=False):
class Detections (line 646) | class Detections:
method __init__ (line 648) | def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, ...
method display (line 665) | def display(self, pprint=False, show=False, save=False, crop=False, re...
method print (line 710) | def print(self):
method show (line 714) | def show(self, labels=True):
method save (line 717) | def save(self, labels=True, save_dir='runs/detect/exp'):
method crop (line 721) | def crop(self, save=True, save_dir='runs/detect/exp'):
method render (line 725) | def render(self, labels=True):
method pandas (line 729) | def pandas(self):
method tolist (line 739) | def tolist(self):
method __len__ (line 748) | def __len__(self):
method __str__ (line 751) | def __str__(self):
class Classify (line 756) | class Classify(nn.Module):
method __init__ (line 758) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, k...
method forward (line 764) | def forward(self, x):
FILE: module/detect/models/experimental.py
class Sum (line 15) | class Sum(nn.Module):
method __init__ (line 17) | def __init__(self, n, weight=False): # n: number of inputs
method forward (line 24) | def forward(self, x):
class MixConv2d (line 36) | class MixConv2d(nn.Module):
method __init__ (line 38) | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): # ch_in, ch...
method forward (line 57) | def forward(self, x):
class Ensemble (line 61) | class Ensemble(nn.ModuleList):
method __init__ (line 63) | def __init__(self):
method forward (line 66) | def forward(self, x, augment=False, profile=False, visualize=False):
function attempt_load (line 74) | def attempt_load(weights, device=None, inplace=True, fuse=True):
FILE: module/detect/models/tf.py
class TFBN (line 38) | class TFBN(keras.layers.Layer):
method __init__ (line 40) | def __init__(self, w=None):
method call (line 49) | def call(self, inputs):
class TFPad (line 53) | class TFPad(keras.layers.Layer):
method __init__ (line 55) | def __init__(self, pad):
method call (line 62) | def call(self, inputs):
class TFConv (line 66) | class TFConv(keras.layers.Layer):
method __init__ (line 68) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
method call (line 86) | def call(self, inputs):
class TFDWConv (line 90) | class TFDWConv(keras.layers.Layer):
method __init__ (line 92) | def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None):
method call (line 108) | def call(self, inputs):
class TFDWConvTranspose2d (line 112) | class TFDWConvTranspose2d(keras.layers.Layer):
method __init__ (line 114) | def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0, w=None):
method call (line 131) | def call(self, inputs):
class TFFocus (line 135) | class TFFocus(keras.layers.Layer):
method __init__ (line 137) | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
method call (line 142) | def call(self, inputs): # x(b,w,h,c) -> y(b,w/2,h/2,4c)
class TFBottleneck (line 148) | class TFBottleneck(keras.layers.Layer):
method __init__ (line 150) | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None): # ch_i...
method call (line 157) | def call(self, inputs):
class TFCrossConv (line 161) | class TFCrossConv(keras.layers.Layer):
method __init__ (line 163) | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False, w=None):
method call (line 170) | def call(self, inputs):
class TFConv2d (line 174) | class TFConv2d(keras.layers.Layer):
method __init__ (line 176) | def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):
method call (line 188) | def call(self, inputs):
class TFBottleneckCSP (line 192) | class TFBottleneckCSP(keras.layers.Layer):
method __init__ (line 194) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
method call (line 206) | def call(self, inputs):
class TFC3 (line 212) | class TFC3(keras.layers.Layer):
method __init__ (line 214) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
method call (line 223) | def call(self, inputs):
class TFC3x (line 227) | class TFC3x(keras.layers.Layer):
method __init__ (line 229) | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
method call (line 239) | def call(self, inputs):
class TFSPP (line 243) | class TFSPP(keras.layers.Layer):
method __init__ (line 245) | def __init__(self, c1, c2, k=(5, 9, 13), w=None):
method call (line 252) | def call(self, inputs):
class TFSPPF (line 257) | class TFSPPF(keras.layers.Layer):
method __init__ (line 259) | def __init__(self, c1, c2, k=5, w=None):
method call (line 266) | def call(self, inputs):
class TFDetect (line 273) | class TFDetect(keras.layers.Layer):
method __init__ (line 275) | def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None)...
method call (line 292) | def call(self, inputs):
method _make_grid (line 316) | def _make_grid(nx=20, ny=20):
class TFUpsample (line 323) | class TFUpsample(keras.layers.Layer):
method __init__ (line 325) | def __init__(self, size, scale_factor, mode, w=None): # warning: all ...
method call (line 334) | def call(self, inputs):
class TFConcat (line 338) | class TFConcat(keras.layers.Layer):
method __init__ (line 340) | def __init__(self, dimension=1, w=None):
method call (line 345) | def call(self, inputs):
function parse_model (line 349) | def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3)
class TFModel (line 403) | class TFModel:
method __init__ (line 405) | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgs...
method predict (line 421) | def predict(self,
method _xywh2xyxy (line 464) | def _xywh2xyxy(xywh):
class AgnosticNMS (line 470) | class AgnosticNMS(keras.layers.Layer):
method call (line 472) | def call(self, input, topk_all, iou_thres, conf_thres):
method _nms (line 480) | def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25): # agnosti...
function activations (line 508) | def activations(act=nn.SiLU):
function representative_dataset_gen (line 520) | def representative_dataset_gen(dataset, ncalib=100):
function run (line 531) | def run(
function parse_opt (line 556) | def parse_opt():
function main (line 568) | def main(opt):
FILE: module/detect/models/yolo.py
class Detect (line 38) | class Detect(nn.Module):
method __init__ (line 43) | def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detecti...
method forward (line 55) | def forward(self, x):
method _make_grid (line 79) | def _make_grid(self, nx=20, ny=20, i=0):
class Model (line 93) | class Model(nn.Module):
method __init__ (line 95) | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): ...
method forward (line 133) | def forward(self, x, augment=False, profile=False, visualize=False):
method _forward_augment (line 138) | def _forward_augment(self, x):
method _forward_once (line 152) | def _forward_once(self, x, profile=False, visualize=False):
method _descale_pred (line 165) | def _descale_pred(self, p, flips, scale, img_size):
method _clip_augmented (line 182) | def _clip_augmented(self, y):
method _profile_one_layer (line 193) | def _profile_one_layer(self, m, x, dt):
method _initialize_biases (line 206) | def _initialize_biases(self, cf=None): # initialize biases into Detec...
method _print_biases (line 216) | def _print_biases(self):
method fuse (line 229) | def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
method info (line 239) | def info(self, verbose=False, img_size=640): # print model information
method _apply (line 242) | def _apply(self, fn):
function parse_model (line 254) | def parse_model(d, ch): # model_dict, input_channels(3)
FILE: module/detect/utils/__init__.py
function notebook_init (line 7) | def notebook_init(verbose=True):
FILE: module/detect/utils/activations.py
class SiLU (line 11) | class SiLU(nn.Module):
method forward (line 14) | def forward(x):
class Hardswish (line 18) | class Hardswish(nn.Module):
method forward (line 21) | def forward(x):
class Mish (line 26) | class Mish(nn.Module):
method forward (line 29) | def forward(x):
class MemoryEfficientMish (line 33) | class MemoryEfficientMish(nn.Module):
class F (line 35) | class F(torch.autograd.Function):
method forward (line 38) | def forward(ctx, x):
method backward (line 43) | def backward(ctx, grad_output):
method forward (line 49) | def forward(self, x):
class FReLU (line 53) | class FReLU(nn.Module):
method __init__ (line 55) | def __init__(self, c1, k=3): # ch_in, kernel
method forward (line 60) | def forward(self, x):
class AconC (line 64) | class AconC(nn.Module):
method __init__ (line 70) | def __init__(self, c1):
method forward (line 76) | def forward(self, x):
class MetaAconC (line 81) | class MetaAconC(nn.Module):
method __init__ (line 87) | def __init__(self, c1, k=1, s=1, r=16): # ch_in, kernel, stride, r
method forward (line 97) | def forward(self, x):
FILE: module/detect/utils/augmentations.py
class Albumentations (line 15) | class Albumentations:
method __init__ (line 17) | def __init__(self):
method __call__ (line 39) | def __call__(self, im, labels, p=1.0):
function augment_hsv (line 46) | def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
function hist_equalize (line 62) | def hist_equalize(im, clahe=True, bgr=False):
function replicate (line 73) | def replicate(im, labels):
function letterbox (line 90) | def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True...
function random_perspective (line 123) | def random_perspective(
function copy_paste (line 221) | def copy_paste(im, labels, segments, p=0.5):
function cutout (line 245) | def cutout(im, labels, p=0.5):
function mixup (line 272) | def mixup(im, labels, im2, labels2):
function box_candidates (line 280) | def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1...
FILE: module/detect/utils/autoanchor.py
function check_anchor_order (line 18) | def check_anchor_order(m):
function check_anchors (line 28) | def check_anchors(dataset, model, thr=4.0, imgsz=640):
function kmean_anchors (line 68) | def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=...
FILE: module/detect/utils/autobatch.py
function check_train_batch_size (line 15) | def check_train_batch_size(model, imgsz=640, amp=True):
function autobatch (line 21) | def autobatch(model, imgsz=640, fraction=0.9, batch_size=16):
FILE: module/detect/utils/benchmarks.py
function run (line 49) | def run(
function test (line 102) | def test(
function parse_opt (line 134) | def parse_opt():
function main (line 151) | def main(opt):
FILE: module/detect/utils/callbacks.py
class Callbacks (line 7) | class Callbacks:
method __init__ (line 12) | def __init__(self):
method register_action (line 36) | def register_action(self, hook, name='', callback=None):
method get_registered_actions (line 49) | def get_registered_actions(self, hook=None):
method run (line 58) | def run(self, hook, *args, **kwargs):
FILE: module/detect/utils/dataloaders.py
function get_hash (line 46) | def get_hash(paths):
function exif_size (line 54) | def exif_size(img):
function exif_transpose (line 67) | def exif_transpose(image):
function seed_worker (line 93) | def seed_worker(worker_id):
function create_dataloader (line 100) | def create_dataloader(
class InfiniteDataLoader (line 157) | class InfiniteDataLoader(dataloader.DataLoader):
method __init__ (line 163) | def __init__(self, *args, **kwargs):
method __len__ (line 168) | def __len__(self):
method __iter__ (line 171) | def __iter__(self):
class _RepeatSampler (line 176) | class _RepeatSampler:
method __init__ (line 183) | def __init__(self, sampler):
method __iter__ (line 186) | def __iter__(self):
class LoadImages (line 191) | class LoadImages:
method __init__ (line 193) | def __init__(self, path, img_size=640, stride=32, auto=True):
method __iter__ (line 224) | def __iter__(self):
method __next__ (line 228) | def __next__(self):
method new_video (line 265) | def new_video(self, path):
method __len__ (line 270) | def __len__(self):
class LoadWebcam (line 274) | class LoadWebcam: # for inference
method __init__ (line 276) | def __init__(self, pipe='0', img_size=640, stride=32):
method __iter__ (line 283) | def __iter__(self):
method __next__ (line 287) | def __next__(self):
method __len__ (line 312) | def __len__(self):
class LoadStreams (line 316) | class LoadStreams:
method __init__ (line 318) | def __init__(self, sources='streams.txt', img_size=640, stride=32, aut...
method update (line 364) | def update(self, i, cap, stream):
method __iter__ (line 381) | def __iter__(self):
method __next__ (line 385) | def __next__(self):
method __len__ (line 404) | def __len__(self):
function img2label_paths (line 408) | def img2label_paths(img_paths):
class LoadImagesAndLabels (line 414) | class LoadImagesAndLabels(Dataset):
method __init__ (line 419) | def __init__(
method cache_labels (line 555) | def cache_labels(self, path=Path('./labels.cache'), prefix=''):
method __len__ (line 595) | def __len__(self):
method __getitem__ (line 604) | def __getitem__(self, index):
method load_image (line 680) | def load_image(self, i):
method cache_images_to_disk (line 698) | def cache_images_to_disk(self, i):
method load_mosaic (line 704) | def load_mosaic(self, index):
method load_mosaic9 (line 764) | def load_mosaic9(self, index):
method collate_fn (line 843) | def collate_fn(batch):
method collate_fn4 (line 850) | def collate_fn4(batch):
function create_folder (line 879) | def create_folder(path='./new'):
function flatten_recursive (line 886) | def flatten_recursive(path=DATASETS_DIR / 'coco128'):
function extract_boxes (line 894) | def extract_boxes(path=DATASETS_DIR / 'coco128'): # from utils.dataload...
function autosplit (line 928) | def autosplit(path=DATASETS_DIR / 'coco128/images', weights=(0.9, 0.1, 0...
function verify_image_label (line 952) | def verify_image_label(args):
function dataset_stats (line 1004) | def dataset_stats(path='coco128.yaml', autodownload=False, verbose=False...
FILE: module/detect/utils/downloads.py
function is_url (line 19) | def is_url(url):
function gsutil_getsize (line 28) | def gsutil_getsize(url=''):
function safe_download (line 34) | def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
function attempt_download (line 55) | def attempt_download(file, repo='ultralytics/yolov5', release='v6.1'):
function gdrive_download (line 107) | def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zi...
function get_token (line 142) | def get_token(cookie="./cookie"):
FILE: module/detect/utils/flask_rest_api/restapi.py
function predict (line 19) | def predict():
FILE: module/detect/utils/general.py
function is_kaggle (line 57) | def is_kaggle():
function is_writeable (line 67) | def is_writeable(dir, test=False):
function set_logging (line 81) | def set_logging(name=None, verbose=VERBOSE):
function user_config_dir (line 100) | def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'):
class Profile (line 116) | class Profile(contextlib.ContextDecorator):
method __enter__ (line 118) | def __enter__(self):
method __exit__ (line 121) | def __exit__(self, type, value, traceback):
class Timeout (line 125) | class Timeout(contextlib.ContextDecorator):
method __init__ (line 127) | def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors...
method _timeout_handler (line 132) | def _timeout_handler(self, signum, frame):
method __enter__ (line 135) | def __enter__(self):
method __exit__ (line 140) | def __exit__(self, exc_type, exc_val, exc_tb):
class WorkingDirectory (line 147) | class WorkingDirectory(contextlib.ContextDecorator):
method __init__ (line 149) | def __init__(self, new_dir):
method __enter__ (line 153) | def __enter__(self):
method __exit__ (line 156) | def __exit__(self, exc_type, exc_val, exc_tb):
function try_except (line 160) | def try_except(func):
function threaded (line 171) | def threaded(func):
function methods (line 181) | def methods(instance):
function print_args (line 186) | def print_args(args: Optional[dict] = None, show_file=True, show_fcn=Fal...
function init_seeds (line 197) | def init_seeds(seed=0, deterministic=False):
function intersect_dicts (line 215) | def intersect_dicts(da, db, exclude=()):
function get_latest_run (line 220) | def get_latest_run(search_dir='.'):
function is_docker (line 226) | def is_docker():
function is_colab (line 231) | def is_colab():
function is_pip (line 240) | def is_pip():
function is_ascii (line 245) | def is_ascii(s=''):
function is_chinese (line 251) | def is_chinese(s='人工智能'):
function emojis (line 256) | def emojis(str=''):
function file_age (line 261) | def file_age(path=__file__):
function file_date (line 267) | def file_date(path=__file__):
function file_size (line 273) | def file_size(path):
function check_online (line 285) | def check_online():
function git_describe (line 295) | def git_describe(path=ROOT): # path must be a directory
function check_git_status (line 306) | def check_git_status():
function check_python (line 325) | def check_python(minimum='3.7.0'):
function check_version (line 330) | def check_version(current='0.0.0', minimum='0.0.0', name='version ', pin...
function check_requirements (line 343) | def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(...
function check_img_size (line 379) | def check_img_size(imgsz, s=32, floor=0):
function check_imshow (line 391) | def check_imshow():
function check_suffix (line 406) | def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''):
function check_yaml (line 417) | def check_yaml(file, suffix=('.yaml', '.yml')):
function check_file (line 422) | def check_file(file, suffix=''):
function check_font (line 447) | def check_font(font=FONT, progress=False):
function check_dataset (line 457) | def check_dataset(data, autodownload=True):
function check_amp (line 517) | def check_amp(model):
function url2file (line 545) | def url2file(url):
function download (line 551) | def download(url, dir='.', unzip=True, delete=True, curl=False, threads=...
function make_divisible (line 597) | def make_divisible(x, divisor):
function clean_str (line 604) | def clean_str(s):
function one_cycle (line 609) | def one_cycle(y1=0.0, y2=1.0, steps=100):
function colorstr (line 614) | def colorstr(*input):
function labels_to_class_weights (line 640) | def labels_to_class_weights(labels, nc=80):
function labels_to_image_weights (line 659) | def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
function coco80_to_coco91_class (line 666) | def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index...
function xyxy2xywh (line 678) | def xyxy2xywh(x):
function xywh2xyxy (line 688) | def xywh2xyxy(x):
function xywhn2xyxy (line 698) | def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
function xyxy2xywhn (line 708) | def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0):
function xyn2xy (line 720) | def xyn2xy(x, w=640, h=640, padw=0, padh=0):
function segment2box (line 728) | def segment2box(segment, width=640, height=640):
function segments2boxes (line 736) | def segments2boxes(segments):
function resample_segments (line 745) | def resample_segments(segments, n=1000):
function scale_coords (line 755) | def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
function clip_coords (line 771) | def clip_coords(boxes, shape):
function non_max_suppression (line 783) | def non_max_suppression(
function strip_optimizer (line 887) | def strip_optimizer(f='best.pt', s=''): # from utils.general import *; ...
function print_mutation (line 903) | def print_mutation(results, hyp, save_dir, bucket, prefix=colorstr('evol...
function apply_classifier (line 949) | def apply_classifier(x, model, img, im0):
function increment_path (line 984) | def increment_path(path, exist_ok=False, sep='', mkdir=False):
function imread (line 1014) | def imread(path, flags=cv2.IMREAD_COLOR):
function imwrite (line 1018) | def imwrite(path, im):
function imshow (line 1026) | def imshow(path, im):
FILE: module/detect/utils/loggers/__init__.py
class Loggers (line 35) | class Loggers():
method __init__ (line 37) | def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, lo...
method on_train_start (line 90) | def on_train_start(self):
method on_pretrain_routine_end (line 94) | def on_pretrain_routine_end(self):
method on_train_batch_end (line 100) | def on_train_batch_end(self, ni, model, imgs, targets, paths, plots):
method on_train_epoch_end (line 115) | def on_train_epoch_end(self, epoch):
method on_val_image_end (line 120) | def on_val_image_end(self, pred, predn, path, names, im):
method on_val_end (line 125) | def on_val_end(self):
method on_fit_epoch_end (line 131) | def on_fit_epoch_end(self, vals, epoch, best_fitness, fi):
method on_model_save (line 153) | def on_model_save(self, last, epoch, final_epoch, best_fitness, fi):
method on_train_end (line 159) | def on_train_end(self, last, best, plots, epoch, results):
method on_params_update (line 184) | def on_params_update(self, params):
FILE: module/detect/utils/loggers/wandb/log_dataset.py
function create_dataset_artifact (line 10) | def create_dataset_artifact(opt):
FILE: module/detect/utils/loggers/wandb/sweep.py
function sweep (line 17) | def sweep():
FILE: module/detect/utils/loggers/wandb/wandb_utils.py
function remove_prefix (line 32) | def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
function check_wandb_config_file (line 36) | def check_wandb_config_file(data_config_file):
function check_wandb_dataset (line 43) | def check_wandb_dataset(data_file):
function get_run_info (line 63) | def get_run_info(run_path):
function check_wandb_resume (line 72) | def check_wandb_resume(opt):
function process_wandb_config_ddp_mode (line 86) | def process_wandb_config_ddp_mode(opt):
class WandbLogger (line 110) | class WandbLogger():
method __init__ (line 124) | def __init__(self, opt, run_id=None, job_type='Training'):
method check_and_upload_dataset (line 199) | def check_and_upload_dataset(self, opt):
method setup_training (line 218) | def setup_training(self, opt):
method download_dataset_artifact (line 273) | def download_dataset_artifact(self, path, alias):
method download_model_artifact (line 293) | def download_model_artifact(self, opt):
method log_model (line 311) | def log_model(self, path, opt, epoch, fitness_score, best_model=False):
method log_dataset_artifact (line 340) | def log_dataset_artifact(self, data_file, single_cls, project, overwri...
method map_val_table_path (line 402) | def map_val_table_path(self):
method create_dataset_table (line 412) | def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_...
method log_training_progress (line 464) | def log_training_progress(self, predn, path, names):
method val_one_image (line 511) | def val_one_image(self, pred, predn, path, names, im):
method log (line 539) | def log(self, log_dict):
method end_epoch (line 550) | def end_epoch(self, best_result=False):
method finish_run (line 587) | def finish_run(self):
function all_logging_disabled (line 599) | def all_logging_disabled(highest_level=logging.CRITICAL):
FILE: module/detect/utils/loss.py
function smooth_BCE (line 13) | def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues...
class BCEBlurWithLogitsLoss (line 18) | class BCEBlurWithLogitsLoss(nn.Module):
method __init__ (line 20) | def __init__(self, alpha=0.05):
method forward (line 25) | def forward(self, pred, true):
class FocalLoss (line 35) | class FocalLoss(nn.Module):
method __init__ (line 37) | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
method forward (line 45) | def forward(self, pred, true):
class QFocalLoss (line 65) | class QFocalLoss(nn.Module):
method __init__ (line 67) | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
method forward (line 75) | def forward(self, pred, true):
class ComputeLoss (line 91) | class ComputeLoss:
method __init__ (line 95) | def __init__(self, model, autobalance=False):
method __call__ (line 121) | def __call__(self, p, targets): # predictions, targets
method build_targets (line 177) | def build_targets(self, p, targets):
FILE: module/detect/utils/metrics.py
function fitness (line 15) | def fitness(x):
function smooth (line 21) | def smooth(y, f=0.05):
function ap_per_class (line 29) | def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='....
function compute_ap (line 96) | def compute_ap(recall, precision):
class ConfusionMatrix (line 124) | class ConfusionMatrix:
method __init__ (line 126) | def __init__(self, nc, conf=0.25, iou_thres=0.45):
method process_batch (line 132) | def process_batch(self, detections, labels):
method matrix (line 172) | def matrix(self):
method tp_fp (line 175) | def tp_fp(self):
method plot (line 181) | def plot(self, normalize=True, save_dir='', names=()):
method print (line 211) | def print(self):
function bbox_iou (line 216) | def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, ...
function box_area (line 257) | def box_area(box):
function box_iou (line 262) | def box_iou(box1, box2, eps=1e-7):
function bbox_ioa (line 283) | def bbox_ioa(box1, box2, eps=1e-7):
function wh_iou (line 305) | def wh_iou(wh1, wh2, eps=1e-7):
function plot_pr_curve (line 316) | def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()):
function plot_mc_curve (line 337) | def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabe...
FILE: module/detect/utils/plots.py
class Colors (line 30) | class Colors:
method __init__ (line 32) | def __init__(self):
method __call__ (line 39) | def __call__(self, i, bgr=False):
method hex2rgb (line 44) | def hex2rgb(h): # rgb order (PIL)
function check_pil_font (line 51) | def check_pil_font(font=FONT, size=10):
class Annotator (line 67) | class Annotator:
method __init__ (line 69) | def __init__(self, im, line_width=None, font_size=None, font='Arial.tt...
method box_label (line 84) | def box_label(self, box, label='', color=(128, 128, 128), txt_color=(2...
method rectangle (line 117) | def rectangle(self, xy, fill=None, outline=None, width=1):
method text (line 121) | def text(self, xy, text, txt_color=(255, 255, 255)):
method result (line 126) | def result(self):
function feature_visualization (line 131) | def feature_visualization(x, module_type, stage, n=32, save_dir=Path('ru...
function hist2d (line 159) | def hist2d(x, y, n=100):
function butter_lowpass_filtfilt (line 168) | def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
function output_to_target (line 181) | def output_to_target(output):
function plot_images (line 191) | def plot_images(images, targets, paths=None, fname='images.jpg', names=N...
function plot_lr_scheduler (line 252) | def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
function plot_val_txt (line 269) | def plot_val_txt(): # from utils.plots import *; plot_val()
function plot_targets_txt (line 286) | def plot_targets_txt(): # from utils.plots import *; plot_targets_txt()
function plot_val_study (line 299) | def plot_val_study(file='', dir='', x=None): # from utils.plots import ...
function plot_labels (line 350) | def plot_labels(labels, names=(), save_dir=Path('')):
function plot_evolve (line 397) | def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots im...
function plot_results (line 424) | def plot_results(file='path/to/results.csv', dir=''):
function profile_idetection (line 450) | def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
function save_one_box (line 481) | def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, squar...
FILE: module/detect/utils/torch_utils.py
function smart_DDP (line 36) | def smart_DDP(model):
function torch_distributed_zero_first (line 48) | def torch_distributed_zero_first(local_rank: int):
function device_count (line 57) | def device_count():
function select_device (line 67) | def select_device(device='', batch_size=0, newline=True):
function time_sync (line 103) | def time_sync():
function profile (line 110) | def profile(input, ops, n=10, device=None):
function is_parallel (line 164) | def is_parallel(model):
function de_parallel (line 169) | def de_parallel(model):
function initialize_weights (line 174) | def initialize_weights(model):
function find_modules (line 186) | def find_modules(model, mclass=nn.Conv2d):
function sparsity (line 191) | def sparsity(model):
function prune (line 200) | def prune(model, amount=0.3):
function fuse_conv_and_bn (line 211) | def fuse_conv_and_bn(conv, bn):
function model_info (line 236) | def model_info(model, verbose=False, img_size=640):
function scale_img (line 263) | def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,...
function copy_attr (line 275) | def copy_attr(a, b, include=(), exclude=()):
function smart_optimizer (line 284) | def smart_optimizer(model, name='Adam', lr=0.001, momentum=0.9, weight_d...
class EarlyStopping (line 316) | class EarlyStopping:
method __init__ (line 318) | def __init__(self, patience=30):
method __call__ (line 324) | def __call__(self, epoch, fitness):
class ModelEMA (line 341) | class ModelEMA:
method __init__ (line 347) | def __init__(self, model, decay=0.9999, tau=2000, updates=0):
method update (line 357) | def update(self, model):
method update_attr (line 369) | def update_attr(self, model, include=(), exclude=('process_group', 're...
FILE: module/fuse/discriminator.py
class Discriminator (line 4) | class Discriminator(nn.Module):
method __init__ (line 9) | def __init__(self, dim: int = 32, size: tuple[int, int] = (224, 224)):
method forward (line 30) | def forward(self, x: Tensor) -> Tensor:
FILE: module/fuse/generator.py
class Generator (line 6) | class Generator(nn.Module):
method __init__ (line 12) | def __init__(self, dim: int = 32, depth: int = 3):
method forward (line 51) | def forward(self, ir: Tensor, vi: Tensor) -> Tensor:
FILE: module/saliency/u2net.py
class REBNCONV (line 10) | class REBNCONV(nn.Module):
method __init__ (line 11) | def __init__(self, in_ch=3, out_ch=3, dirate=1):
method forward (line 18) | def forward(self, x):
function _upsample_like (line 26) | def _upsample_like(src, tar):
class RSU7 (line 32) | class RSU7(nn.Module): # UNet07DRES(nn.Module):
method __init__ (line 34) | def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
method forward (line 65) | def forward(self, x):
class RSU6 (line 109) | class RSU6(nn.Module): # UNet06DRES(nn.Module):
method __init__ (line 111) | def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
method forward (line 138) | def forward(self, x):
class RSU5 (line 177) | class RSU5(nn.Module): # UNet05DRES(nn.Module):
method __init__ (line 179) | def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
method forward (line 202) | def forward(self, x):
class RSU4 (line 235) | class RSU4(nn.Module): # UNet04DRES(nn.Module):
method __init__ (line 237) | def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
method forward (line 256) | def forward(self, x):
class RSU4F (line 283) | class RSU4F(nn.Module): # UNet04FRES(nn.Module):
method __init__ (line 285) | def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
method forward (line 300) | def forward(self, x):
class U2NET (line 319) | class U2NET(nn.Module):
method __init__ (line 321) | def __init__(self, in_ch=3, out_ch=1):
method forward (line 357) | def forward(self, x):
class U2NETP (line 423) | class U2NETP(nn.Module):
method __init__ (line 425) | def __init__(self, in_ch=3, out_ch=1):
method forward (line 461) | def forward(self, x):
FILE: pipeline/detect.py
class Detect (line 22) | class Detect:
method __init__ (line 27) | def __init__(self, config, mode: Literal['train', 'inference'], nc: in...
method load_ckpt (line 79) | def load_ckpt(self, ckpt: dict):
method load_ckpt_fuse (line 83) | def load_ckpt_fuse(self, ckpt: dict):
method save_ckpt (line 102) | def save_ckpt(self) -> dict:
method forward (line 106) | def forward(self, imgs: Tensor) -> Tensor:
method eval (line 112) | def eval(self, imgs: Tensor, targets: Tensor, stats: List, preview: bo...
method inference (line 165) | def inference(self, imgs: Tensor) -> Tensor:
method criterion (line 175) | def criterion(self, imgs: Tensor, targets: Tensor) -> Tuple[Tensor, Li...
method preview (line 192) | def preview(imgs: Tensor, preds: Tensor, conf_th: float = 0.6):
method param_groups (line 226) | def param_groups(self) -> tuple[List, List, List]:
method process_batch (line 234) | def process_batch(detections, labels, iou_v):
FILE: pipeline/fuse.py
class Fuse (line 20) | class Fuse:
method __init__ (line 25) | def __init__(self, config, mode: Literal['train', 'inference']):
method load_ckpt (line 81) | def load_ckpt(self, ckpt: dict):
method save_ckpt (line 101) | def save_ckpt(self) -> dict:
method forward (line 107) | def forward(self, ir: Tensor, vi: Tensor) -> Tensor:
method eval (line 113) | def eval(self, ir: Tensor, vi: Tensor) -> Tensor:
method inference (line 119) | def inference(self, ir: Tensor, vi: Tensor) -> Tensor:
method criterion_dis_t (line 125) | def criterion_dis_t(self, ir: Tensor, vi: Tensor, mk: Tensor) -> Tensor:
method criterion_dis_d (line 151) | def criterion_dis_d(self, ir: Tensor, vi: Tensor, mk: Tensor) -> Tensor:
method criterion_generator (line 179) | def criterion_generator(self, ir: Tensor, vi: Tensor, mk: Tensor, w1: ...
method gradient (line 202) | def gradient(x: Tensor, eps: float = 1e-8) -> Tensor:
method src_loss (line 208) | def src_loss(self, x: Tensor, y: Tensor) -> Tensor:
method adv_loss (line 220) | def adv_loss(self, fus: Tensor, mk: Tensor) -> Tuple[Tensor, number, n...
method param_groups (line 233) | def param_groups(self, key: Optional[Literal['g', 'd']] = None) -> tup...
method g_params (line 247) | def g_params(self) -> tuple[List, List, List]:
method d_params (line 250) | def d_params(self) -> tuple[List, List, List]:
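
A corresponding sketch for the Fuse pipeline, under the same config assumption; the inputs follow the forward(ir, vi) signature above, with the 1-channel pair again assumed:

    import torch
    import yaml
    from config import from_dict
    from pipeline.fuse import Fuse

    config = from_dict(yaml.safe_load(open('config/official/infer/tardal-dt.yaml')))
    fuse = Fuse(config, mode='inference')
    ir, vi = torch.rand(1, 1, 224, 224), torch.rand(1, 1, 224, 224)
    out = fuse.inference(ir, vi)   # fused image tensor
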
FILE: pipeline/iqa.py
class IQA (line 16) | class IQA:
method __init__ (line 21) | def __init__(self, url: str):
method inference (line 53) | def inference(self, src: str | Path, dst: str | Path):
method modality_inference (line 58) | def modality_inference(self, src: str | Path, dst: str | Path, modalit...
method extractor_inference (line 78) | def extractor_inference(self, x: Tensor) -> Tensor:
method _imread (line 93) | def _imread(img_p: str | Path):
FILE: pipeline/saliency.py
class Saliency (line 16) | class Saliency:
method __init__ (line 21) | def __init__(self, url: str):
method inference (line 52) | def inference(self, src: str | Path, dst: str | Path):
method _imread (line 75) | def _imread(img_p: str | Path):
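
IQA and Saliency expose the same two-step shape: construct with a checkpoint url, then call inference(src, dst) over folders. A hedged sketch; every path below is hypothetical:

    from pipeline.iqa import IQA
    from pipeline.saliency import Saliency

    IQA(url='weights/iqa.pth').inference(src='data/ir', dst='runs/iqa')              # hypothetical paths
    Saliency(url='weights/u2net.pth').inference(src='data/ir', dst='data/mask')      # hypothetical paths
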
FILE: pipeline/train.py
class Train (line 23) | class Train:
method __init__ (line 28) | def __init__(self, environment_probe: EnvironmentProbe, config: dict):
method train_dis_target (line 71) | def train_dis_target(self, ir: Tensor, vi: Tensor, mk: Tensor) -> Tensor:
method train_dis_detail (line 101) | def train_dis_detail(self, ir: Tensor, vi: Tensor, mk: Tensor) -> Tensor:
method gradient (line 131) | def gradient(self, x: Tensor, eps: float = 1e-6) -> Tensor:
method train_generator (line 137) | def train_generator(self, ir: Tensor, vi: Tensor, mk: Tensor, s1: Tens...
method run (line 182) | def run(self):
method save (line 207) | def save(self, epoch: int):
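
Train is the top-level loop: it takes an EnvironmentProbe plus a plain config dict and runs the adversarial schedule (train_dis_target / train_dis_detail / train_generator). A minimal sketch under those signatures:

    import yaml
    from pipeline.train import Train
    from tools.environment_probe import EnvironmentProbe

    config = yaml.safe_load(open('config/default.yaml'))   # signature takes a plain dict
    Train(EnvironmentProbe(), config).run()
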
FILE: scripts/infer_f.py
class InferF (line 16) | class InferF:
method __init__ (line 17) | def __init__(self, config: str | Path | ConfigDict, save_dir: str | Pa...
method run (line 55) | def run(self):
FILE: scripts/infer_fd.py
class InferFD (line 17) | class InferFD:
method __init__ (line 18) | def __init__(self, config: str | Path | ConfigDict, save_dir: str | Pa...
method run (line 64) | def run(self):
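
Both inference scripts share the construct-then-run pattern; config may be a path or a ConfigDict per the signatures above. A sketch with hypothetical save_dir values:

    from scripts.infer_f import InferF
    from scripts.infer_fd import InferFD

    InferF(config='config/official/infer/tardal-ct.yaml', save_dir='runs/tardal-ct').run()    # fuse only
    InferFD(config='config/official/infer/tardal-dt.yaml', save_dir='runs/tardal-dt').run()   # fuse + detect
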
FILE: scripts/train_f.py
class TrainF (line 22) | class TrainF:
method __init__ (line 23) | def __init__(self, config: str | Path | ConfigDict, wandb_key: str):
method run (line 101) | def run(self):
method optim (line 169) | def optim(self, loss: Tensor):
FILE: scripts/train_fd.py
class TrainFD (line 28) | class TrainFD:
method __init__ (line 29) | def __init__(self, config: str | Path | ConfigDict, wandb_key: str):
method run (line 109) | def run(self):
FILE: scripts/utils/smart_optimizer.py
function smart_optimizer (line 8) | def smart_optimizer(config: ConfigDict, param_group: Tuple[List, List, L...
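
smart_optimizer consumes a (List, List, List) parameter grouping. A hedged reconstruction of that split, mirroring functions/get_param_groups (whose body is not fully visible in this dump); the bias / norm-weight / other-weight rule is the common YOLOv5-style assumption:

    from typing import List
    from torch import nn

    def get_param_groups(module: nn.Module) -> tuple[List, List, List]:
        weights, norms, biases = [], [], []
        for m in module.modules():
            if hasattr(m, 'bias') and isinstance(m.bias, nn.Parameter):
                biases.append(m.bias)
            if isinstance(m, nn.BatchNorm2d):
                norms.append(m.weight)     # norm weights: typically no weight decay
            elif hasattr(m, 'weight') and isinstance(m.weight, nn.Parameter):
                weights.append(m.weight)   # conv/linear weights: with weight decay
        return weights, norms, biases
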
FILE: tools/choose_images.py
function choose_images (line 9) | def choose_images(root: str | Path, mode: str = Literal['train', 'val', ...
FILE: tools/convert_to_png.py
function convert_to_png (line 9) | def convert_to_png(src: str | Path, color: bool):
FILE: tools/data_preview.py
function data_preview (line 16) | def data_preview(img_f: str | Path, lbl_f: str | Path, dst_f: str | Path...
FILE: tools/dict_to_device.py
function dict_to_device (line 7) | def dict_to_device(d: Dict, device: Device) -> Dict | None:
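
A hedged reconstruction of dict_to_device from its signature alone; the None passthrough and leaving non-Tensor values untouched are assumptions:

    from typing import Dict
    from torch import Tensor
    from torch.types import Device

    def dict_to_device(d: Dict, device: Device) -> Dict | None:
        if d is None:
            return None
        return {k: v.to(device) if isinstance(v, Tensor) else v for k, v in d.items()}
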
FILE: tools/environment_probe.py
class EnvironmentProbe (line 7) | class EnvironmentProbe:
method __init__ (line 12) | def __init__(self):
method memory_status (line 21) | def memory_status(self):
FILE: tools/generate_mask.py
function generate_mask (line 7) | def generate_mask(url: str, src: str, dst: str):
FILE: tools/scenario_reader.py
function scenario_counter (line 7) | def scenario_counter(src: str | Path):
function generate_meta (line 35) | def generate_meta(root: str | Path):
Condensed preview — 121 files, each showing path, character count, and a content snippet.
[
{
"path": ".github/workflows/sync.yml",
"chars": 878,
"preview": "name: Mirror to DUT DIMT\n\non: [ push, delete, create ]\n\njobs:\n git-mirror:\n runs-on: ubuntu-latest\n steps:\n "
},
{
"path": ".gitignore",
"chars": 386,
"preview": "# project config file (contain sensitive: server information)\n.idea/*\n\n# fuse results (contain images that can be reprod"
},
{
"path": "CITATION.cff",
"chars": 434,
"preview": "@inproceedings{liu2022target,\n title={Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchm"
},
{
"path": "LICENSE",
"chars": 35149,
"preview": " GNU GENERAL PUBLIC LICENSE\n Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
},
{
"path": "README.md",
"chars": 8464,
"preview": "# TarDAL \n\n[](https://colab.research.google.co"
},
{
"path": "assets/sample/s1/meta/pred.txt",
"chars": 40,
"preview": "M3FD_00471.png\nROAD_040.jpg\nTNO_028.bmp\n"
},
{
"path": "config/__init__.py",
"chars": 274,
"preview": "class ConfigDict(dict):\n __setattr__ = dict.__setitem__\n __getattr__ = dict.__getitem__\n\n\ndef from_dict(obj) -> Co"
},
{
"path": "config/default.yaml",
"chars": 4783,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/exp/i-tardal-dt.yaml",
"chars": 4581,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/exp/t-tardal-ct.yaml",
"chars": 4649,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/colab.yaml",
"chars": 4616,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/infer/tardal-ct.yaml",
"chars": 4603,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/infer/tardal-dt.yaml",
"chars": 4603,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/infer/tardal-tt.yaml",
"chars": 4653,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/train/tardal-ct.yaml",
"chars": 4892,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/train/tardal-dt.yaml",
"chars": 4574,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "config/official/train/tardal-tt.yaml",
"chars": 4846,
"preview": "# base settings\ndevice : cuda # device used for training and evaluation (cpu, cuda, cuda0, cuda1, ...)\nsave_dir : 'cac"
},
{
"path": "data/README.md",
"chars": 865,
"preview": "# Dataset Configure Reference \n\n## Official Supported Datasets\n\n* TNO: fuse\n* RoadScene: fuse\n* MultiSpectral: fuse + de"
},
{
"path": "functions/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "functions/div_loss.py",
"chars": 690,
"preview": "import logging\n\nimport torch\nimport torch.autograd as autograd\n\n\ndef div_loss(disc, real_x, fake_x, wp: int = 6, eps: fl"
},
{
"path": "functions/get_param_groups.py",
"chars": 634,
"preview": "from typing import List\n\nfrom torch import nn\n\n\ndef get_param_groups(module) -> tuple[List, List, List]:\n group = [],"
},
{
"path": "infer.py",
"chars": 2082,
"preview": "import argparse\nimport logging\nfrom pathlib import Path\n\nimport torch.backends.cudnn\nimport yaml\n\nimport scripts\nfrom co"
},
{
"path": "loader/__init__.py",
"chars": 135,
"preview": "from loader.m3fd import M3FD\nfrom loader.roadscene import RoadScene\nfrom loader.tno import TNO\n\n__all__ = ['TNO', 'RoadS"
},
{
"path": "loader/m3fd.py",
"chars": 8343,
"preview": "import logging\nimport random\nfrom pathlib import Path\nfrom typing import Literal, List, Optional\n\nimport torch\nfrom korn"
},
{
"path": "loader/roadscene.py",
"chars": 4175,
"preview": "import logging\nfrom pathlib import Path\nfrom typing import Literal, List\n\nimport torch\nfrom kornia.geometry import resiz"
},
{
"path": "loader/tno.py",
"chars": 4081,
"preview": "import logging\nfrom pathlib import Path\nfrom typing import Literal, List\n\nimport torch\nfrom kornia.geometry import resiz"
},
{
"path": "loader/utils/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "loader/utils/checker.py",
"chars": 2911,
"preview": "import logging\nimport sys\nfrom pathlib import Path\nfrom typing import List\n\nfrom torch import Tensor, Size\nfrom tqdm imp"
},
{
"path": "loader/utils/reader.py",
"chars": 1468,
"preview": "from pathlib import Path\nfrom typing import Tuple\n\nimport cv2\nimport numpy\nimport torch\nfrom kornia import image_to_tens"
},
{
"path": "module/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "module/detect/README.md",
"chars": 96,
"preview": "# Detect\n\nBased on YOLOv5.\n\nReference: [YOLOv5 official](https://github.com/ultralytics/yolov5)\n"
},
{
"path": "module/detect/models/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "module/detect/models/common.py",
"chars": 36147,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nCommon modules\n\"\"\"\n\nimport json\nimport os\nimport platform\nimport sys\nimpo"
},
{
"path": "module/detect/models/experimental.py",
"chars": 4143,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nExperimental modules\n\"\"\"\nimport math\n\nimport numpy as np\nimport torch\nimp"
},
{
"path": "module/detect/models/hub/anchors.yaml",
"chars": 3332,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Default anchors for COCO data\n\n\n# P5 --------------------------------------"
},
{
"path": "module/detect/models/hub/yolov3-spp.yaml",
"chars": 1564,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov3-tiny.yaml",
"chars": 1229,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov3.yaml",
"chars": 1555,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5-bifpn.yaml",
"chars": 1420,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5-fpn.yaml",
"chars": 1211,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5-p2.yaml",
"chars": 1684,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5-p34.yaml",
"chars": 1346,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/hub/yolov5-p6.yaml",
"chars": 1738,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5-p7.yaml",
"chars": 2119,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5-panet.yaml",
"chars": 1404,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5l6.yaml",
"chars": 1817,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/hub/yolov5m6.yaml",
"chars": 1819,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.67 # model depth"
},
{
"path": "module/detect/models/hub/yolov5n6.yaml",
"chars": 1819,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/hub/yolov5s-ghost.yaml",
"chars": 1480,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/hub/yolov5s-transformer.yaml",
"chars": 1438,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/hub/yolov5s6.yaml",
"chars": 1819,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/hub/yolov5x6.yaml",
"chars": 1819,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.33 # model depth"
},
{
"path": "module/detect/models/tf.py",
"chars": 25499,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nTensorFlow, Keras and TFLite versions of YOLOv5\nAuthored by https://githu"
},
{
"path": "module/detect/models/yolo.py",
"chars": 15377,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nYOLO-specific modules\n\nUsage:\n $ python path/to/models/yolo.py --cfg y"
},
{
"path": "module/detect/models/yolov5l.yaml",
"chars": 1398,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.0 # model depth "
},
{
"path": "module/detect/models/yolov5m.yaml",
"chars": 1400,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.67 # model depth"
},
{
"path": "module/detect/models/yolov5n.yaml",
"chars": 1400,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/yolov5s.yaml",
"chars": 1400,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 0.33 # model depth"
},
{
"path": "module/detect/models/yolov5x.yaml",
"chars": 1400,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\n# Parameters\nnc: 80 # number of classes\ndepth_multiple: 1.33 # model depth"
},
{
"path": "module/detect/requirements.txt",
"chars": 1087,
"preview": "# YOLOv5 requirements\n# Usage: pip install -r requirements.txt\n\n# Base ----------------------------------------\nmatplotl"
},
{
"path": "module/detect/utils/__init__.py",
"chars": 1088,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\r\n\"\"\"\r\nutils/initialization\r\n\"\"\"\r\n\r\n\r\ndef notebook_init(verbose=True):\r\n # "
},
{
"path": "module/detect/utils/activations.py",
"chars": 3446,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nActivation functions\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch"
},
{
"path": "module/detect/utils/augmentations.py",
"chars": 11805,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nImage augmentation functions\n\"\"\"\n\nimport math\nimport random\n\nimport cv2\ni"
},
{
"path": "module/detect/utils/autoanchor.py",
"chars": 7428,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nAutoAnchor utils\n\"\"\"\n\nimport random\n\nimport numpy as np\nimport torch\nimpo"
},
{
"path": "module/detect/utils/autobatch.py",
"chars": 2595,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nAuto-batch utils\n\"\"\"\n\nfrom copy import deepcopy\n\nimport numpy as np\nimpor"
},
{
"path": "module/detect/utils/aws/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "module/detect/utils/aws/mime.sh",
"chars": 780,
"preview": "# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/\n#"
},
{
"path": "module/detect/utils/aws/resume.py",
"chars": 1202,
"preview": "# Resume all interrupted trainings in yolov5/ dir including DDP trainings\n# Usage: $ python utils/aws/resume.py\n\nimport "
},
{
"path": "module/detect/utils/aws/userdata.sh",
"chars": 1247,
"preview": "#!/bin/bash\n# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html\n# This "
},
{
"path": "module/detect/utils/benchmarks.py",
"chars": 6955,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nRun YOLOv5 benchmarks on all supported export formats\n\nFormat "
},
{
"path": "module/detect/utils/callbacks.py",
"chars": 2401,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nCallback utils\n\"\"\"\n\n\nclass Callbacks:\n \"\"\"\"\n Handles all registered"
},
{
"path": "module/detect/utils/dataloaders.py",
"chars": 47401,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nDataloaders and dataset utils\n\"\"\"\n\nimport glob\nimport hashlib\nimport json"
},
{
"path": "module/detect/utils/docker/Dockerfile",
"chars": 2434,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Builds ultralytics/yolov5:latest image on DockerHub https://hub.docker.com/"
},
{
"path": "module/detect/utils/docker/Dockerfile-arm64",
"chars": 1643,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Builds ultralytics/yolov5:latest-arm64 image on DockerHub https://hub.docke"
},
{
"path": "module/detect/utils/docker/Dockerfile-cpu",
"chars": 1618,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n# Builds ultralytics/yolov5:latest-cpu image on DockerHub https://hub.docker."
},
{
"path": "module/detect/utils/downloads.py",
"chars": 7108,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nDownload utils\n\"\"\"\n\nimport logging\nimport os\nimport platform\nimport subpr"
},
{
"path": "module/detect/utils/flask_rest_api/README.md",
"chars": 1710,
"preview": "# Flask REST API\n\n[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/w"
},
{
"path": "module/detect/utils/flask_rest_api/example_request.py",
"chars": 365,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPerform test request\n\"\"\"\n\nimport pprint\n\nimport requests\n\nDETECTION_URL ="
},
{
"path": "module/detect/utils/flask_rest_api/restapi.py",
"chars": 1407,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nRun a Flask REST API exposing a YOLOv5s model\n\"\"\"\n\nimport argparse\nimport"
},
{
"path": "module/detect/utils/general.py",
"chars": 42096,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nGeneral utils\n\"\"\"\n\nimport contextlib\nimport glob\nimport inspect\nimport lo"
},
{
"path": "module/detect/utils/google_app_engine/Dockerfile",
"chars": 821,
"preview": "FROM gcr.io/google-appengine/python\n\n# Create a virtualenv for dependencies. This isolates these packages from\n# system-"
},
{
"path": "module/detect/utils/google_app_engine/additional_requirements.txt",
"chars": 105,
"preview": "# add these requirements in your app on top of the existing ones\npip==21.1\nFlask==1.0.2\ngunicorn==19.9.0\n"
},
{
"path": "module/detect/utils/google_app_engine/app.yaml",
"chars": 174,
"preview": "runtime: custom\nenv: flex\n\nservice: yolov5app\n\nliveness_check:\n initial_delay_sec: 600\n\nmanual_scaling:\n instances: 1\n"
},
{
"path": "module/detect/utils/loggers/__init__.py",
"chars": 8105,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nLogging utils\n\"\"\"\n\nimport os\nimport warnings\n\nimport pkg_resources as pkg"
},
{
"path": "module/detect/utils/loggers/wandb/README.md",
"chars": 10827,
"preview": "📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.\n\n- [About Weights "
},
{
"path": "module/detect/utils/loggers/wandb/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "module/detect/utils/loggers/wandb/log_dataset.py",
"chars": 1032,
"preview": "import argparse\n\nfrom wandb_utils import WandbLogger\n\nfrom utils.general import LOGGER\n\nWANDB_ARTIFACT_PREFIX = 'wandb-a"
},
{
"path": "module/detect/utils/loggers/wandb/sweep.py",
"chars": 1213,
"preview": "import sys\nfrom pathlib import Path\n\nimport wandb\n\nFILE = Path(__file__).resolve()\nROOT = FILE.parents[3] # YOLOv5 root"
},
{
"path": "module/detect/utils/loggers/wandb/sweep.yaml",
"chars": 2463,
"preview": "# Hyperparameters for training\n# To set range-\n# Provide min and max values as:\n# parameter:\n#\n# min: scala"
},
{
"path": "module/detect/utils/loggers/wandb/wandb_utils.py",
"chars": 27500,
"preview": "\"\"\"Utilities and tools for tracking runs with Weights & Biases.\"\"\"\n\nimport logging\nimport os\nimport sys\nfrom contextlib "
},
{
"path": "module/detect/utils/loss.py",
"chars": 9916,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nLoss functions\n\"\"\"\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.metric"
},
{
"path": "module/detect/utils/metrics.py",
"chars": 14394,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nModel validation metrics\n\"\"\"\n\nimport math\nimport warnings\nfrom pathlib im"
},
{
"path": "module/detect/utils/plots.py",
"chars": 21022,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPlotting utils\n\"\"\"\n\nimport math\nimport os\nfrom copy import copy\nfrom path"
},
{
"path": "module/detect/utils/torch_utils.py",
"chars": 16194,
"preview": "# YOLOv5 🚀 by Ultralytics, GPL-3.0 license\n\"\"\"\nPyTorch utils\n\"\"\"\n\nimport math\nimport os\nimport platform\nimport subproces"
},
{
"path": "module/fuse/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "module/fuse/discriminator.py",
"chars": 987,
"preview": "from torch import nn, Tensor\n\n\nclass Discriminator(nn.Module):\n \"\"\"\n Use to discriminate fused images and source i"
},
{
"path": "module/fuse/generator.py",
"chars": 1613,
"preview": "import torch\nimport torch.nn as nn\nfrom torch import Tensor\n\n\nclass Generator(nn.Module):\n r\"\"\"\n Use to generate f"
},
{
"path": "module/saliency/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "module/saliency/u2net.py",
"chars": 15305,
"preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\n# U^2-Net: Going Deeper with Nested U-Structure for"
},
{
"path": "pipeline/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "pipeline/detect.py",
"chars": 10273,
"preview": "import logging\nimport sys\nfrom pathlib import Path\nfrom typing import Literal, List, Tuple\n\nimport numpy\nimport torch\nim"
},
{
"path": "pipeline/fuse.py",
"chars": 9465,
"preview": "import logging\nimport sys\nfrom pathlib import Path\nfrom typing import Literal, List, Tuple, Optional\n\nimport torch\nimpor"
},
{
"path": "pipeline/iqa.py",
"chars": 3572,
"preview": "import logging\nimport socket\nimport sys\nfrom pathlib import Path\nfrom typing import Literal\n\nimport cv2\nimport torch.cud"
},
{
"path": "pipeline/saliency.py",
"chars": 2814,
"preview": "import logging\nimport socket\nimport sys\nimport warnings\nfrom pathlib import Path\n\nimport cv2\nimport torch.hub\nfrom korni"
},
{
"path": "pipeline/train.py",
"chars": 8276,
"preview": "import logging\nfrom functools import reduce\nfrom pathlib import Path\n\nimport torch\nimport wandb\nfrom kornia.filters impo"
},
{
"path": "requirements.txt",
"chars": 328,
"preview": "# TarDAL requirements\n# Usage: pip install -r requirements.txt\n\n# Base ----------------------------------------\nnumpy>=1"
},
{
"path": "scripts/__init__.py",
"chars": 198,
"preview": "from scripts.infer_f import InferF\nfrom scripts.infer_fd import InferFD\nfrom scripts.train_f import TrainF\nfrom scripts."
},
{
"path": "scripts/infer_f.py",
"chars": 2476,
"preview": "import logging\nfrom pathlib import Path\n\nimport torch\nimport yaml\nfrom kornia.color import ycbcr_to_rgb\nfrom torch.utils"
},
{
"path": "scripts/infer_fd.py",
"chars": 3213,
"preview": "import logging\nfrom pathlib import Path\n\nimport torch\nimport yaml\nfrom kornia.color import ycbcr_to_rgb\nfrom torch.utils"
},
{
"path": "scripts/train_f.py",
"chars": 7839,
"preview": "import argparse\nimport logging\nfrom functools import reduce\nfrom pathlib import Path\n\nimport torch\nimport wandb\nimport y"
},
{
"path": "scripts/train_fd.py",
"chars": 15593,
"preview": "import argparse\nimport logging\nimport sys\nfrom functools import reduce\nfrom itertools import chain\nfrom pathlib import P"
},
{
"path": "scripts/utils/smart_optimizer.py",
"chars": 1053,
"preview": "from typing import Tuple, List, Optional\n\nfrom torch.optim import Optimizer, AdamW, Adam, SGD\n\nfrom config import Config"
},
{
"path": "tools/choose_images.py",
"chars": 898,
"preview": "from functools import reduce\nfrom pathlib import Path\nfrom typing import Literal\n\nimport cv2\nimport numpy\n\n\ndef choose_i"
},
{
"path": "tools/convert_to_png.py",
"chars": 942,
"preview": "import argparse\nimport logging\nfrom pathlib import Path\n\nimport cv2\nfrom tqdm import tqdm\n\n\ndef convert_to_png(src: str "
},
{
"path": "tools/data_preview.py",
"chars": 2041,
"preview": "import argparse\nfrom pathlib import Path\nfrom typing import Optional\n\nimport cv2\nimport torch\nfrom kornia import image_t"
},
{
"path": "tools/dict_to_device.py",
"chars": 290,
"preview": "from typing import Dict\n\nfrom torch import Tensor\nfrom torch.types import Device\n\n\ndef dict_to_device(d: Dict, device: D"
},
{
"path": "tools/environment_probe.py",
"chars": 1121,
"preview": "import logging\nimport sys\n\nimport torch\n\n\nclass EnvironmentProbe:\n \"\"\"\n Detects the configuration of the environme"
},
{
"path": "tools/generate_mask.py",
"chars": 720,
"preview": "import argparse\nimport logging\n\nfrom pipeline.saliency import Saliency\n\n\ndef generate_mask(url: str, src: str, dst: str)"
},
{
"path": "tools/scenario_reader.py",
"chars": 2213,
"preview": "import json\nimport logging\nfrom functools import reduce\nfrom pathlib import Path\n\n\ndef scenario_counter(src: str | Path)"
},
{
"path": "train.py",
"chars": 1520,
"preview": "import argparse\nimport logging\nfrom pathlib import Path\n\nimport torch.backends.cudnn\nimport yaml\n\nimport scripts\nfrom co"
},
{
"path": "tutorial.ipynb",
"chars": 3076,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"source\": [\n \"# TarDAL online tutorial | CVPR 2022\\n\",\n \"\\n\",\n "
}
]
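
Each entry in the array above carries exactly three keys: path, chars, and preview. A minimal sketch for loading the array and ranking files by size; the filename is hypothetical:

    import json

    with open('tardal_preview.json') as f:   # hypothetical filename for the saved array
        entries = json.load(f)
    for e in sorted(entries, key=lambda e: e['chars'], reverse=True)[:5]:
        print(f"{e['chars']:>7}  {e['path']}")
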