[
  {
    "path": ".gitignore",
    "content": "# ignore map, miou, datasets\nmap_out/\nmiou_out/\nVOCdevkit/\ndatasets/\nMedical_Datasets/\nlfw/\nlogs/\n.temp_map_out/\n2007_train.txt\n2007_val.txt\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n"
  },
  {
    "path": "LICENSE",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  
And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. 
Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  
The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  
Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<https://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<https://www.gnu.org/licenses/why-not-lgpl.html>.\n"
  },
  {
    "path": "README.md",
    "content": "## YOLOV7-OBB：You Only Look Once OBB旋转目标检测模型在pytorch当中的实现\n---\n\n## 目录\n1. [仓库更新 Top News](#仓库更新)\n2. [相关仓库 Related code](#相关仓库)\n3. [性能情况 Performance](#性能情况)\n4. [所需环境 Environment](#所需环境)\n5. [文件下载 Download](#文件下载)\n6. [训练步骤 How2train](#训练步骤)\n7. [预测步骤 How2predict](#预测步骤)\n8. [评估步骤 How2eval](#评估步骤)\n9. [参考资料 Reference](#Reference)\n\n## Top News\n**`2023-02`**:**仓库创建，支持step、cos学习率下降法、支持adam、sgd优化器选择、支持学习率根据batch_size自适应调整、新增图片裁剪、支持多GPU训练、支持各个种类目标数量计算、支持heatmap、支持EMA。**  \n\n## 相关仓库\n| 目标检测模型 | 路径 |\n| :----- | :----- |\nYoloV7-OBB | https://github.com/Egrt/yolov7-obb\nYoloV7-Tiny-OBB | https://github.com/Egrt/yolov7-tiny-obb\n\n## 性能情况\n| 训练数据集 | 权值文件名称\t| 测试数据集 | 输入图片大小 | mAP 0.5 |\n| :-----: | :------: | :------: | :------: | :------: |\n| SSDD | [yolov7_obb_ssdd.pth](https://github.com/Egrt/yolov7-obb/releases/download/V1.0.0/yolov7_obb_ssdd.pth) | SSDD-Val | 640x640 | 95.22\n### 预测结果展示\n![预测结果](img/test.jpg)\n## 所需环境\ntorch==1.10.1\ntorchvision==0.11.2\n为了使用amp混合精度，推荐使用torch1.7.1以上的版本。\n\n## 文件下载\n\nSSDD数据集下载地址如下，里面已经包括了训练集、测试集、验证集（与测试集一样），无需再次划分：  \n链接: https://pan.baidu.com/s/1Lpg28ZvMSgNXq00abHMZ5Q\n提取码: 2021\n\n## 训练步骤\n### a、训练VOC07+12数据集\n1. 数据集的准备   \n**本文使用VOC格式进行训练，训练前需要下载好VOC07+12的数据集，解压后放在根目录**  \n\n2. 数据集的处理   \n修改voc_annotation.py里面的annotation_mode=2，运行voc_annotation.py生成根目录下的2007_train.txt和2007_val.txt。   \n生成的数据集格式为image_path, x1, y1, x2, y2, x3, y3, x4, y4(polygon), class。 \n\n3. 开始网络训练   \ntrain.py的默认参数用于训练VOC数据集，直接运行train.py即可开始训练。   \n\n4. 训练结果预测   \n训练结果预测需要用到两个文件，分别是yolo.py和predict.py。我们首先需要去yolo.py里面修改model_path以及classes_path，这两个参数必须要修改。   \n**model_path指向训练好的权值文件，在logs文件夹里。   \nclasses_path指向检测类别所对应的txt。**   \n完成修改后就可以运行predict.py进行检测了。运行后输入图片路径即可检测。   \n\n### b、训练自己的数据集\n1. 数据集的准备  \n**本文使用VOC格式进行训练，训练前需要自己制作好数据集，**    \n训练前将标签文件放在VOCdevkit文件夹下的VOC2007文件夹下的Annotation中。   \n训练前将图片文件放在VOCdevkit文件夹下的VOC2007文件夹下的JPEGImages中。   \n\n2. 
Dataset processing  \nAfter placing the dataset, use voc_annotation.py to generate the 2007_train.txt and 2007_val.txt used for training.  \nModify the parameters in voc_annotation.py. For a first run you only need to change classes_path, which points to the txt file listing the detection classes.  \nWhen training on your own dataset, create your own cls_classes.txt containing the classes you want to distinguish.  \nThe content of model_data/cls_classes.txt would be, for example:  \n```python\ncat\ndog\n...\n```\nSet classes_path in voc_annotation.py to your cls_classes.txt, then run voc_annotation.py.  \n\n3. Start training  \n**There are many training parameters, all in train.py; read the comments carefully after downloading the repository. The most important one is again classes_path in train.py.**  \n**classes_path points to the txt file listing the detection classes, the same txt used by voc_annotation.py! It must be changed when training on your own dataset!**  \nAfter updating classes_path, run train.py to start training; after several epochs the weights are saved in the logs folder.  \n\n4. Predicting with the trained model  \nPrediction uses two files: yolo.py and predict.py. Edit model_path and classes_path in yolo.py.  \n**model_path points to the trained weights file in the logs folder.  \nclasses_path points to the txt file listing the detection classes.**  \nAfter these changes, run predict.py and enter an image path to run detection.  \n\n## How to Predict\n### a. Using pre-trained weights\n1. After downloading and unpacking the repository, download the weights from Baidu Netdisk, put them in model_data, run predict.py, and enter  \n```python\nimg/street.jpg\n```\n2. Settings in predict.py enable FPS testing and video detection.  \n### b. Using your own trained weights\n1. Train following the training steps.  \n2. 
In yolo.py, modify model_path and classes_path in the section below so they match your trained files; **model_path points to the weights file under the logs folder, and classes_path lists the classes that model_path was trained on**.  \n```python\n_defaults = {\n    #--------------------------------------------------------------------------#\n    #   To predict with your own trained model you must change model_path and classes_path!\n    #   model_path points to the weights file under logs; classes_path points to the txt under model_data\n    #\n    #   After training there will be several weights files under logs; pick one with a low validation loss.\n    #   A low validation loss does not guarantee a high mAP; it only means the weights generalize well on the validation set.\n    #   If a shape mismatch occurs, also check the model_path and classes_path used during training.\n    #--------------------------------------------------------------------------#\n    \"model_path\"        : 'model_data/yolov7_weights.pth',\n    \"classes_path\"      : 'model_data/coco_classes.txt',\n    #---------------------------------------------------------------------#\n    #   anchors_path is the txt file with the anchor boxes; usually left unchanged.\n    #   anchors_mask helps the code find the matching anchors; usually left unchanged.\n    #---------------------------------------------------------------------#\n    \"anchors_path\"      : 'model_data/yolo_anchors.txt',\n    \"anchors_mask\"      : [[6, 7, 8], [3, 4, 5], [0, 1, 2]],\n    #---------------------------------------------------------------------#\n    #   Input image size; must be a multiple of 32.\n    #---------------------------------------------------------------------#\n    \"input_shape\"       : [640, 640],\n    #------------------------------------------------------#\n    #   Which yolov7 variant to use; this repository provides two:\n    #   l : yolov7\n    #   x : yolov7_x\n    #------------------------------------------------------#\n    \"phi\"               : 'l',\n    #---------------------------------------------------------------------#\n    #   Only predictions with a score above this confidence are kept\n    #---------------------------------------------------------------------#\n    \"confidence\"        : 0.5,\n    #---------------------------------------------------------------------#\n    #   The nms_iou threshold used for non-maximum suppression\n    #---------------------------------------------------------------------#\n    \"nms_iou\"           : 0.3,\n    
#---------------------------------------------------------------------#\n    #   Whether to use letterbox_image to resize the input without distortion.\n    #   Repeated tests found that disabling letterbox_image and resizing directly\n    #   works better, but the default here keeps it enabled.\n    #---------------------------------------------------------------------#\n    \"letterbox_image\"   : True,\n    #-------------------------------#\n    #   Whether to use CUDA\n    #   Set it to False if you have no GPU\n    #-------------------------------#\n    \"cuda\"              : True,\n}\n```\n3. Run predict.py and enter  \n```python\nimg/street.jpg\n```\n4. Settings in predict.py enable FPS testing and video detection.  \n\n## How to Evaluate\n### a. Evaluating the VOC07+12 test set\n1. This project evaluates in the VOC format. VOC07+12 already has a test split, so there is no need to run voc_annotation.py to generate the txt files under ImageSets.\n2. Edit model_path and classes_path in yolo.py. **model_path points to the trained weights file in the logs folder. classes_path points to the txt file listing the detection classes.**  \n3. Run get_map.py to get the evaluation results, which are saved in the map_out folder.\n\n### b. Evaluating your own dataset\n1. This project evaluates in the VOC format.  \n2. If you already ran voc_annotation.py before training, the code has split the dataset into training, validation, and test sets. To change the test-set proportion, modify trainval_percent in voc_annotation.py. trainval_percent sets the ratio of (training set + validation set) to test set; by default (training + validation) : test = 9 : 1. train_percent sets the ratio of training set to validation set within that split; by default training : validation = 9 : 1.\n3. After splitting the test set with voc_annotation.py, edit classes_path in get_map.py. classes_path points to the txt file listing the detection classes and is the same txt used for training. It must be changed when evaluating your own dataset.\n4. Edit model_path and classes_path in yolo.py. **model_path points to the trained weights file in the logs folder. classes_path points to the txt file listing the detection classes.**  \n5. Run get_map.py to get the evaluation results, which are saved in the map_out folder.\n\n## Citation\nIf this project helps you, please consider citing our paper:\n```\n@Article{app132011402,\nAUTHOR = {Ye, Zixun and Zhang, Hongying and Gu, Jingliang and Li, Xue},\nTITLE = {YOLOv7-3D: A Monocular 3D Traffic Object Detection Method from a Roadside Perspective},\nJOURNAL = {Applied Sciences},\nVOLUME = {13},\nYEAR = {2023},\nNUMBER = {20},\nARTICLE-NUMBER = {11402},\nURL = {https://www.mdpi.com/2076-3417/13/20/11402},\nISSN = {2076-3417},\nDOI = {10.3390/app132011402}\n}\n```\n## Reference\nhttps://github.com/WongKinYiu/yolov7\n\nhttps://github.com/bubbliiiing/yolov7-pytorch\n"
  },
  {
    "path": "get_map.py",
    "content": "import os\nimport xml.etree.ElementTree as ET\nimport cv2\nfrom PIL import Image\nfrom tqdm import tqdm\nimport numpy as np\nfrom utils.utils import get_classes\nfrom utils.utils_map import get_coco_map, get_map\nfrom yolo import YOLO\n\nif __name__ == \"__main__\":\n    '''\n    Recall和Precision不像AP是一个面积的概念，因此在门限值（Confidence）不同时，网络的Recall和Precision值是不同的。\n    默认情况下，本代码计算的Recall和Precision代表的是当门限值（Confidence）为0.5时，所对应的Recall和Precision值。\n\n    受到mAP计算原理的限制，网络在计算mAP时需要获得近乎所有的预测框，这样才可以计算不同门限条件下的Recall和Precision值\n    因此，本代码获得的map_out/detection-results/里面的txt的框的数量一般会比直接predict多一些，目的是列出所有可能的预测框，\n    '''\n    #------------------------------------------------------------------------------------------------------------------#\n    #   map_mode用于指定该文件运行时计算的内容\n    #   map_mode为0代表整个map计算流程，包括获得预测结果、获得真实框、计算VOC_map。\n    #   map_mode为1代表仅仅获得预测结果。\n    #   map_mode为2代表仅仅获得真实框。\n    #   map_mode为3代表仅仅计算VOC_map。\n    #   map_mode为4代表利用COCO工具箱计算当前数据集的0.50:0.95map。需要获得预测结果、获得真实框后并安装pycocotools才行\n    #-------------------------------------------------------------------------------------------------------------------#\n    map_mode        = 0\n    #--------------------------------------------------------------------------------------#\n    #   此处的classes_path用于指定需要测量VOC_map的类别\n    #   一般情况下与训练和预测所用的classes_path一致即可\n    #--------------------------------------------------------------------------------------#\n    classes_path    = 'model_data/ssdd_classes.txt'\n    #--------------------------------------------------------------------------------------#\n    #   MINOVERLAP用于指定想要获得的mAP0.x，mAP0.x的意义是什么请同学们百度一下。\n    #   比如计算mAP0.75，可以设定MINOVERLAP = 0.75。\n    #\n    #   当某一预测框与真实框重合度大于MINOVERLAP时，该预测框被认为是正样本，否则为负样本。\n    #   因此MINOVERLAP的值越大，预测框要预测的越准确才能被认为是正样本，此时算出来的mAP值越低，\n    #--------------------------------------------------------------------------------------#\n    MINOVERLAP      = 0.5\n    
#--------------------------------------------------------------------------------------#\n    #   Because of how mAP is computed, the network must output nearly all of its predicted boxes,\n    #   so confidence should be set very low to collect every candidate box.\n    #   \n    #   This value is normally left unchanged: computing mAP needs nearly all predicted boxes, so the confidence here must not be raised casually.\n    #   To get Recall and Precision at other thresholds, modify score_threhold below instead.\n    #--------------------------------------------------------------------------------------#\n    confidence      = 0.001\n    #--------------------------------------------------------------------------------------#\n    #   The non-maximum suppression IoU used at prediction time; larger values make NMS less strict.\n    #   \n    #   This value is normally left unchanged.\n    #--------------------------------------------------------------------------------------#\n    nms_iou         = 0.5\n    #---------------------------------------------------------------------------------------------------------------#\n    #   Unlike AP, Recall and Precision are not area-based metrics, so they differ at different thresholds.\n    #   \n    #   By default, the Recall and Precision computed here correspond to a threshold of 0.5 (defined here as score_threhold).\n    #   Since computing mAP needs nearly all predicted boxes, the confidence defined above must not be changed casually.\n    #   A separate score_threhold is defined to represent the threshold, so the mAP computation can report the Recall and Precision at that threshold.\n    #---------------------------------------------------------------------------------------------------------------#\n    score_threhold  = 0.5\n    #-------------------------------------------------------#\n    #   map_vis toggles visualization during the VOC mAP computation\n    #-------------------------------------------------------#\n    map_vis         = False\n    #-------------------------------------------------------#\n    #   Path to the folder containing the VOC dataset\n    #   Defaults to the VOC dataset in the repository root\n    #-------------------------------------------------------#\n    VOCdevkit_path  = 'VOCdevkit'\n    #-------------------------------------------------------#\n    #   Output folder for the results, map_out by default\n    #-------------------------------------------------------#\n    map_out_path    = 'map_out'\n\n    image_ids = open(os.path.join(VOCdevkit_path, 
\"VOC2007/ImageSets/Main/test.txt\")).read().strip().split()\n\n    if not os.path.exists(map_out_path):\n        os.makedirs(map_out_path)\n    if not os.path.exists(os.path.join(map_out_path, 'ground-truth')):\n        os.makedirs(os.path.join(map_out_path, 'ground-truth'))\n    if not os.path.exists(os.path.join(map_out_path, 'detection-results')):\n        os.makedirs(os.path.join(map_out_path, 'detection-results'))\n    if not os.path.exists(os.path.join(map_out_path, 'images-optional')):\n        os.makedirs(os.path.join(map_out_path, 'images-optional'))\n\n    class_names, _ = get_classes(classes_path)\n\n    if map_mode == 0 or map_mode == 1:\n        print(\"Load model.\")\n        yolo = YOLO(confidence = confidence, nms_iou = nms_iou)\n        print(\"Load model done.\")\n\n        print(\"Get predict result.\")\n        for image_id in tqdm(image_ids):\n            image_path  = os.path.join(VOCdevkit_path, \"VOC2007/JPEGImages/\"+image_id+\".jpg\")\n            image       = Image.open(image_path)\n            if map_vis:\n                image.save(os.path.join(map_out_path, \"images-optional/\" + image_id + \".jpg\"))\n            yolo.get_map_txt(image_id, image, class_names, map_out_path)\n        print(\"Get predict result done.\")\n        \n    if map_mode == 0 or map_mode == 2:\n        print(\"Get ground truth result.\")\n        for image_id in tqdm(image_ids):\n            with open(os.path.join(map_out_path, \"ground-truth/\"+image_id+\".txt\"), \"w\") as new_f:\n                root = ET.parse(os.path.join(VOCdevkit_path, \"VOC2007/Annotations/\"+image_id+\".xml\")).getroot()\n                for obj in root.findall('object'):\n                    difficult_flag = False\n                    if obj.find('difficult')!=None:\n                        difficult = obj.find('difficult').text\n                        if int(difficult)==1:\n                            difficult_flag = True\n                    obj_name = obj.find('name').text\n     
               if obj_name not in class_names:\n                        continue\n                    bndbox  = obj.find('rotated_bndbox')\n                    x1      = bndbox.find('x1').text\n                    y1      = bndbox.find('y1').text\n                    x2      = bndbox.find('x2').text\n                    y2      = bndbox.find('y2').text\n                    x3      = bndbox.find('x3').text\n                    y3      = bndbox.find('y3').text\n                    x4      = bndbox.find('x4').text\n                    y4      = bndbox.find('y4').text\n                    poly    = np.array([[x1, y1, x2, y2, x3, y3, x4, y4]], dtype=np.int32)\n                    poly    = poly.reshape(4, 2)\n                    (x, y), (w, h), angle = cv2.minAreaRect(poly)  # θ ∈ [0, 90]\n                    if difficult_flag:\n                        new_f.write(\"%s %s %s %s %s %s difficult\\n\" % (obj_name, int(x), int(y), int(w), int(h), angle))\n                    else:\n                        new_f.write(\"%s %s %s %s %s %s\\n\" % (obj_name, int(x), int(y), int(w), int(h), angle))\n        print(\"Get ground truth result done.\")\n\n    if map_mode == 0 or map_mode == 3:\n        print(\"Get map.\")\n        get_map(MINOVERLAP, True, score_threhold = score_threhold, path = map_out_path)\n        print(\"Get map done.\")\n\n    if map_mode == 4:\n        print(\"Get map.\")\n        get_coco_map(class_names = class_names, path = map_out_path)\n        print(\"Get map done.\")\n"
  },
  {
    "path": "hrsc_annotation.py",
    "content": "import os\nimport random\nimport xml.etree.ElementTree as ET\n\nimport numpy as np\nfrom utils.utils_rbox import *\nfrom utils.utils import get_classes\n\n#--------------------------------------------------------------------------------------------------------------------------------#\n#   annotation_mode用于指定该文件运行时计算的内容\n#   annotation_mode为0代表整个标签处理过程，包括获得VOCdevkit/VOC2007/ImageSets里面的txt以及训练用的2007_train.txt、2007_val.txt\n#   annotation_mode为1代表获得VOCdevkit/VOC2007/ImageSets里面的txt\n#   annotation_mode为2代表获得训练用的2007_train.txt、2007_val.txt\n#--------------------------------------------------------------------------------------------------------------------------------#\nannotation_mode     = 0\n#-------------------------------------------------------------------#\n#   必须要修改，用于生成2007_train.txt、2007_val.txt的目标信息\n#   与训练和预测所用的classes_path一致即可\n#   如果生成的2007_train.txt里面没有目标信息\n#   那么就是因为classes没有设定正确\n#   仅在annotation_mode为0和2的时候有效\n#-------------------------------------------------------------------#\nclasses_path        = 'model_data/hrsc_classes.txt'\n#--------------------------------------------------------------------------------------------------------------------------------#\n#   trainval_percent用于指定(训练集+验证集)与测试集的比例，默认情况下 (训练集+验证集):测试集 = 9:1\n#   train_percent用于指定(训练集+验证集)中训练集与验证集的比例，默认情况下 训练集:验证集 = 9:1\n#   仅在annotation_mode为0和1的时候有效\n#--------------------------------------------------------------------------------------------------------------------------------#\ntrainval_percent    = 0.9\ntrain_percent       = 0.9\n#-------------------------------------------------------#\n#   指向VOC数据集所在的文件夹\n#   默认指向根目录下的VOC数据集\n#-------------------------------------------------------#\nVOCdevkit_path  = 'VOCdevkit'\n\nVOCdevkit_sets  = [('2007_HRSC', 'train'), ('2007_HRSC', 'val')]\nclasses, _      = get_classes(classes_path)\n\n#-------------------------------------------------------#\n#   
Count the number of objects per class\n#-------------------------------------------------------#\nphoto_nums  = np.zeros(len(VOCdevkit_sets))\nnums        = np.zeros(len(classes))\ndef convert_annotation(year, image_id, list_file):\n    in_file = open(os.path.join(VOCdevkit_path, 'VOC%s/Annotations/%s.xml'%(year, image_id)), encoding='utf-8')\n    tree=ET.parse(in_file)\n    root = tree.getroot().find('HRSC_Objects')\n\n    for obj in root.iter('HRSC_Object'):\n        difficult = 0 \n        if obj.find('difficult')!=None:\n            difficult = obj.find('difficult').text\n        cls = obj.find('name').text\n        if cls not in classes or int(difficult)==1:\n            continue\n        if obj.find('mbox_cx')==None:\n            continue\n        cls_id = classes.index(cls)\n        cx = float(obj.find('mbox_cx').text)\n        cy = float(obj.find('mbox_cy').text)\n        w  = float(obj.find('mbox_w').text)\n        h  = float(obj.find('mbox_h').text)\n        angle = float(obj.find('mbox_ang').text)\n        b = np.array([[cx, cy, w, h, angle]], dtype=np.float32)\n        b = rbox2poly(b)[0]\n        b = (b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7])\n        list_file.write(\" \" + \",\".join([str(a) for a in b]) + ',' + str(cls_id))\n        \n        nums[classes.index(cls)] = nums[classes.index(cls)] + 1\n        \nif __name__ == \"__main__\":\n    random.seed(0)\n    if \" \" in os.path.abspath(VOCdevkit_path):\n        raise ValueError(\"The dataset folder path and the image file names must not contain spaces, or training will not work properly; please rename them.\")\n\n    if annotation_mode == 0 or annotation_mode == 1:\n        print(\"Generate txt in ImageSets.\")\n        xmlfilepath     = os.path.join(VOCdevkit_path, 'VOC2007_HRSC/Annotations')\n        saveBasePath    = os.path.join(VOCdevkit_path, 'VOC2007_HRSC/ImageSets/Main')\n        temp_xml        = os.listdir(xmlfilepath)\n        total_xml       = []\n        for xml in temp_xml:\n            if xml.endswith(\".xml\"):\n                total_xml.append(xml)\n\n        num     = len(total_xml)  \n 
       indices = range(num)  \n        tv      = int(num*trainval_percent)  \n        tr      = int(tv*train_percent)  \n        trainval= random.sample(indices,tv)  \n        train   = random.sample(trainval,tr)  \n        \n        print(\"train and val size\",tv)\n        print(\"train size\",tr)\n        ftrainval   = open(os.path.join(saveBasePath,'trainval.txt'), 'w')  \n        ftest       = open(os.path.join(saveBasePath,'test.txt'), 'w')  \n        ftrain      = open(os.path.join(saveBasePath,'train.txt'), 'w')  \n        fval        = open(os.path.join(saveBasePath,'val.txt'), 'w')  \n        \n        for i in indices:  \n            name=total_xml[i][:-4]+'\\n'  \n            if i in trainval:  \n                ftrainval.write(name)  \n                if i in train:  \n                    ftrain.write(name)  \n                else:  \n                    fval.write(name)  \n            else:  \n                ftest.write(name)  \n        \n        ftrainval.close()  \n        ftrain.close()  \n        fval.close()  \n        ftest.close()\n        print(\"Generate txt in ImageSets done.\")\n\n    if annotation_mode == 0 or annotation_mode == 2:\n        print(\"Generate 2007_train.txt and 2007_val.txt for train.\")\n        type_index = 0\n        for year, image_set in VOCdevkit_sets:\n            image_ids = open(os.path.join(VOCdevkit_path, 'VOC%s/ImageSets/Main/%s.txt'%(year, image_set)), encoding='utf-8').read().strip().split()\n            list_file = open('%s_%s.txt'%(year, image_set), 'w', encoding='utf-8')\n            for image_id in image_ids:\n                list_file.write('%s/VOC%s/JPEGImages/%s.bmp'%(os.path.abspath(VOCdevkit_path), year, image_id))\n\n                convert_annotation(year, image_id, list_file)\n                list_file.write('\\n')\n            photo_nums[type_index] = len(image_ids)\n            type_index += 1\n            list_file.close()\n        print(\"Generate 2007_train.txt and 2007_val.txt for train done.\")\n  
      \n        def printTable(List1, List2):\n            for i in range(len(List1[0])):\n                print(\"|\", end=' ')\n                for j in range(len(List1)):\n                    print(List1[j][i].rjust(int(List2[j])), end=' ')\n                    print(\"|\", end=' ')\n                print()\n\n        str_nums = [str(int(x)) for x in nums]\n        tableData = [\n            classes, str_nums\n        ]\n        colWidths = [0]*len(tableData)\n        for i in range(len(tableData)):\n            for j in range(len(tableData[i])):\n                if len(tableData[i][j]) > colWidths[i]:\n                    colWidths[i] = len(tableData[i][j])\n        printTable(tableData, colWidths)\n\n        if photo_nums[0] <= 500:\n            print(\"The training set has fewer than 500 images, which is quite small; set a larger number of epochs to ensure enough gradient-descent steps.\")\n\n        if np.sum(nums) == 0:\n            print(\"No objects were found in the dataset; make sure classes_path matches your dataset and that the label names are correct, otherwise training will have no effect!\")\n            print(\"No objects were found in the dataset; make sure classes_path matches your dataset and that the label names are correct, otherwise training will have no effect!\")\n            print(\"No objects were found in the dataset; make sure classes_path matches your dataset and that the label names are correct, otherwise training will have no effect!\")\n            print(\"(Repeated three times because it matters.)\")\n"
  },
  {
    "path": "kmeans_for_anchors.py",
    "content": "#-------------------------------------------------------------------------------------------------------#\n#   kmeans虽然会对数据集中的框进行聚类，但是很多数据集由于框的大小相近，聚类出来的9个框相差不大，\n#   这样的框反而不利于模型的训练。因为不同的特征层适合不同大小的先验框，shape越小的特征层适合越大的先验框\n#   原始网络的先验框已经按大中小比例分配好了，不进行聚类也会有非常好的效果。\n#-------------------------------------------------------------------------------------------------------#\nimport glob\nimport xml.etree.ElementTree as ET\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom tqdm import tqdm\n\n\ndef cas_ratio(box,cluster):\n    ratios_of_box_cluster = box / cluster\n    ratios_of_cluster_box = cluster / box\n    ratios = np.concatenate([ratios_of_box_cluster, ratios_of_cluster_box], axis = -1)\n\n    return np.max(ratios, -1)\n\ndef avg_ratio(box,cluster):\n    return np.mean([np.min(cas_ratio(box[i],cluster)) for i in range(box.shape[0])])\n\ndef kmeans(box,k):\n    #-------------------------------------------------------------#\n    #   取出一共有多少框\n    #-------------------------------------------------------------#\n    row = box.shape[0]\n    \n    #-------------------------------------------------------------#\n    #   每个框各个点的位置\n    #-------------------------------------------------------------#\n    distance = np.empty((row,k))\n    \n    #-------------------------------------------------------------#\n    #   最后的聚类位置\n    #-------------------------------------------------------------#\n    last_clu = np.zeros((row,))\n\n    np.random.seed()\n\n    #-------------------------------------------------------------#\n    #   随机选5个当聚类中心\n    #-------------------------------------------------------------#\n    cluster = box[np.random.choice(row,k,replace = False)]\n\n    iter = 0\n    while True:\n        #-------------------------------------------------------------#\n        #   计算当前框和先验框的宽高比例\n        #-------------------------------------------------------------#\n        for i in range(row):\n            distance[i] = cas_ratio(box[i],cluster)\n  
     \n        #-------------------------------------------------------------#\n        #   Assign each box to its nearest cluster\n        #-------------------------------------------------------------#\n        near = np.argmin(distance,axis=1)\n\n        if (last_clu == near).all():\n            break\n        \n        #-------------------------------------------------------------#\n        #   Update each cluster to the median of its assigned boxes\n        #-------------------------------------------------------------#\n        for j in range(k):\n            cluster[j] = np.median(\n                box[near == j],axis=0)\n\n        last_clu = near\n        if iter % 5 == 0:\n            print('iter: {:d}. avg_ratio:{:.2f}'.format(iter, avg_ratio(box,cluster)))\n        iter += 1\n\n    return cluster, near\n\ndef load_data(path):\n    data = []\n    #-------------------------------------------------------------#\n    #   Look for boxes in every xml file\n    #-------------------------------------------------------------#\n    for xml_file in tqdm(glob.glob('{}/*xml'.format(path))):\n        tree = ET.parse(xml_file)\n        height = int(tree.findtext('./size/height'))\n        width = int(tree.findtext('./size/width'))\n        if height<=0 or width<=0:\n            continue\n        \n        #-------------------------------------------------------------#\n        #   Get the width and height of every object\n        #-------------------------------------------------------------#\n        for obj in tree.iter('object'):\n            xmin = int(float(obj.findtext('bndbox/xmin'))) / width\n            ymin = int(float(obj.findtext('bndbox/ymin'))) / height\n            xmax = int(float(obj.findtext('bndbox/xmax'))) / width\n            ymax = int(float(obj.findtext('bndbox/ymax'))) / height\n\n            xmin = np.float64(xmin)\n            ymin = np.float64(ymin)\n            xmax = np.float64(xmax)\n            ymax = np.float64(ymax)\n            # width and height as fractions of the image size\n            data.append([xmax-xmin,ymax-ymin])\n    return np.array(data)\n\nif __name__ == '__main__':\n    
np.random.seed(0)\n    #-------------------------------------------------------------#\n    #   Running this script processes the xml files in './VOCdevkit/VOC2007/Annotations'\n    #   and generates yolo_anchors.txt\n    #-------------------------------------------------------------#\n    input_shape = [640, 640]\n    anchors_num = 9\n    #-------------------------------------------------------------#\n    #   Load the dataset; VOC xml annotations can be used\n    #-------------------------------------------------------------#\n    path        = 'VOCdevkit/VOC2007/Annotations'\n    \n    #-------------------------------------------------------------#\n    #   Load all the xml files\n    #   Boxes are stored as width,height normalized to the image size\n    #-------------------------------------------------------------#\n    print('Load xmls.')\n    data = load_data(path)\n    print('Load xmls done.')\n    \n    #-------------------------------------------------------------#\n    #   Run the k-means clustering\n    #-------------------------------------------------------------#\n    print('K-means boxes.')\n    cluster, near   = kmeans(data, anchors_num)\n    print('K-means boxes done.')\n    data            = data * np.array([input_shape[1], input_shape[0]])\n    cluster         = cluster * np.array([input_shape[1], input_shape[0]])\n\n    #-------------------------------------------------------------#\n    #   Plot the boxes and cluster centers\n    #-------------------------------------------------------------#\n    for j in range(anchors_num):\n        plt.scatter(data[near == j][:,0], data[near == j][:,1])\n        plt.scatter(cluster[j][0], cluster[j][1], marker='x', c='black')\n    plt.savefig(\"kmeans_for_anchors.jpg\")\n    plt.show()\n    print('Save kmeans_for_anchors.jpg in root dir.')\n\n    cluster = cluster[np.argsort(cluster[:, 0] * cluster[:, 1])]\n    print('avg_ratio:{:.2f}'.format(avg_ratio(data, cluster)))\n    print(cluster)\n\n    f = open(\"yolo_anchors.txt\", 'w')\n    row = np.shape(cluster)[0]\n    for i in range(row):\n        if i == 0:\n            x_y = \"%d,%d\" % (cluster[i][0], cluster[i][1])\n        
else:\n            x_y = \", %d,%d\" % (cluster[i][0], cluster[i][1])\n        f.write(x_y)\n    f.close()\n"
  },
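The kmeans_for_anchors.py script above clusters normalized (width, height) pairs and updates each cluster center with the median of its members; its distance function is defined earlier in the file, outside this excerpt. A minimal self-contained sketch of the same idea, assuming the common 1 - IoU distance for anchor clustering (the names `iou_dist` and `kmeans_wh` are illustrative, not from the repo):

```python
import numpy as np

def iou_dist(boxes, clusters):
    # 1 - IoU between (w, h) pairs, treating all boxes as sharing a top-left corner
    inter = np.minimum(boxes[:, None, 0], clusters[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_c = clusters[:, 0] * clusters[:, 1]
    return 1.0 - inter / (area_b[:, None] + area_c[None, :] - inter)

def kmeans_wh(boxes, k, max_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    # start from k randomly chosen boxes
    clusters = boxes[rng.choice(len(boxes), k, replace=False)].copy()
    near = None
    for _ in range(max_iter):
        # assign every box to its nearest cluster
        new_near = np.argmin(iou_dist(boxes, clusters), axis=1)
        if near is not None and (new_near == near).all():
            break  # assignments stable: converged
        near = new_near
        # median update, as in the script above
        for j in range(k):
            if (near == j).any():
                clusters[j] = np.median(boxes[near == j], axis=0)
    return clusters, near
```

Sorting the resulting clusters by area, as the script does before writing yolo_anchors.txt, keeps the small-to-large anchor convention expected by the training code.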
  {
    "path": "model_data/coco_classes.txt",
    "content": "person\nbicycle\ncar\nmotorbike\naeroplane\nbus\ntrain\ntruck\nboat\ntraffic light\nfire hydrant\nstop sign\nparking meter\nbench\nbird\ncat\ndog\nhorse\nsheep\ncow\nelephant\nbear\nzebra\ngiraffe\nbackpack\numbrella\nhandbag\ntie\nsuitcase\nfrisbee\nskis\nsnowboard\nsports ball\nkite\nbaseball bat\nbaseball glove\nskateboard\nsurfboard\ntennis racket\nbottle\nwine glass\ncup\nfork\nknife\nspoon\nbowl\nbanana\napple\nsandwich\norange\nbroccoli\ncarrot\nhot dog\npizza\ndonut\ncake\nchair\nsofa\npottedplant\nbed\ndiningtable\ntoilet\ntvmonitor\nlaptop\nmouse\nremote\nkeyboard\ncell phone\nmicrowave\noven\ntoaster\nsink\nrefrigerator\nbook\nclock\nvase\nscissors\nteddy bear\nhair drier\ntoothbrush\n"
  },
  {
    "path": "model_data/ssdd_classes.txt",
    "content": "ship\n"
  },
  {
    "path": "model_data/voc_classes.txt",
    "content": "aeroplane\nbicycle\nbird\nboat\nbottle\nbus\ncar\ncat\nchair\ncow\ndiningtable\ndog\nhorse\nmotorbike\nperson\npottedplant\nsheep\nsofa\ntrain\ntvmonitor"
  },
  {
    "path": "model_data/yolo_anchors.txt",
    "content": "12, 16,  19, 36,  40, 28,  36, 75,  76, 55,  72, 146,  142, 110,  192, 243,  459, 401"
  },
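model_data/yolo_anchors.txt stores nine width,height pairs on one comma-separated line, and yolo_training.py later indexes them with anchors_mask = [[6,7,8], [3,4,5], [0,1,2]] so that the largest anchors serve the coarsest feature map. A small stdlib-only sketch of that parsing (variable names are illustrative):

```python
# Parse a yolo_anchors.txt-style line into (w, h) pairs, then group per scale.
line = "12, 16,  19, 36,  40, 28,  36, 75,  76, 55,  72, 146,  142, 110,  192, 243,  459, 401"
values = [float(v) for v in line.split(',')]
anchors = [(values[i], values[i + 1]) for i in range(0, len(values), 2)]

# The repo's convention: mask [6, 7, 8] belongs to the stride-32 (20x20) head.
anchors_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
per_scale = [[anchors[i] for i in mask] for mask in anchors_mask]
# per_scale[0] -> [(142.0, 110.0), (192.0, 243.0), (459.0, 401.0)]
```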
  {
    "path": "nets/__init__.py",
    "content": "#"
  },
  {
    "path": "nets/backbone.py",
    "content": "import torch\nimport torch.nn as nn\n\n\ndef autopad(k, p=None):\n    if p is None:\n        p = k // 2 if isinstance(k, int) else [x // 2 for x in k] \n    return p\n\nclass SiLU(nn.Module):  \n    @staticmethod\n    def forward(x):\n        return x * torch.sigmoid(x)\n    \nclass Conv(nn.Module):\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=SiLU()):  # ch_in, ch_out, kernel, stride, padding, groups\n        super(Conv, self).__init__()\n        self.conv   = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)\n        self.bn     = nn.BatchNorm2d(c2, eps=0.001, momentum=0.03)\n        self.act    = nn.LeakyReLU(0.1, inplace=True) if act is True else (act if isinstance(act, nn.Module) else nn.Identity())\n\n    def forward(self, x):\n        return self.act(self.bn(self.conv(x)))\n\n    def fuseforward(self, x):\n        return self.act(self.conv(x))\n    \nclass Multi_Concat_Block(nn.Module):\n    def __init__(self, c1, c2, c3, n=4, e=1, ids=[0]):\n        super(Multi_Concat_Block, self).__init__()\n        c_ = int(c2 * e)\n        \n        self.ids = ids\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c1, c_, 1, 1)\n        self.cv3 = nn.ModuleList(\n            [Conv(c_ if i ==0 else c2, c2, 3, 1) for i in range(n)]\n        )\n        self.cv4 = Conv(c_ * 2 + c2 * (len(ids) - 2), c3, 1, 1)\n\n    def forward(self, x):\n        x_1 = self.cv1(x)\n        x_2 = self.cv2(x)\n        \n        x_all = [x_1, x_2]\n        # [-1, -3, -5, -6] => [5, 3, 1, 0]\n        for i in range(len(self.cv3)):\n            x_2 = self.cv3[i](x_2)\n            x_all.append(x_2)\n            \n        out = self.cv4(torch.cat([x_all[id] for id in self.ids], 1))\n        return out\n\nclass MP(nn.Module):\n    def __init__(self, k=2):\n        super(MP, self).__init__()\n        self.m = nn.MaxPool2d(kernel_size=k, stride=k)\n\n    def forward(self, x):\n        return self.m(x)\n    \nclass Transition_Block(nn.Module):\n    
def __init__(self, c1, c2):\n        super(Transition_Block, self).__init__()\n        self.cv1 = Conv(c1, c2, 1, 1)\n        self.cv2 = Conv(c1, c2, 1, 1)\n        self.cv3 = Conv(c2, c2, 3, 2)\n        \n        self.mp  = MP()\n\n    def forward(self, x):\n        # 160, 160, 256 => 80, 80, 256 => 80, 80, 128\n        x_1 = self.mp(x)\n        x_1 = self.cv1(x_1)\n        \n        # 160, 160, 256 => 160, 160, 128 => 80, 80, 128\n        x_2 = self.cv2(x)\n        x_2 = self.cv3(x_2)\n        \n        # 80, 80, 128 cat 80, 80, 128 => 80, 80, 256\n        return torch.cat([x_2, x_1], 1)\n    \nclass Backbone(nn.Module):\n    def __init__(self, transition_channels, block_channels, n, phi, pretrained=False):\n        super().__init__()\n        #-----------------------------------------------#\n        #   The input image is 640, 640, 3\n        #-----------------------------------------------#\n        ids = {\n            'l' : [-1, -3, -5, -6],\n            'x' : [-1, -3, -5, -7, -8], \n        }[phi]\n        # 640, 640, 3 => 640, 640, 32 => 320, 320, 64\n        self.stem = nn.Sequential(\n            Conv(3, transition_channels, 3, 1),\n            Conv(transition_channels, transition_channels * 2, 3, 2),\n            Conv(transition_channels * 2, transition_channels * 2, 3, 1),\n        )\n        # 320, 320, 64 => 160, 160, 128 => 160, 160, 256\n        self.dark2 = nn.Sequential(\n            Conv(transition_channels * 2, transition_channels * 4, 3, 2),\n            Multi_Concat_Block(transition_channels * 4, block_channels * 2, transition_channels * 8, n=n, ids=ids),\n        )\n        # 160, 160, 256 => 80, 80, 256 => 80, 80, 512\n        self.dark3 = nn.Sequential(\n            Transition_Block(transition_channels * 8, transition_channels * 4),\n            Multi_Concat_Block(transition_channels * 8, block_channels * 4, transition_channels * 16, n=n, ids=ids),\n        )\n        # 80, 80, 512 => 40, 40, 512 => 40, 40, 1024\n        self.dark4 = nn.Sequential(\n    
        Transition_Block(transition_channels * 16, transition_channels * 8),\n            Multi_Concat_Block(transition_channels * 16, block_channels * 8, transition_channels * 32, n=n, ids=ids),\n        )\n        # 40, 40, 1024 => 20, 20, 1024 => 20, 20, 1024\n        self.dark5 = nn.Sequential(\n            Transition_Block(transition_channels * 32, transition_channels * 16),\n            Multi_Concat_Block(transition_channels * 32, block_channels * 8, transition_channels * 32, n=n, ids=ids),\n        )\n        \n        if pretrained:\n            url = {\n                \"l\" : 'https://github.com/bubbliiiing/yolov7-pytorch/releases/download/v1.0/yolov7_backbone_weights.pth',\n                \"x\" : 'https://github.com/bubbliiiing/yolov7-pytorch/releases/download/v1.0/yolov7_x_backbone_weights.pth',\n            }[phi]\n            checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location=\"cpu\", model_dir=\"./model_data\")\n            self.load_state_dict(checkpoint, strict=False)\n            print(\"Load weights from \" + url.split('/')[-1])\n\n    def forward(self, x):\n        x = self.stem(x)\n        x = self.dark2(x)\n        #-----------------------------------------------#\n        #   The output of dark3 is 80, 80, 512; it is an effective feature layer\n        #-----------------------------------------------#\n        x = self.dark3(x)\n        feat1 = x\n        #-----------------------------------------------#\n        #   The output of dark4 is 40, 40, 1024; it is an effective feature layer\n        #-----------------------------------------------#\n        x = self.dark4(x)\n        feat2 = x\n        #-----------------------------------------------#\n        #   The output of dark5 is 20, 20, 1024; it is an effective feature layer\n        #-----------------------------------------------#\n        x = self.dark5(x)\n        feat3 = x\n        return feat1, feat2, feat3\n"
  },
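The Conv blocks in backbone.py rely on autopad to emulate "same" padding, so stride-1 convolutions preserve the spatial size and stride-2 convolutions halve it (for even input sizes). The arithmetic can be checked standalone with the standard convolution output-size formula (`conv_out` is an illustrative helper, not part of the repo):

```python
def autopad(k, p=None):
    # same logic as nets/backbone.py: default to k // 2 ('same' padding for stride 1)
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]
    return p

def conv_out(size, k, s, p):
    # standard conv output-size formula: floor((size + 2p - k) / s) + 1
    return (size + 2 * p - k) // s + 1

# stride 1 keeps the spatial size; stride 2 halves it
assert conv_out(640, 3, 1, autopad(3)) == 640
assert conv_out(640, 3, 2, autopad(3)) == 320
```

This is why the shape comments in Backbone track a clean 640 -> 320 -> 160 -> 80 -> 40 -> 20 progression: each Transition_Block or stride-2 Conv halves both spatial dimensions exactly.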
  {
    "path": "nets/yolo.py",
"content": "import numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom nets.backbone import Backbone, Multi_Concat_Block, Conv, SiLU, Transition_Block, autopad\n\n\nclass SPPCSPC(nn.Module):\n    # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks\n    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):\n        super(SPPCSPC, self).__init__()\n        c_ = int(2 * c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c1, c_, 1, 1)\n        self.cv3 = Conv(c_, c_, 3, 1)\n        self.cv4 = Conv(c_, c_, 1, 1)\n        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])\n        self.cv5 = Conv(4 * c_, c_, 1, 1)\n        self.cv6 = Conv(c_, c_, 3, 1)\n        # the number of output channels is c2\n        self.cv7 = Conv(2 * c_, c2, 1, 1)\n\n    def forward(self, x):\n        x1 = self.cv4(self.cv3(self.cv1(x)))\n        y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))\n        y2 = self.cv2(x)\n        return self.cv7(torch.cat((y1, y2), dim=1))\n\nclass RepConv(nn.Module):\n    # Represented convolution\n    # https://arxiv.org/abs/2101.03697\n    def __init__(self, c1, c2, k=3, s=1, p=None, g=1, act=SiLU(), deploy=False):\n        super(RepConv, self).__init__()\n        self.deploy         = deploy\n        self.groups         = g\n        self.in_channels    = c1\n        self.out_channels   = c2\n        \n        assert k == 3\n        assert autopad(k, p) == 1\n\n        padding_11  = autopad(k, p) - k // 2\n        self.act    = nn.LeakyReLU(0.1, inplace=True) if act is True else (act if isinstance(act, nn.Module) else nn.Identity())\n\n        if deploy:\n            self.rbr_reparam    = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=True)\n        else:\n            self.rbr_identity   = (nn.BatchNorm2d(num_features=c1, eps=0.001, momentum=0.03) if c2 == c1 and s == 1 else None)\n            self.rbr_dense      = nn.Sequential(\n          
      nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False),\n                nn.BatchNorm2d(num_features=c2, eps=0.001, momentum=0.03),\n            )\n            self.rbr_1x1        = nn.Sequential(\n                nn.Conv2d( c1, c2, 1, s, padding_11, groups=g, bias=False),\n                nn.BatchNorm2d(num_features=c2, eps=0.001, momentum=0.03),\n            )\n\n    def forward(self, inputs):\n        if hasattr(self, \"rbr_reparam\"):\n            return self.act(self.rbr_reparam(inputs))\n        if self.rbr_identity is None:\n            id_out = 0\n        else:\n            id_out = self.rbr_identity(inputs)\n        return self.act(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)\n    \n    def get_equivalent_kernel_bias(self):\n        kernel3x3, bias3x3  = self._fuse_bn_tensor(self.rbr_dense)\n        kernel1x1, bias1x1  = self._fuse_bn_tensor(self.rbr_1x1)\n        kernelid, biasid    = self._fuse_bn_tensor(self.rbr_identity)\n        return (\n            kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid,\n            bias3x3 + bias1x1 + biasid,\n        )\n\n    def _pad_1x1_to_3x3_tensor(self, kernel1x1):\n        if kernel1x1 is None:\n            return 0\n        else:\n            return nn.functional.pad(kernel1x1, [1, 1, 1, 1])\n\n    def _fuse_bn_tensor(self, branch):\n        if branch is None:\n            return 0, 0\n        if isinstance(branch, nn.Sequential):\n            kernel      = branch[0].weight\n            running_mean = branch[1].running_mean\n            running_var = branch[1].running_var\n            gamma       = branch[1].weight\n            beta        = branch[1].bias\n            eps         = branch[1].eps\n        else:\n            assert isinstance(branch, nn.BatchNorm2d)\n            if not hasattr(self, \"id_tensor\"):\n                input_dim = self.in_channels // self.groups\n                kernel_value = np.zeros(\n                    (self.in_channels, input_dim, 3, 3), 
dtype=np.float32\n                )\n                for i in range(self.in_channels):\n                    kernel_value[i, i % input_dim, 1, 1] = 1\n                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)\n            kernel      = self.id_tensor\n            running_mean = branch.running_mean\n            running_var = branch.running_var\n            gamma       = branch.weight\n            beta        = branch.bias\n            eps         = branch.eps\n        std = (running_var + eps).sqrt()\n        t   = (gamma / std).reshape(-1, 1, 1, 1)\n        return kernel * t, beta - running_mean * gamma / std\n\n    def repvgg_convert(self):\n        kernel, bias = self.get_equivalent_kernel_bias()\n        return (\n            kernel.detach().cpu().numpy(),\n            bias.detach().cpu().numpy(),\n        )\n\n    def fuse_conv_bn(self, conv, bn):\n        std     = (bn.running_var + bn.eps).sqrt()\n        bias    = bn.bias - bn.running_mean * bn.weight / std\n\n        t       = (bn.weight / std).reshape(-1, 1, 1, 1)\n        weights = conv.weight * t\n\n        bn      = nn.Identity()\n        conv    = nn.Conv2d(in_channels = conv.in_channels,\n                              out_channels = conv.out_channels,\n                              kernel_size = conv.kernel_size,\n                              stride=conv.stride,\n                              padding = conv.padding,\n                              dilation = conv.dilation,\n                              groups = conv.groups,\n                              bias = True,\n                              padding_mode = conv.padding_mode)\n\n        conv.weight = torch.nn.Parameter(weights)\n        conv.bias   = torch.nn.Parameter(bias)\n        return conv\n\n    def fuse_repvgg_block(self):    \n        if self.deploy:\n            return\n        print(f\"RepConv.fuse_repvgg_block\")\n        self.rbr_dense  = self.fuse_conv_bn(self.rbr_dense[0], self.rbr_dense[1])\n        
\n        self.rbr_1x1    = self.fuse_conv_bn(self.rbr_1x1[0], self.rbr_1x1[1])\n        rbr_1x1_bias    = self.rbr_1x1.bias\n        weight_1x1_expanded = torch.nn.functional.pad(self.rbr_1x1.weight, [1, 1, 1, 1])\n        \n        # Fuse self.rbr_identity\n        if (isinstance(self.rbr_identity, nn.BatchNorm2d) or isinstance(self.rbr_identity, nn.modules.batchnorm.SyncBatchNorm)):\n            identity_conv_1x1 = nn.Conv2d(\n                    in_channels=self.in_channels,\n                    out_channels=self.out_channels,\n                    kernel_size=1,\n                    stride=1,\n                    padding=0,\n                    groups=self.groups, \n                    bias=False)\n            identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.to(self.rbr_1x1.weight.data.device)\n            identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.squeeze().squeeze()\n            identity_conv_1x1.weight.data.fill_(0.0)\n            identity_conv_1x1.weight.data.fill_diagonal_(1.0)\n            identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.unsqueeze(2).unsqueeze(3)\n\n            identity_conv_1x1           = self.fuse_conv_bn(identity_conv_1x1, self.rbr_identity)\n            bias_identity_expanded      = identity_conv_1x1.bias\n            weight_identity_expanded    = torch.nn.functional.pad(identity_conv_1x1.weight, [1, 1, 1, 1])            \n        else:\n            bias_identity_expanded      = torch.nn.Parameter( torch.zeros_like(rbr_1x1_bias) )\n            weight_identity_expanded    = torch.nn.Parameter( torch.zeros_like(weight_1x1_expanded) )            \n        \n        self.rbr_dense.weight   = torch.nn.Parameter(self.rbr_dense.weight + weight_1x1_expanded + weight_identity_expanded)\n        self.rbr_dense.bias     = torch.nn.Parameter(self.rbr_dense.bias + rbr_1x1_bias + bias_identity_expanded)\n                \n        self.rbr_reparam    = self.rbr_dense\n        self.deploy         = 
True\n\n        if self.rbr_identity is not None:\n            del self.rbr_identity\n            self.rbr_identity = None\n\n        if self.rbr_1x1 is not None:\n            del self.rbr_1x1\n            self.rbr_1x1 = None\n\n        if self.rbr_dense is not None:\n            del self.rbr_dense\n            self.rbr_dense = None\n            \ndef fuse_conv_and_bn(conv, bn):\n    fusedconv = nn.Conv2d(conv.in_channels,\n                          conv.out_channels,\n                          kernel_size=conv.kernel_size,\n                          stride=conv.stride,\n                          padding=conv.padding,\n                          groups=conv.groups,\n                          bias=True).requires_grad_(False).to(conv.weight.device)\n\n    w_conv  = conv.weight.clone().view(conv.out_channels, -1)\n    w_bn    = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))\n    # fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))\n    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape).detach())\n\n    b_conv  = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias\n    b_bn    = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))\n    # fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)\n    fusedconv.bias.copy_((torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn).detach())\n    return fusedconv\n\n#---------------------------------------------------#\n#   yolo_body\n#---------------------------------------------------#\nclass YoloBody(nn.Module):\n    def __init__(self, anchors_mask, num_classes, phi, pretrained=False):\n        super(YoloBody, self).__init__()\n        #-----------------------------------------------#\n        #   Parameters of the different yolov7 variants\n        #-----------------------------------------------#\n        transition_channels = {'l' : 32, 'x' : 40}[phi]\n        block_channels      = 32\n   
     panet_channels      = {'l' : 32, 'x' : 64}[phi]\n        e       = {'l' : 2, 'x' : 1}[phi]\n        n       = {'l' : 4, 'x' : 6}[phi]\n        ids     = {'l' : [-1, -2, -3, -4, -5, -6], 'x' : [-1, -3, -5, -7, -8]}[phi]\n        conv    = {'l' : RepConv, 'x' : Conv}[phi]\n        #-----------------------------------------------#\n        #   The input image is 640, 640, 3\n        #-----------------------------------------------#\n\n        #---------------------------------------------------#   \n        #   Build the backbone model\n        #   It returns three effective feature layers whose shapes are:\n        #   80, 80, 512\n        #   40, 40, 1024\n        #   20, 20, 1024\n        #---------------------------------------------------#\n        self.backbone   = Backbone(transition_channels, block_channels, n, phi, pretrained=pretrained)\n\n        #------------------------ enhanced feature extraction network ------------------------# \n        self.upsample   = nn.Upsample(scale_factor=2, mode=\"nearest\")\n\n        # 20, 20, 1024 => 20, 20, 512\n        self.sppcspc                = SPPCSPC(transition_channels * 32, transition_channels * 16)\n        # 20, 20, 512 => 20, 20, 256 => 40, 40, 256\n        self.conv_for_P5            = Conv(transition_channels * 16, transition_channels * 8)\n        # 40, 40, 1024 => 40, 40, 256\n        self.conv_for_feat2         = Conv(transition_channels * 32, transition_channels * 8)\n        # 40, 40, 512 => 40, 40, 256\n        self.conv3_for_upsample1    = Multi_Concat_Block(transition_channels * 16, panet_channels * 4, transition_channels * 8, e=e, n=n, ids=ids)\n\n        # 40, 40, 256 => 40, 40, 128 => 80, 80, 128\n        self.conv_for_P4            = Conv(transition_channels * 8, transition_channels * 4)\n        # 80, 80, 512 => 80, 80, 128\n        self.conv_for_feat1         = Conv(transition_channels * 16, transition_channels * 4)\n        # 80, 80, 256 => 80, 80, 128\n        self.conv3_for_upsample2    = Multi_Concat_Block(transition_channels * 8, panet_channels * 2, transition_channels * 4, e=e, 
n=n, ids=ids)\n\n        # 80, 80, 128 => 40, 40, 256\n        self.down_sample1           = Transition_Block(transition_channels * 4, transition_channels * 4)\n        # 40, 40, 512 => 40, 40, 256\n        self.conv3_for_downsample1  = Multi_Concat_Block(transition_channels * 16, panet_channels * 4, transition_channels * 8, e=e, n=n, ids=ids)\n\n        # 40, 40, 256 => 20, 20, 512\n        self.down_sample2           = Transition_Block(transition_channels * 8, transition_channels * 8)\n        # 20, 20, 1024 => 20, 20, 512\n        self.conv3_for_downsample2  = Multi_Concat_Block(transition_channels * 32, panet_channels * 8, transition_channels * 16, e=e, n=n, ids=ids)\n        #------------------------ enhanced feature extraction network ------------------------# \n\n        # 80, 80, 128 => 80, 80, 256\n        self.rep_conv_1 = conv(transition_channels * 4, transition_channels * 8, 3, 1)\n        # 40, 40, 256 => 40, 40, 512\n        self.rep_conv_2 = conv(transition_channels * 8, transition_channels * 16, 3, 1)\n        # 20, 20, 512 => 20, 20, 1024\n        self.rep_conv_3 = conv(transition_channels * 16, transition_channels * 32, 3, 1)\n\n        # 4 (box) + 1 (angle) + 1 (confidence) + num_classes\n        # 80, 80, 256 => 80, 80, 3 * 26 (5 + 1 + 20) & 86 (5 + 1 + 80)\n        self.yolo_head_P3 = nn.Conv2d(transition_channels * 8, len(anchors_mask[2]) * (5 + 1 + num_classes), 1)\n        # 40, 40, 512 => 40, 40, 3 * 26 & 86\n        self.yolo_head_P4 = nn.Conv2d(transition_channels * 16, len(anchors_mask[1]) * (5 + 1 + num_classes), 1)\n        # 20, 20, 1024 => 20, 20, 3 * 26 & 86\n        self.yolo_head_P5 = nn.Conv2d(transition_channels * 32, len(anchors_mask[0]) * (5 + 1 + num_classes), 1)\n\n    def fuse(self):\n        print('Fusing layers... 
')\n        for m in self.modules():\n            if isinstance(m, RepConv):\n                m.fuse_repvgg_block()\n            elif type(m) is Conv and hasattr(m, 'bn'):\n                m.conv = fuse_conv_and_bn(m.conv, m.bn)\n                delattr(m, 'bn')\n                m.forward = m.fuseforward\n        return self\n    \n    def forward(self, x):\n        #  backbone\n        feat1, feat2, feat3 = self.backbone.forward(x)\n        \n        #------------------------ enhanced feature extraction network ------------------------# \n        # 20, 20, 1024 => 20, 20, 512\n        P5          = self.sppcspc(feat3)\n        # 20, 20, 512 => 20, 20, 256\n        P5_conv     = self.conv_for_P5(P5)\n        # 20, 20, 256 => 40, 40, 256\n        P5_upsample = self.upsample(P5_conv)\n        # 40, 40, 256 cat 40, 40, 256 => 40, 40, 512\n        P4          = torch.cat([self.conv_for_feat2(feat2), P5_upsample], 1)\n        # 40, 40, 512 => 40, 40, 256\n        P4          = self.conv3_for_upsample1(P4)\n\n        # 40, 40, 256 => 40, 40, 128\n        P4_conv     = self.conv_for_P4(P4)\n        # 40, 40, 128 => 80, 80, 128\n        P4_upsample = self.upsample(P4_conv)\n        # 80, 80, 128 cat 80, 80, 128 => 80, 80, 256\n        P3          = torch.cat([self.conv_for_feat1(feat1), P4_upsample], 1)\n        # 80, 80, 256 => 80, 80, 128\n        P3          = self.conv3_for_upsample2(P3)\n\n        # 80, 80, 128 => 40, 40, 256\n        P3_downsample = self.down_sample1(P3)\n        # 40, 40, 256 cat 40, 40, 256 => 40, 40, 512\n        P4 = torch.cat([P3_downsample, P4], 1)\n        # 40, 40, 512 => 40, 40, 256\n        P4 = self.conv3_for_downsample1(P4)\n\n        # 40, 40, 256 => 20, 20, 512\n        P4_downsample = self.down_sample2(P4)\n        # 20, 20, 512 cat 20, 20, 512 => 20, 20, 1024\n        P5 = torch.cat([P4_downsample, P5], 1)\n        # 20, 20, 1024 => 20, 20, 512\n        P5 = self.conv3_for_downsample2(P5)\n        #------------------------ enhanced feature extraction network ------------------------# \n      
  # P3 80, 80, 128 \n        # P4 40, 40, 256\n        # P5 20, 20, 512\n        \n        P3 = self.rep_conv_1(P3)\n        P4 = self.rep_conv_2(P4)\n        P5 = self.rep_conv_3(P5)\n        #---------------------------------------------------#\n        #   The third feature layer\n        #   y3=(batch_size, 3 * (5 + 1 + num_classes), 80, 80)\n        #---------------------------------------------------#\n        out2 = self.yolo_head_P3(P3)\n        #---------------------------------------------------#\n        #   The second feature layer\n        #   y2=(batch_size, 3 * (5 + 1 + num_classes), 40, 40)\n        #---------------------------------------------------#\n        out1 = self.yolo_head_P4(P4)\n        #---------------------------------------------------#\n        #   The first feature layer\n        #   y1=(batch_size, 3 * (5 + 1 + num_classes), 20, 20)\n        #---------------------------------------------------#\n        out0 = self.yolo_head_P5(P5)\n\n        return [out0, out1, out2]\n"
  },
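Both fuse_conv_and_bn and RepConv._fuse_bn_tensor in nets/yolo.py fold inference-time batch norm into the preceding convolution: each output-channel slice of the weight is scaled by gamma/std, and the remaining terms collapse into a bias. Because convolution is linear, the identity can be verified with a 1x1 convolution viewed as a plain matrix multiply (a numpy sketch under that simplification, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
ci, co, n = 4, 3, 5                    # in channels, out channels, spatial positions
W = rng.normal(size=(co, ci))          # a 1x1 conv is just a channel-mixing matrix
x = rng.normal(size=(ci, n))

# batch-norm parameters frozen for inference
gamma, beta = rng.normal(size=co), rng.normal(size=co)
mean, var, eps = rng.normal(size=co), rng.random(co) + 0.1, 1e-5
std = np.sqrt(var + eps)

# reference: conv followed by batch norm
y_ref = (W @ x - mean[:, None]) / std[:, None] * gamma[:, None] + beta[:, None]

# folded conv: scale each output-channel row of W, fold everything else into a bias
W_fused = W * (gamma / std)[:, None]
b_fused = beta - mean * gamma / std
y_fused = W_fused @ x + b_fused[:, None]

assert np.allclose(y_ref, y_fused)
```

The same per-output-channel scaling generalizes to 3x3 kernels, which is exactly what the `torch.diag(bn.weight.div(...))` matrix multiply in fuse_conv_and_bn implements.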
  {
    "path": "nets/yolo_training.py",
"content": "import math\nfrom copy import deepcopy\nfrom functools import partial\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom utils.kld_loss import compute_kld_loss, KLDloss\n\ndef smooth_BCE(eps=0.1):  # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441\n    # return positive, negative label smoothing BCE targets\n    return 1.0 - 0.5 * eps, 0.5 * eps\n\nclass YOLOLoss(nn.Module):\n    def __init__(self, anchors, num_classes, input_shape, anchors_mask = [[6,7,8], [3,4,5], [0,1,2]], label_smoothing = 0):\n        super(YOLOLoss, self).__init__()\n        #-----------------------------------------------------------#\n        #   The anchors for the 13x13 feature layer are [142, 110],[192, 243],[459, 401]\n        #   The anchors for the 26x26 feature layer are [36, 75],[76, 55],[72, 146]\n        #   The anchors for the 52x52 feature layer are [12, 16],[19, 36],[40, 28]\n        #-----------------------------------------------------------#\n        self.anchors        = [anchors[mask] for mask in anchors_mask]\n        self.num_classes    = num_classes\n        self.input_shape    = input_shape\n        self.anchors_mask   = anchors_mask\n\n        self.balance        = [0.4, 1.0, 4]\n        self.stride         = [32, 16, 8]\n        \n        self.box_ratio      = 0.05\n        self.obj_ratio      = 1 * (input_shape[0] * input_shape[1]) / (640 ** 2)\n        self.cls_ratio      = 0.5 * (num_classes / 80)\n        self.threshold      = 4\n\n        self.cp, self.cn                    = smooth_BCE(eps=label_smoothing)  \n        self.BCEcls, self.BCEobj, self.gr   = nn.BCEWithLogitsLoss(), nn.BCEWithLogitsLoss(), 1\n        self.kldbbox = KLDloss(taf=1.0, fun='sqrt')\n\n    def bbox_iou(self, box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):\n        box2 = box2.T\n\n        if x1y1x2y2:\n            b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]\n            b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], 
box2[3]\n        else:\n            b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2\n            b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2\n            b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2\n            b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2\n\n        inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \\\n                (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)\n\n        w1, h1  = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps\n        w2, h2  = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps\n        union   = w1 * h1 + w2 * h2 - inter + eps\n\n        iou = inter / union\n\n        if GIoU or DIoU or CIoU:\n            cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1)  # convex (smallest enclosing box) width\n            ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1)  # convex height\n            if CIoU or DIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1\n                c2 = cw ** 2 + ch ** 2 + eps  # convex diagonal squared\n                rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +\n                        (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4  # center distance squared\n                if DIoU:\n                    return iou - rho2 / c2  # DIoU\n                elif CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47\n                    v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)\n                    with torch.no_grad():\n                        alpha = v / (v - iou + (1 + eps))\n                    return iou - (rho2 / c2 + v * alpha)  # CIoU\n            else:  # GIoU https://arxiv.org/pdf/1902.09630.pdf\n                c_area = cw * ch + eps  # convex area\n                return iou - (c_area - union) / c_area  # GIoU\n        else:\n            return iou  # IoU\n    \n    def __call__(self, predictions, targets, imgs): \n        
#-------------------------------------------#\n        #   Reshape the incoming predictions\n        #   bs, 3 * (5 + 1 + num_classes), 20, 20 => bs, 3, 20, 20, 5 + 1 + num_classes\n        #   bs, 3 * (5 + 1 + num_classes), 40, 40 => bs, 3, 40, 40, 5 + 1 + num_classes\n        #   bs, 3 * (5 + 1 + num_classes), 80, 80 => bs, 3, 80, 80, 5 + 1 + num_classes\n        #-------------------------------------------#\n        for i in range(len(predictions)):\n            bs, _, h, w = predictions[i].size()\n            predictions[i] = predictions[i].view(bs, len(self.anchors_mask[i]), -1, h, w).permute(0, 1, 3, 4, 2).contiguous()\n            \n        #-------------------------------------------#\n        #   Get the working device\n        #-------------------------------------------#\n        device              = targets.device\n        #-------------------------------------------#\n        #   Initialize the three loss components\n        #-------------------------------------------#\n        cls_loss, box_loss, obj_loss    = torch.zeros(1, device = device), torch.zeros(1, device = device), torch.zeros(1, device = device)\n        \n        #-------------------------------------------#\n        #   Match the positive samples\n        #-------------------------------------------#\n        bs, as_, gjs, gis, targets, anchors = self.build_targets(predictions, targets, imgs)\n        #-------------------------------------------#\n        #   Compute the width and height of each feature layer\n        #-------------------------------------------#\n        feature_map_sizes = [torch.tensor(prediction.shape, device=device)[[3, 2, 3, 2]].type_as(prediction) for prediction in predictions] \n    \n        #-------------------------------------------#\n        #   Compute the loss, handling each of the three feature layers in turn\n        #-------------------------------------------#\n        for i, prediction in enumerate(predictions): \n            #-------------------------------------------#\n            #   image, anchor, gridy, gridx\n            #-------------------------------------------#\n            b, a, gj, gi    = bs[i], as_[i], gjs[i], gis[i]\n            tobj            = torch.zeros_like(prediction[..., 0], device=device)  # 
target obj\n\n            #-------------------------------------------#\n            #   Get the number of targets; if it is greater than 0,\n            #   compute the classification and regression losses\n            #-------------------------------------------#\n            n = b.shape[0]\n            if n:\n                prediction_pos = prediction[b, a, gj, gi]  # prediction subset corresponding to targets\n                # prediction_pos [xywh angle conf cls ]\n                #-------------------------------------------#\n                #   Compute the regression loss of the matched positive samples\n                #-------------------------------------------#\n                #-------------------------------------------#\n                #   grid: the x and y coordinates of the positive samples\n                #-------------------------------------------#\n                grid    = torch.stack([gi, gj], dim=1)\n                #-------------------------------------------#\n                #   Decode to obtain the predicted boxes\n                #-------------------------------------------#\n                xy      = prediction_pos[:, :2].sigmoid() * 2. 
- 0.5\n                wh      = (prediction_pos[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]\n                angle   = (prediction_pos[:, 4:5].sigmoid() - 0.5) * math.pi\n                box_theta = torch.cat((xy, wh, angle), 1)\n                #-------------------------------------------#\n                #   Map the ground-truth boxes onto the feature layer\n                #-------------------------------------------#\n                selected_tbox           = targets[i][:, 2:6] * feature_map_sizes[i]\n                selected_tbox[:, :2]    -= grid.type_as(prediction)\n                theta                   = targets[i][:, 6:7]\n                selected_tbox_theta     = torch.cat((selected_tbox, theta),1)\n                #-------------------------------------------#\n                #   Compute the regression (KLD) loss between the predicted and ground-truth boxes\n                #-------------------------------------------#\n                kldloss                 = self.kldbbox(box_theta, selected_tbox_theta)\n                box_loss                += kldloss.mean()\n                #-------------------------------------------#\n                #   Build the objectness target from the quality (1 - KLD) of the predictions\n                #-------------------------------------------#\n                tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * (1 - kldloss).detach().clamp(0).type(tobj.dtype)  # iou ratio\n\n                #-------------------------------------------#\n                #   Compute the classification loss of the matched positive samples\n                #-------------------------------------------#\n                selected_tcls               = targets[i][:, 1].long()\n                t                           = torch.full_like(prediction_pos[:, 6:], self.cn, device=device)  # targets\n                t[range(n), selected_tcls]  = self.cp\n                cls_loss                    += self.BCEcls(prediction_pos[:, 6:], t)  # BCE\n\n            #-------------------------------------------#\n            #   Compute the objectness (whether a target exists) loss\n            #   and weight it by each feature layer's balance factor\n            #-------------------------------------------#\n            obj_loss 
+= self.BCEobj(prediction[..., 5], tobj) * self.balance[i]  # obj loss\n            \n        #-------------------------------------------#\n        #   将各个部分的损失乘上比例\n        #   然后全加起来\n        #-------------------------------------------#\n        box_loss    *= self.box_ratio\n        obj_loss    *= self.obj_ratio\n        cls_loss    *= self.cls_ratio\n        \n        loss    = box_loss + obj_loss + cls_loss\n        return loss\n        \n    def xywh2xyxy(self, x):\n        # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2]\n        y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n        y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x\n        y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y\n        y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x\n        y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y\n        return y\n    \n    def box_iou(self, box1, box2):\n        # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py\n        \"\"\"\n        Return intersection-over-union (Jaccard index) of boxes.\n        Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n        Arguments:\n            box1 (Tensor[N, 4])\n            box2 (Tensor[M, 4])\n        Returns:\n            iou (Tensor[N, M]): the NxM matrix containing the pairwise\n                IoU values for every element in boxes1 and boxes2\n        \"\"\"\n        def box_area(box):\n            # box = 4xn\n            return (box[2] - box[0]) * (box[3] - box[1])\n\n        area1 = box_area(box1.T)\n        area2 = box_area(box2.T)\n\n        # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)\n        inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)\n        return inter / (area1[:, None] + area2 - inter)  # iou = inter / (area1 + area2 - inter)\n\n    def build_targets(self, predictions, targets, imgs):\n        
#-------------------------------------------#\n        #   匹配正样本\n        #-------------------------------------------#\n        indices, anch       = self.find_3_positive(predictions, targets)\n\n        matching_bs         = [[] for _ in predictions]\n        matching_as         = [[] for _ in predictions]\n        matching_gjs        = [[] for _ in predictions]\n        matching_gis        = [[] for _ in predictions]\n        matching_targets    = [[] for _ in predictions]\n        matching_anchs      = [[] for _ in predictions]\n        \n        #-------------------------------------------#\n        #   一共三层\n        #-------------------------------------------#\n        num_layer = len(predictions)\n        #-------------------------------------------#\n        #   对batch_size进行循环，进行OTA匹配\n        #   在batch_size循环中对layer进行循环\n        #-------------------------------------------#\n        for batch_idx in range(predictions[0].shape[0]):\n            #-------------------------------------------#\n            #   先判断匹配上的真实框哪些属于该图片\n            #-------------------------------------------#\n            b_idx       = targets[:, 0]==batch_idx\n            this_target = targets[b_idx]\n            #  targets (tensor): (n_gt_all_batch, [img_index clsid cx cy l s theta ])\n            #-------------------------------------------#\n            #   如果没有真实框属于该图片则continue\n            #-------------------------------------------#\n            if this_target.shape[0] == 0:\n                continue\n            \n            #-------------------------------------------#\n            #   真实框的坐标进行缩放\n            #-------------------------------------------#\n            txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]\n            #-------------------------------------------#\n            #   拼接角度信息，得到[cx, cy, w, h, theta]格式的真实框\n            #-------------------------------------------#\n            txyxy = torch.cat((txywh, this_target[:,6:]), dim=-1)\n\n            pxyxys      = []\n            p_cls       = []\n            p_obj       = []\n            from_which_layer = []\n            all_b       = []\n            all_a       = []\n            all_gj      = []\n            all_gi      = []\n            all_anch    = []\n            \n            #-------------------------------------------#\n            #   对三个layer进行循环\n            #-------------------------------------------#\n            for i, prediction in enumerate(predictions):\n                #-------------------------------------------#\n                #   b代表第几张图片 a代表第几个先验框\n                #   gj代表y轴，gi代表x轴\n                #-------------------------------------------#\n                b, a, gj, gi    = indices[i]\n                idx             = (b == batch_idx)\n                b, a, gj, gi    = b[idx], a[idx], gj[idx], gi[idx]\n\n                all_b.append(b)\n                all_a.append(a)\n                all_gj.append(gj)\n                all_gi.append(gi)\n                all_anch.append(anch[i][idx])\n                from_which_layer.append(torch.ones(size=(len(b),)) * i)\n                \n                #-------------------------------------------#\n                #   取出这个真实框对应的预测结果\n                #-------------------------------------------#\n                fg_pred = prediction[b, a, gj, gi]\n                p_obj.append(fg_pred[:, 5:6]) # 序号4为角度theta，序号5为置信度\n                p_cls.append(fg_pred[:, 6:])\n                \n                #-------------------------------------------#\n                #   获得网格后，进行解码\n                #-------------------------------------------#\n                grid    = torch.stack([gi, gj], dim=1).type_as(fg_pred)\n                pxy     = (fg_pred[:, :2].sigmoid() * 2. 
- 0.5 + grid) * self.stride[i]\n                pwh     = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i]\n                pangle  = (fg_pred[:, 4:5].sigmoid() - 0.5) * math.pi\n                pxywh   = torch.cat([pxy, pwh, pangle], dim=-1)\n                pxyxys.append(pxywh)\n            \n            #-------------------------------------------#\n            #   判断是否存在对应的预测框，不存在则跳过\n            #-------------------------------------------#\n            pxyxys = torch.cat(pxyxys, dim=0)\n            if pxyxys.shape[0] == 0:\n                continue\n            \n            #-------------------------------------------#\n            #   进行堆叠\n            #-------------------------------------------#\n            p_obj       = torch.cat(p_obj, dim=0)\n            p_cls       = torch.cat(p_cls, dim=0)\n            from_which_layer = torch.cat(from_which_layer, dim=0)\n            all_b       = torch.cat(all_b, dim=0)\n            all_a       = torch.cat(all_a, dim=0)\n            all_gj      = torch.cat(all_gj, dim=0)\n            all_gi      = torch.cat(all_gi, dim=0)\n            all_anch    = torch.cat(all_anch, dim=0)\n        \n            #-------------------------------------------------------------#\n            #   计算当前图片中，真实框与旋转预测框的KLD回归损失\n            #   compute_kld_loss经过归一化后，范围为0-1\n            #   真实框与预测框重合度越大，pair_wise_iou_loss越小\n            #-------------------------------------------------------------#\n            pair_wise_iou_loss = compute_kld_loss(txyxy, pxyxys, taf=1.0, fun='sqrt')\n            pair_wise_iou      = 1 - pair_wise_iou_loss\n\n            #-------------------------------------------#\n            #   最多取二十个预测框与真实框的重合程度\n            #   然后求和，找到每个真实框对应几个预测框\n            #-------------------------------------------#\n            top_k, _    = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1)\n            dynamic_ks  = torch.clamp(top_k.sum(1).int(), min=1)\n\n        
    #-------------------------------------------#\n            #   gt_cls_per_image    种类的真实信息\n            #-------------------------------------------#\n            gt_cls_per_image = F.one_hot(this_target[:, 1].to(torch.int64), self.num_classes).float().unsqueeze(1).repeat(1, pxyxys.shape[0], 1)\n            \n            #-------------------------------------------#\n            #   cls_preds_  种类置信度的预测信息\n            #               cls_preds_越接近于1，y越接近于1\n            #               y / (1 - y)越接近于无穷大\n            #               也就是种类置信度预测的越准\n            #               pair_wise_cls_loss越小\n            #-------------------------------------------#\n            num_gt              = this_target.shape[0]\n            cls_preds_          = p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()\n            y                   = cls_preds_.sqrt_()\n            pair_wise_cls_loss  = F.binary_cross_entropy_with_logits(torch.log(y / (1 - y)), gt_cls_per_image, reduction=\"none\").sum(-1)\n            del cls_preds_\n        \n            #-------------------------------------------#\n            #   求cost的总和\n            #-------------------------------------------#\n            cost = (\n                pair_wise_cls_loss\n                + 3.0 * pair_wise_iou_loss\n            )\n\n            #-------------------------------------------#\n            #   求cost最小的k个预测框\n            #-------------------------------------------#\n            matching_matrix = torch.zeros_like(cost)\n            for gt_idx in range(num_gt):\n                _, pos_idx = torch.topk(cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False)\n                matching_matrix[gt_idx][pos_idx] = 1.0\n\n            del top_k, dynamic_ks\n\n            #-------------------------------------------#\n            #   如果一个预测框对应多个真实框\n            #   只使用这个预测框最对应的真实框\n            #-------------------------------------------#\n            
anchor_matching_gt = matching_matrix.sum(0)\n            if (anchor_matching_gt > 1).sum() > 0:\n                _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)\n                matching_matrix[:, anchor_matching_gt > 1]          *= 0.0\n                matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0\n            fg_mask_inboxes = matching_matrix.sum(0) > 0.0\n            matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)\n\n            #-------------------------------------------#\n            #   取出符合条件的框\n            #-------------------------------------------#\n            from_which_layer    = from_which_layer.to(fg_mask_inboxes.device)[fg_mask_inboxes]\n            all_b               = all_b[fg_mask_inboxes]\n            all_a               = all_a[fg_mask_inboxes]\n            all_gj              = all_gj[fg_mask_inboxes]\n            all_gi              = all_gi[fg_mask_inboxes]\n            all_anch            = all_anch[fg_mask_inboxes]\n            this_target         = this_target[matched_gt_inds]\n        \n            for i in range(num_layer):\n                layer_idx = from_which_layer == i\n                matching_bs[i].append(all_b[layer_idx])\n                matching_as[i].append(all_a[layer_idx])\n                matching_gjs[i].append(all_gj[layer_idx])\n                matching_gis[i].append(all_gi[layer_idx])\n                matching_targets[i].append(this_target[layer_idx])\n                matching_anchs[i].append(all_anch[layer_idx])\n\n        for i in range(num_layer):\n            matching_bs[i]      = torch.cat(matching_bs[i], dim=0) if len(matching_bs[i]) != 0 else torch.Tensor(matching_bs[i])\n            matching_as[i]      = torch.cat(matching_as[i], dim=0) if len(matching_as[i]) != 0 else torch.Tensor(matching_as[i])\n            matching_gjs[i]     = torch.cat(matching_gjs[i], dim=0) if len(matching_gjs[i]) != 0 else torch.Tensor(matching_gjs[i])\n            matching_gis[i]     = 
torch.cat(matching_gis[i], dim=0) if len(matching_gis[i]) != 0 else torch.Tensor(matching_gis[i])\n            matching_targets[i] = torch.cat(matching_targets[i], dim=0) if len(matching_targets[i]) != 0 else torch.Tensor(matching_targets[i])\n            matching_anchs[i]   = torch.cat(matching_anchs[i], dim=0) if len(matching_anchs[i]) != 0 else torch.Tensor(matching_anchs[i])\n\n        return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs\n\n    def find_3_positive(self, predictions, targets):\n        #------------------------------------#\n        #   获得每个特征层先验框的数量\n        #   与真实框的数量\n        #------------------------------------#\n        num_anchor, num_gt  = len(self.anchors_mask[0]), targets.shape[0]\n        #------------------------------------#\n        #   创建空列表存放indices和anchors\n        #------------------------------------#\n        indices, anchors    = [], []\n        #------------------------------------#\n        #   创建8个1\n        #   序号0,1为1\n        #   序号2:6为特征层的宽高\n        #   序号6,7为1\n        #------------------------------------#\n        gain    = torch.ones(8, device=targets.device)\n        #------------------------------------#\n        #   ai      [num_anchor, num_gt]\n        #   targets [num_gt, 7] => [num_anchor, num_gt, 8]\n        #------------------------------------#\n        ai      = torch.arange(num_anchor, device=targets.device).float().view(num_anchor, 1).repeat(1, num_gt)\n        targets = torch.cat((targets.repeat(num_anchor, 1, 1), ai[:, :, None]), 2)  # append anchor indices\n        # targets (tensor): (na, n_gt_all_batch, [img_index, clsid, cx, cy, l, s, theta, anchor_index])\n        g   = 0.5 # offsets\n        off = torch.tensor([\n            [0, 0],\n            [1, 0], [0, 1], [-1, 0], [0, -1],  # j,k,l,m\n            # [1, 1], [1, -1], [-1, 1], [-1, -1],  # jk,jm,lk,lm\n        ], device=targets.device).float() * g \n\n        for i in range(len(predictions)):\n            #----------------------------------------------------#\n            #   将先验框除以stride，获得相对于特征层的先验框。\n            #   anchors_i [num_anchor, 2]\n            #----------------------------------------------------#\n            anchors_i, shape = torch.from_numpy(self.anchors[i] / self.stride[i]).type_as(predictions[i]), predictions[i].shape\n            #-------------------------------------------#\n            #   计算获得对应特征层的宽高\n            #-------------------------------------------#\n            gain[2:6] = torch.tensor(predictions[i].shape)[[3, 2, 3, 2]]\n            \n            #-------------------------------------------#\n            #   将真实框乘上gain，\n            #   其实就是将真实框映射到特征层上\n            #-------------------------------------------#\n            t = targets * gain\n            if num_gt:\n                #-------------------------------------------#\n                #   计算真实框与先验框高宽的比值\n                #   然后根据比值大小进行判断，\n                #   判断结果用于取出，获得所有先验框对应的真实框\n                #   r   [num_anchor, num_gt, 2]\n                #   t   [num_anchor, num_gt, 8] => [num_matched_anchor, 8]\n                #-------------------------------------------#\n                r = t[:, :, 4:6] / anchors_i[:, None]\n                j = torch.max(r, 1. / r).max(2)[0] < self.threshold\n                t = t[j]  # filter\n                \n                #-------------------------------------------#\n                #   gxy 获得所有先验框对应的真实框的x轴y轴坐标\n                #   gxi 取相对于该特征层的右下角的坐标\n                #-------------------------------------------#\n                gxy     = t[:, 2:4] # grid xy\n                gxi     = gain[[2, 3]] - gxy # inverse\n                j, k    = ((gxy % 1. < g) & (gxy > 1.)).T\n                l, m    = ((gxi % 1. 
< g) & (gxi > 1.)).T\n                j       = torch.stack((torch.ones_like(j), j, k, l, m))\n                \n                #-------------------------------------------#\n                #   t   重复5次，使用满足条件的j进行框的提取\n                #   j   一共五行，代表当前特征点在五个\n                #       [0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]\n                #       方向是否存在\n                #-------------------------------------------#\n                t       = t.repeat((5, 1, 1))[j]\n                offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]\n            else:\n                t = targets[0]\n                offsets = 0\n\n            #-------------------------------------------#\n            #   b   代表属于第几个图片\n            #   gxy 代表该真实框所处的x、y中心坐标\n            #   gwh 代表该真实框的wh坐标\n            #   gij 代表真实框所属的特征点坐标\n            #-------------------------------------------#\n            b, c    = t[:, :2].long().T  # image, class\n            gxy     = t[:, 2:4]  # grid xy\n            gwh     = t[:, 4:6]  # grid wh\n            gij     = (gxy - offsets).long()\n            gi, gj  = gij.T  # grid xy indices\n\n            #-------------------------------------------#\n            #   gj、gi不能超出特征层范围\n            #   a代表属于该特征点的第几个先验框\n            #-------------------------------------------#\n            a = t[:, -1].long()  # anchor indices\n            indices.append((b, a, gj.clamp_(0, shape[2] - 1), gi.clamp_(0, shape[3] - 1)))  # image, anchor, grid indices\n            anchors.append(anchors_i[a])  # anchors\n\n        return indices, anchors\n\ndef is_parallel(model):\n    # Returns True if model is of type DP or DDP\n    return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)\n\ndef de_parallel(model):\n    # De-parallelize a model: returns single-GPU model if model is of type DP or DDP\n    return model.module if is_parallel(model) else model\n    \ndef copy_attr(a, b, include=(), exclude=()):\n    # Copy attributes from b to a, 
options to only include [...] and to exclude [...]\n    for k, v in b.__dict__.items():\n        if (len(include) and k not in include) or k.startswith('_') or k in exclude:\n            continue\n        else:\n            setattr(a, k, v)\n\nclass ModelEMA:\n    \"\"\" Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models\n    Keeps a moving average of everything in the model state_dict (parameters and buffers)\n    For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage\n    \"\"\"\n\n    def __init__(self, model, decay=0.9999, tau=2000, updates=0):\n        # Create EMA\n        self.ema = deepcopy(de_parallel(model)).eval()  # FP32 EMA\n        # if next(model.parameters()).device.type != 'cpu':\n        #     self.ema.half()  # FP16 EMA\n        self.updates = updates  # number of EMA updates\n        self.decay = lambda x: decay * (1 - math.exp(-x / tau))  # decay exponential ramp (to help early epochs)\n        for p in self.ema.parameters():\n            p.requires_grad_(False)\n\n    def update(self, model):\n        # Update EMA parameters\n        with torch.no_grad():\n            self.updates += 1\n            d = self.decay(self.updates)\n\n            msd = de_parallel(model).state_dict()  # model state_dict\n            for k, v in self.ema.state_dict().items():\n                if v.dtype.is_floating_point:\n                    v *= d\n                    v += (1 - d) * msd[k].detach()\n\n    def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):\n        # Update EMA attributes\n        copy_attr(self.ema, model, include, exclude)\n\ndef weights_init(net, init_type='normal', init_gain = 0.02):\n    def init_func(m):\n        classname = m.__class__.__name__\n        if hasattr(m, 'weight') and classname.find('Conv') != -1:\n            if init_type == 'normal':\n                torch.nn.init.normal_(m.weight.data, 0.0, init_gain)\n          
  elif init_type == 'xavier':\n                torch.nn.init.xavier_normal_(m.weight.data, gain=init_gain)\n            elif init_type == 'kaiming':\n                torch.nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')\n            elif init_type == 'orthogonal':\n                torch.nn.init.orthogonal_(m.weight.data, gain=init_gain)\n            else:\n                raise NotImplementedError('initialization method [%s] is not implemented' % init_type)\n        elif classname.find('BatchNorm2d') != -1:\n            torch.nn.init.normal_(m.weight.data, 1.0, 0.02)\n            torch.nn.init.constant_(m.bias.data, 0.0)\n    print('initialize network with %s type' % init_type)\n    net.apply(init_func)\n\ndef get_lr_scheduler(lr_decay_type, lr, min_lr, total_iters, warmup_iters_ratio = 0.05, warmup_lr_ratio = 0.1, no_aug_iter_ratio = 0.05, step_num = 10):\n    def yolox_warm_cos_lr(lr, min_lr, total_iters, warmup_total_iters, warmup_lr_start, no_aug_iter, iters):\n        if iters <= warmup_total_iters:\n            # lr = (lr - warmup_lr_start) * iters / float(warmup_total_iters) + warmup_lr_start\n            lr = (lr - warmup_lr_start) * pow(iters / float(warmup_total_iters), 2) + warmup_lr_start\n        elif iters >= total_iters - no_aug_iter:\n            lr = min_lr\n        else:\n            lr = min_lr + 0.5 * (lr - min_lr) * (\n                1.0\n                + math.cos(\n                    math.pi\n                    * (iters - warmup_total_iters)\n                    / (total_iters - warmup_total_iters - no_aug_iter)\n                )\n            )\n        return lr\n\n    def step_lr(lr, decay_rate, step_size, iters):\n        if step_size < 1:\n            raise ValueError(\"step_size must be above 1.\")\n        n       = iters // step_size\n        out_lr  = lr * decay_rate ** n\n        return out_lr\n\n    if lr_decay_type == \"cos\":\n        warmup_total_iters  = min(max(warmup_iters_ratio * total_iters, 1), 
3)\n        warmup_lr_start     = max(warmup_lr_ratio * lr, 1e-6)\n        no_aug_iter         = min(max(no_aug_iter_ratio * total_iters, 1), 15)\n        func = partial(yolox_warm_cos_lr, lr, min_lr, total_iters, warmup_total_iters, warmup_lr_start, no_aug_iter)\n    else:\n        decay_rate  = (min_lr / lr) ** (1 / (step_num - 1))\n        step_size   = total_iters / step_num\n        func = partial(step_lr, lr, decay_rate, step_size)\n\n    return func\n\ndef set_optimizer_lr(optimizer, lr_scheduler_func, epoch):\n    lr = lr_scheduler_func(epoch)\n    for param_group in optimizer.param_groups:\n        param_group['lr'] = lr\n"
  },
  {
    "path": "predict.py",
    "content": "#-----------------------------------------------------------------------#\n#   predict.py将单张图片预测、摄像头检测、FPS测试和目录遍历检测等功能\n#   整合到了一个py文件中，通过指定mode进行模式的修改。\n#-----------------------------------------------------------------------#\nimport time\n\nimport cv2\nimport numpy as np\nfrom PIL import Image\n\nfrom yolo import YOLO\n\nif __name__ == \"__main__\":\n    yolo = YOLO()\n    #----------------------------------------------------------------------------------------------------------#\n    #   mode用于指定测试的模式：\n    #   'predict'           表示单张图片预测，如果想对预测过程进行修改，如保存图片，截取对象等，可以先看下方详细的注释\n    #   'video'             表示视频检测，可调用摄像头或者视频进行检测，详情查看下方注释。\n    #   'fps'               表示测试fps，使用的图片是img里面的street.jpg，详情查看下方注释。\n    #   'dir_predict'       表示遍历文件夹进行检测并保存。默认遍历img文件夹，保存img_out文件夹，详情查看下方注释。\n    #   'heatmap'           表示进行预测结果的热力图可视化，详情查看下方注释。\n    #   'export_onnx'       表示将模型导出为onnx，需要pytorch1.7.1以上。\n    #----------------------------------------------------------------------------------------------------------#\n    mode = \"predict\"\n    #-------------------------------------------------------------------------#\n    #   crop                指定了是否在单张图片预测后对目标进行截取\n    #   count               指定了是否进行目标的计数\n    #   crop、count仅在mode='predict'时有效\n    #-------------------------------------------------------------------------#\n    crop            = False\n    count           = False\n    #----------------------------------------------------------------------------------------------------------#\n    #   video_path          用于指定视频的路径，当video_path=0时表示检测摄像头\n    #                       想要检测视频，则设置如video_path = \"xxx.mp4\"即可，代表读取出根目录下的xxx.mp4文件。\n    #   video_save_path     表示视频保存的路径，当video_save_path=\"\"时表示不保存\n    #                       想要保存视频，则设置如video_save_path = \"yyy.mp4\"即可，代表保存为根目录下的yyy.mp4文件。\n    #   video_fps           用于保存的视频的fps\n    #\n    #   video_path、video_save_path和video_fps仅在mode='video'时有效\n    #   保存视频时需要ctrl+c退出或者运行到最后一帧才会完成完整的保存步骤。\n   
 #----------------------------------------------------------------------------------------------------------#\n    video_path      = 0\n    video_save_path = \"\"\n    video_fps       = 25.0\n    #----------------------------------------------------------------------------------------------------------#\n    #   test_interval       用于指定测量fps的时候，图片检测的次数。理论上test_interval越大，fps越准确。\n    #   fps_image_path      用于指定测试的fps图片\n    #   \n    #   test_interval和fps_image_path仅在mode='fps'有效\n    #----------------------------------------------------------------------------------------------------------#\n    test_interval   = 100\n    fps_image_path  = \"img/street.jpg\"\n    #-------------------------------------------------------------------------#\n    #   dir_origin_path     指定了用于检测的图片的文件夹路径\n    #   dir_save_path       指定了检测完图片的保存路径\n    #   \n    #   dir_origin_path和dir_save_path仅在mode='dir_predict'时有效\n    #-------------------------------------------------------------------------#\n    dir_origin_path = \"img/\"\n    dir_save_path   = \"img_out/\"\n    #-------------------------------------------------------------------------#\n    #   heatmap_save_path   热力图的保存路径，默认保存在model_data下\n    #   \n    #   heatmap_save_path仅在mode='heatmap'有效\n    #-------------------------------------------------------------------------#\n    heatmap_save_path = \"model_data/heatmap_vision.png\"\n    #-------------------------------------------------------------------------#\n    #   simplify            使用Simplify onnx\n    #   onnx_save_path      指定了onnx的保存路径\n    #-------------------------------------------------------------------------#\n    simplify        = True\n    onnx_save_path  = \"model_data/models.onnx\"\n\n    if mode == \"predict\":\n        '''\n        1、如果想要进行检测完的图片的保存，利用r_image.save(\"img.jpg\")即可保存，直接在predict.py里进行修改即可。 \n        2、如果想要获得预测框的坐标，可以进入yolo.detect_image函数，在绘图部分读取top，left，bottom，right这四个值。\n        
3、如果想要利用预测框截取下目标，可以进入yolo.detect_image函数，在绘图部分利用获取到的top，left，bottom，right这四个值\n        在原图上利用矩阵的方式进行截取。\n        4、如果想要在预测图上写额外的字，比如检测到的特定目标的数量，可以进入yolo.detect_image函数，在绘图部分对predicted_class进行判断，\n        比如判断if predicted_class == 'car': 即可判断当前目标是否为车，然后记录数量即可。利用draw.text即可写字。\n        '''\n        while True:\n            img = input('Input image filename:')\n            try:\n                image = Image.open(img)\n            except:\n                print('Open Error! Try again!')\n                continue\n            else:\n                r_image = yolo.detect_image(image, crop = crop, count=count)\n                r_image.show()\n\n    elif mode == \"video\":\n        capture = cv2.VideoCapture(video_path)\n        if video_save_path!=\"\":\n            fourcc  = cv2.VideoWriter_fourcc(*'XVID')\n            size    = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))\n            out     = cv2.VideoWriter(video_save_path, fourcc, video_fps, size)\n\n        ref, frame = capture.read()\n        if not ref:\n            raise ValueError(\"未能正确读取摄像头（视频），请注意是否正确安装摄像头（是否正确填写视频路径）。\")\n\n        fps = 0.0\n        while(True):\n            t1 = time.time()\n            # 读取某一帧\n            ref, frame = capture.read()\n            if not ref:\n                break\n            # 格式转变，BGRtoRGB\n            frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)\n            # 转变成Image\n            frame = Image.fromarray(np.uint8(frame))\n            # 进行检测\n            frame = np.array(yolo.detect_image(frame))\n            # RGBtoBGR满足opencv显示格式\n            frame = cv2.cvtColor(frame,cv2.COLOR_RGB2BGR)\n            \n            fps  = ( fps + (1./(time.time()-t1)) ) / 2\n            print(\"fps= %.2f\"%(fps))\n            frame = cv2.putText(frame, \"fps= %.2f\"%(fps), (0, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)\n            \n            cv2.imshow(\"video\",frame)\n            c= cv2.waitKey(1) & 0xff \n            if 
video_save_path!=\"\":\n                out.write(frame)\n\n            if c==27:\n                capture.release()\n                break\n\n        print(\"Video Detection Done!\")\n        capture.release()\n        if video_save_path!=\"\":\n            print(\"Save processed video to the path :\" + video_save_path)\n            out.release()\n        cv2.destroyAllWindows()\n        \n    elif mode == \"fps\":\n        img = Image.open(fps_image_path)\n        tact_time = yolo.get_FPS(img, test_interval)\n        print(str(tact_time) + ' seconds, ' + str(1/tact_time) + 'FPS, @batch_size 1')\n\n    elif mode == \"dir_predict\":\n        import os\n\n        from tqdm import tqdm\n\n        img_names = os.listdir(dir_origin_path)\n        for img_name in tqdm(img_names):\n            if img_name.lower().endswith(('.bmp', '.dib', '.png', '.jpg', '.jpeg', '.pbm', '.pgm', '.ppm', '.tif', '.tiff')):\n                image_path  = os.path.join(dir_origin_path, img_name)\n                image       = Image.open(image_path)\n                r_image     = yolo.detect_image(image)\n                if not os.path.exists(dir_save_path):\n                    os.makedirs(dir_save_path)\n                r_image.save(os.path.join(dir_save_path, img_name.replace(\".jpg\", \".png\")), quality=95, subsampling=0)\n\n    elif mode == \"heatmap\":\n        while True:\n            img = input('Input image filename:')\n            try:\n                image = Image.open(img)\n            except:\n                print('Open Error! Try again!')\n                continue\n            else:\n                yolo.detect_heatmap(image, heatmap_save_path)\n                \n    elif mode == \"export_onnx\":\n        yolo.convert_to_onnx(simplify, onnx_save_path)\n        \n    else:\n        raise AssertionError(\"Please specify the correct mode: 'predict', 'video', 'fps', 'heatmap', 'export_onnx', 'dir_predict'.\")\n"
  },
  {
    "path": "requirements.txt",
    "content": "scipy==1.9.1\nnumpy==1.23.1\nmatplotlib==3.4.3\nopencv_python==4.7.0\ntorch==1.10.1\ntorchvision==0.11.2\ntqdm==4.62.2\nPillow==9.3.0\nh5py==2.10.0\n"
  },
  {
    "path": "summary.py",
    "content": "#--------------------------------------------#\n#   该部分代码用于看网络结构\n#--------------------------------------------#\nimport torch\nfrom thop import clever_format, profile\n\nfrom nets.yolo import YoloBody\n\nif __name__ == \"__main__\":\n    input_shape     = [640, 640]\n    anchors_mask    = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]\n    num_classes     = 80\n    phi             = 'l'\n    \n    device  = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n    m       = YoloBody(anchors_mask, num_classes, phi, False).to(device)\n    for i in m.children():\n        print(i)\n        print('==============================')\n    \n    dummy_input     = torch.randn(1, 3, input_shape[0], input_shape[1]).to(device)\n    flops, params   = profile(m.to(device), (dummy_input, ), verbose=False)\n    #--------------------------------------------------------#\n    #   flops * 2是因为profile没有将卷积作为两个operations\n    #   有些论文将卷积算乘法、加法两个operations。此时乘2\n    #   有些论文只考虑乘法的运算次数，忽略加法。此时不乘2\n    #   本代码选择乘2，参考YOLOX。\n    #--------------------------------------------------------#\n    flops           = flops * 2\n    flops, params   = clever_format([flops, params], \"%.3f\")\n    print('Total GFLOPS: %s' % (flops))\n    print('Total params: %s' % (params))\n"
  },
  {
    "path": "train.py",
    "content": "#-------------------------------------#\n#       对数据集进行训练\n#-------------------------------------#\nimport datetime\nimport os\n\nimport numpy as np\nimport torch\nimport torch.backends.cudnn as cudnn\nimport torch.distributed as dist\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\n\nfrom nets.yolo import YoloBody\nfrom nets.yolo_training import (ModelEMA, YOLOLoss, get_lr_scheduler,\n                                set_optimizer_lr, weights_init)\nfrom utils.callbacks import EvalCallback, LossHistory\nfrom utils.dataloader import YoloDataset, yolo_dataset_collate\nfrom utils.utils import download_weights, get_anchors, get_classes, show_config\nfrom utils.utils_fit import fit_one_epoch\n\n'''\n训练自己的目标检测模型一定需要注意以下几点：\n1、训练前仔细检查自己的格式是否满足要求，该库要求数据集格式为VOC格式，需要准备好的内容有输入图片和标签\n   输入图片为.jpg图片，无需固定大小，传入训练前会自动进行resize。\n   灰度图会自动转成RGB图片进行训练，无需自己修改。\n   输入图片如果后缀非jpg，需要自己批量转成jpg后再开始训练。\n\n   标签为.xml格式，文件中会有需要检测的目标信息，标签文件和输入图片文件相对应。\n\n2、损失值的大小用于判断是否收敛，比较重要的是有收敛的趋势，即验证集损失不断下降，如果验证集损失基本上不改变的话，模型基本上就收敛了。\n   损失值的具体大小并没有什么意义，大和小只在于损失的计算方式，并不是接近于0才好。如果想要让损失好看点，可以直接到对应的损失函数里面除上10000。\n   训练过程中的损失值会保存在logs文件夹下的loss_%Y_%m_%d_%H_%M_%S文件夹中\n   \n3、训练好的权值文件保存在logs文件夹中，每个训练世代（Epoch）包含若干训练步长（Step），每个训练步长（Step）进行一次梯度下降。\n   如果只是训练了几个Step是不会保存的，Epoch和Step的概念要捋清楚一下。\n'''\nif __name__ == \"__main__\":\n    #---------------------------------#\n    #   Cuda    是否使用Cuda\n    #           没有GPU可以设置成False\n    #---------------------------------#\n    Cuda            = False\n    #---------------------------------------------------------------------#\n    #   distributed     用于指定是否使用单机多卡分布式运行\n    #                   终端指令仅支持Ubuntu。CUDA_VISIBLE_DEVICES用于在Ubuntu下指定显卡。\n    #                   Windows系统下默认使用DP模式调用所有显卡，不支持DDP。\n    #   DP模式：\n    #       设置            distributed = False\n    #       在终端中输入    CUDA_VISIBLE_DEVICES=0,1 python train.py\n    #   DDP模式：\n    #       设置            distributed = True\n    #       在终端中输入    
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py\n    #---------------------------------------------------------------------#\n    distributed     = False\n    #---------------------------------------------------------------------#\n    #   sync_bn     是否使用sync_bn，DDP模式多卡可用\n    #---------------------------------------------------------------------#\n    sync_bn         = False\n    #---------------------------------------------------------------------#\n    #   fp16        是否使用混合精度训练\n    #               可减少约一半的显存、需要pytorch1.7.1以上\n    #---------------------------------------------------------------------#\n    fp16            = False\n    #---------------------------------------------------------------------#\n    #   classes_path    指向model_data下的txt，与自己训练的数据集相关 \n    #                   训练前一定要修改classes_path，使其对应自己的数据集\n    #---------------------------------------------------------------------#\n    classes_path    = 'model_data/ssdd_classes.txt'\n    #---------------------------------------------------------------------#\n    #   anchors_path    代表先验框对应的txt文件，一般不修改。\n    #   anchors_mask    用于帮助代码找到对应的先验框，一般不修改。\n    #---------------------------------------------------------------------#\n    anchors_path    = 'model_data/yolo_anchors.txt'\n    anchors_mask    = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]\n    #----------------------------------------------------------------------------------------------------------------------------#\n    #   权值文件的下载请看README，可以通过网盘下载。模型的 预训练权重 对不同数据集是通用的，因为特征是通用的。\n    #   模型的 预训练权重 比较重要的部分是 主干特征提取网络的权值部分，用于进行特征提取。\n    #   预训练权重对于99%的情况都必须要用，不用的话主干部分的权值太过随机，特征提取效果不明显，网络训练的结果也不会好\n    #\n    #   如果训练过程中存在中断训练的操作，可以将model_path设置成logs文件夹下的权值文件，将已经训练了一部分的权值再次载入。\n    #   同时修改下方的 冻结阶段 或者 解冻阶段 的参数，来保证模型epoch的连续性。\n    #   \n    #   当model_path = ''的时候不加载整个模型的权值。\n    #\n    #   此处使用的是整个模型的权重，因此是在train.py进行加载的。\n    #   如果想要让模型从0开始训练，则设置model_path = ''，下面的Freeze_Train = False，此时从0开始训练，且没有冻结主干的过程。\n    #  
 \n    #   一般来讲，网络从0开始的训练效果会很差，因为权值太过随机，特征提取效果不明显，因此非常、非常、非常不建议大家从0开始训练！\n    #   从0开始训练有两个方案：\n    #   1、得益于Mosaic数据增强方法强大的数据增强能力，将UnFreeze_Epoch设置的较大（300及以上）、batch较大（16及以上）、数据较多（万以上）的情况下，\n    #      可以设置mosaic=True，直接随机初始化参数开始训练，但得到的效果仍然不如有预训练的情况。（像COCO这样的大数据集可以这样做）\n    #   2、了解imagenet数据集，首先训练分类模型，获得网络的主干部分权值，分类模型的 主干部分 和该模型通用，基于此进行训练。\n    #----------------------------------------------------------------------------------------------------------------------------#\n    model_path      = ''\n    #------------------------------------------------------#\n    #   input_shape     输入的shape大小，一定要是32的倍数\n    #------------------------------------------------------#\n    input_shape     = [640, 640]\n    #------------------------------------------------------#\n    #   phi             所使用到的yolov7的版本，本仓库一共提供两个：\n    #                   l : 对应yolov7\n    #                   x : 对应yolov7_x\n    #------------------------------------------------------#\n    phi             = 'l'\n    #----------------------------------------------------------------------------------------------------------------------------#\n    #   pretrained      是否使用主干网络的预训练权重，此处使用的是主干的权重，因此是在模型构建的时候进行加载的。\n    #                   如果设置了model_path，则主干的权值无需加载，pretrained的值无意义。\n    #                   如果不设置model_path，pretrained = True，此时仅加载主干开始训练。\n    #                   如果不设置model_path，pretrained = False，Freeze_Train = False，此时从0开始训练，且没有冻结主干的过程。\n    #----------------------------------------------------------------------------------------------------------------------------#\n    pretrained      = True\n    #------------------------------------------------------------------#\n    #   mosaic              马赛克数据增强。\n    #   mosaic_prob         每个step有多少概率使用mosaic数据增强，默认50%。\n    #\n    #   mixup               是否使用mixup数据增强，仅在mosaic=True时有效。\n    #                       只会对mosaic增强后的图片进行mixup的处理。\n    #   mixup_prob          有多少概率在mosaic后使用mixup数据增强，默认50%。\n    #                       总的mixup概率为mosaic_prob * 
mixup_prob。\n    #\n    #   special_aug_ratio   参考YoloX，由于Mosaic生成的训练图片，远远脱离自然图片的真实分布。\n    #                       当mosaic=True时，本代码会在special_aug_ratio范围内开启mosaic。\n    #                       默认为前70%个epoch，100个世代会开启70个世代。\n    #------------------------------------------------------------------#\n    mosaic              = True\n    mosaic_prob         = 0.5\n    mixup               = False\n    mixup_prob          = 0.5\n    special_aug_ratio   = 0.7\n    #------------------------------------------------------------------#\n    #   label_smoothing     标签平滑。一般0.01以下。如0.01、0.005。\n    #------------------------------------------------------------------#\n    label_smoothing     = 0\n\n    #----------------------------------------------------------------------------------------------------------------------------#\n    #   训练分为两个阶段，分别是冻结阶段和解冻阶段。设置冻结阶段是为了满足机器性能不足的同学的训练需求。\n    #   冻结训练需要的显存较小，显卡非常差的情况下，可设置Freeze_Epoch等于UnFreeze_Epoch，Freeze_Train = True，此时仅仅进行冻结训练。\n    #      \n    #   在此提供若干参数设置建议，各位训练者根据自己的需求进行灵活调整：\n    #   （一）从整个模型的预训练权重开始训练： \n    #       Adam：\n    #           Init_Epoch = 0，Freeze_Epoch = 50，UnFreeze_Epoch = 100，Freeze_Train = True，optimizer_type = 'adam'，Init_lr = 1e-3，weight_decay = 0。（冻结）\n    #           Init_Epoch = 0，UnFreeze_Epoch = 100，Freeze_Train = False，optimizer_type = 'adam'，Init_lr = 1e-3，weight_decay = 0。（不冻结）\n    #       SGD：\n    #           Init_Epoch = 0，Freeze_Epoch = 50，UnFreeze_Epoch = 300，Freeze_Train = True，optimizer_type = 'sgd'，Init_lr = 1e-2，weight_decay = 5e-4。（冻结）\n    #           Init_Epoch = 0，UnFreeze_Epoch = 300，Freeze_Train = False，optimizer_type = 'sgd'，Init_lr = 1e-2，weight_decay = 5e-4。（不冻结）\n    #       其中：UnFreeze_Epoch可以在100-300之间调整。\n    #   （二）从0开始训练：\n    #       Init_Epoch = 0，UnFreeze_Epoch >= 300，Unfreeze_batch_size >= 16，Freeze_Train = False（不冻结训练）\n    #       其中：UnFreeze_Epoch尽量不小于300。optimizer_type = 'sgd'，Init_lr = 1e-2，mosaic = True。\n    #   （三）batch_size的设置：\n    #       
在显卡能够接受的范围内，以大为好。显存不足与数据集大小无关，提示显存不足（OOM或者CUDA out of memory）请调小batch_size。\n    #       受到BatchNorm层影响，batch_size最小为2，不能为1。\n    #       正常情况下Freeze_batch_size建议为Unfreeze_batch_size的1-2倍。不建议设置的差距过大，因为关系到学习率的自动调整。\n    #----------------------------------------------------------------------------------------------------------------------------#\n    #------------------------------------------------------------------#\n    #   冻结阶段训练参数\n    #   此时模型的主干被冻结了，特征提取网络不发生改变\n    #   占用的显存较小，仅对网络进行微调\n    #   Init_Epoch          模型当前开始的训练世代，其值可以大于Freeze_Epoch，如设置：\n    #                       Init_Epoch = 60、Freeze_Epoch = 50、UnFreeze_Epoch = 100\n    #                       会跳过冻结阶段，直接从60代开始，并调整对应的学习率。\n    #                       （断点续练时使用）\n    #   Freeze_Epoch        模型冻结训练的Freeze_Epoch\n    #                       (当Freeze_Train=False时失效)\n    #   Freeze_batch_size   模型冻结训练的batch_size\n    #                       (当Freeze_Train=False时失效)\n    #------------------------------------------------------------------#\n    Init_Epoch          = 0\n    Freeze_Epoch        = 50\n    Freeze_batch_size   = 8\n    #------------------------------------------------------------------#\n    #   解冻阶段训练参数\n    #   此时模型的主干不被冻结了，特征提取网络会发生改变\n    #   占用的显存较大，网络所有的参数都会发生改变\n    #   UnFreeze_Epoch          模型总共训练的epoch\n    #                           SGD需要更长的时间收敛，因此设置较大的UnFreeze_Epoch\n    #                           Adam可以使用相对较小的UnFreeze_Epoch\n    #   Unfreeze_batch_size     模型在解冻后的batch_size\n    #------------------------------------------------------------------#\n    UnFreeze_Epoch      = 100\n    Unfreeze_batch_size = 4\n    #------------------------------------------------------------------#\n    #   Freeze_Train    是否进行冻结训练\n    #                   默认先冻结主干训练后解冻训练。\n    #------------------------------------------------------------------#\n    Freeze_Train        = True\n\n    #------------------------------------------------------------------#\n    #   其它训练参数：学习率、优化器、学习率下降有关\n    
#------------------------------------------------------------------#\n    #------------------------------------------------------------------#\n    #   Init_lr         模型的最大学习率\n    #   Min_lr          模型的最小学习率，默认为最大学习率的0.01\n    #------------------------------------------------------------------#\n    Init_lr             = 1e-3\n    Min_lr              = Init_lr * 0.01\n    #------------------------------------------------------------------#\n    #   optimizer_type  使用到的优化器种类，可选的有adam、sgd\n    #                   当使用Adam优化器时建议设置  Init_lr=1e-3\n    #                   当使用SGD优化器时建议设置   Init_lr=1e-2\n    #   momentum        优化器内部使用到的momentum参数\n    #   weight_decay    权值衰减，可防止过拟合\n    #                   adam会导致weight_decay错误，使用adam时建议设置为0。\n    #------------------------------------------------------------------#\n    optimizer_type      = \"adam\"\n    momentum            = 0.937\n    weight_decay        = 0\n    #------------------------------------------------------------------#\n    #   lr_decay_type   使用到的学习率下降方式，可选的有step、cos\n    #------------------------------------------------------------------#\n    lr_decay_type       = \"step\"\n    #------------------------------------------------------------------#\n    #   save_period     多少个epoch保存一次权值\n    #------------------------------------------------------------------#\n    save_period         = 10\n    #------------------------------------------------------------------#\n    #   save_dir        权值与日志文件保存的文件夹\n    #------------------------------------------------------------------#\n    save_dir            = 'logs'\n    #------------------------------------------------------------------#\n    #   eval_flag       是否在训练时进行评估，评估对象为验证集\n    #                   安装pycocotools库后，评估体验更佳。\n    #   eval_period     代表多少个epoch评估一次，不建议频繁的评估\n    #                   评估需要消耗较多的时间，频繁评估会导致训练非常慢\n    #   此处获得的mAP会与get_map.py获得的会有所不同，原因有二：\n    #   （一）此处获得的mAP为验证集的mAP。\n    #   （二）此处设置评估参数较为保守，目的是加快评估速度。\n    
#------------------------------------------------------------------#\n    eval_flag           = True\n    eval_period         = 10\n    #------------------------------------------------------------------#\n    #   num_workers     用于设置是否使用多线程读取数据\n    #                   开启后会加快数据读取速度，但是会占用更多内存\n    #                   内存较小的电脑可以设置为2或者0  \n    #------------------------------------------------------------------#\n    num_workers         = 4\n\n    #------------------------------------------------------#\n    #   train_annotation_path   训练图片路径和标签\n    #   val_annotation_path     验证图片路径和标签\n    #------------------------------------------------------#\n    train_annotation_path   = '2007_train.txt'\n    val_annotation_path     = '2007_val.txt'\n\n    #------------------------------------------------------#\n    #   设置用到的显卡\n    #------------------------------------------------------#\n    ngpus_per_node  = torch.cuda.device_count()\n    if distributed:\n        dist.init_process_group(backend=\"nccl\")\n        local_rank  = int(os.environ[\"LOCAL_RANK\"])\n        rank        = int(os.environ[\"RANK\"])\n        device      = torch.device(\"cuda\", local_rank)\n        if local_rank == 0:\n            print(f\"[{os.getpid()}] (rank = {rank}, local_rank = {local_rank}) training...\")\n            print(\"Gpu Device Count : \", ngpus_per_node)\n    else:\n        device          = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n        local_rank      = 0\n        rank            = 0\n\n    #------------------------------------------------------#\n    #   获取classes和anchor\n    #------------------------------------------------------#\n    class_names, num_classes = get_classes(classes_path)\n    anchors, num_anchors     = get_anchors(anchors_path)\n\n    #----------------------------------------------------#\n    #   下载预训练权重\n    #----------------------------------------------------#\n    if pretrained:\n        if distributed:\n            if local_rank == 
0:\n                download_weights(phi)  \n            dist.barrier()\n        else:\n            download_weights(phi)\n            \n    #------------------------------------------------------#\n    #   创建yolo模型\n    #------------------------------------------------------#\n    model = YoloBody(anchors_mask, num_classes, phi, pretrained=pretrained)\n    if not pretrained:\n        weights_init(model)\n    if model_path != '':\n        #------------------------------------------------------#\n        #   权值文件请看README，百度网盘下载\n        #------------------------------------------------------#\n        if local_rank == 0:\n            print('Load weights {}.'.format(model_path))\n        \n        #------------------------------------------------------#\n        #   根据预训练权重的Key和模型的Key进行加载\n        #------------------------------------------------------#\n        model_dict      = model.state_dict()\n        pretrained_dict = torch.load(model_path, map_location = device)\n        load_key, no_load_key, temp_dict = [], [], {}\n        for k, v in pretrained_dict.items():\n            if k in model_dict.keys() and np.shape(model_dict[k]) == np.shape(v):\n                temp_dict[k] = v\n                load_key.append(k)\n            else:\n                no_load_key.append(k)\n        model_dict.update(temp_dict)\n        model.load_state_dict(model_dict)\n        #------------------------------------------------------#\n        #   显示没有匹配上的Key\n        #------------------------------------------------------#\n        if local_rank == 0:\n            print(\"\\nSuccessful Load Key:\", str(load_key)[:500], \"……\\nSuccessful Load Key Num:\", len(load_key))\n            print(\"\\nFail To Load Key:\", str(no_load_key)[:500], \"……\\nFail To Load Key num:\", len(no_load_key))\n            print(\"\\n\\033[1;33;44m温馨提示，head部分没有载入是正常现象，Backbone部分没有载入是错误的。\\033[0m\")\n\n    #----------------------#\n    #   获得损失函数\n    #----------------------#\n    yolo_loss    = 
YOLOLoss(anchors, num_classes, input_shape, anchors_mask, label_smoothing)\n    #----------------------#\n    #   记录Loss\n    #----------------------#\n    if local_rank == 0:\n        time_str        = datetime.datetime.strftime(datetime.datetime.now(),'%Y_%m_%d_%H_%M_%S')\n        log_dir         = os.path.join(save_dir, \"loss_\" + str(time_str))\n        loss_history    = LossHistory(log_dir, model, input_shape=input_shape)\n    else:\n        loss_history    = None\n        \n    #------------------------------------------------------------------#\n    #   torch 1.2不支持amp，建议使用torch 1.7.1及以上正确使用fp16\n    #   因此torch1.2这里显示\"could not be resolved\"\n    #------------------------------------------------------------------#\n    if fp16:\n        from torch.cuda.amp import GradScaler\n        scaler = GradScaler()\n    else:\n        scaler = None\n\n    model_train     = model.train()\n    #----------------------------#\n    #   多卡同步Bn\n    #----------------------------#\n    if sync_bn and ngpus_per_node > 1 and distributed:\n        model_train = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model_train)\n    elif sync_bn:\n        print(\"Sync_bn is not supported with a single GPU or without distributed mode.\")\n\n    if Cuda:\n        if distributed:\n            #----------------------------#\n            #   多卡平行运行\n            #----------------------------#\n            model_train = model_train.cuda(local_rank)\n            model_train = torch.nn.parallel.DistributedDataParallel(model_train, device_ids=[local_rank], find_unused_parameters=True)\n        else:\n            model_train = torch.nn.DataParallel(model)\n            cudnn.benchmark = True\n            model_train = model_train.cuda()\n            \n    #----------------------------#\n    #   权值平滑\n    #----------------------------#\n    ema = ModelEMA(model_train)\n    \n    #---------------------------#\n    #   读取数据集对应的txt\n    #---------------------------#\n    with open(train_annotation_path, 
encoding='utf-8') as f:\n        train_lines = f.readlines()\n    with open(val_annotation_path, encoding='utf-8') as f:\n        val_lines   = f.readlines()\n    num_train   = len(train_lines)\n    num_val     = len(val_lines)\n\n    if local_rank == 0:\n        show_config(\n            classes_path = classes_path, anchors_path = anchors_path, anchors_mask = anchors_mask, model_path = model_path, input_shape = input_shape, \\\n            Init_Epoch = Init_Epoch, Freeze_Epoch = Freeze_Epoch, UnFreeze_Epoch = UnFreeze_Epoch, Freeze_batch_size = Freeze_batch_size, Unfreeze_batch_size = Unfreeze_batch_size, Freeze_Train = Freeze_Train, \\\n            Init_lr = Init_lr, Min_lr = Min_lr, optimizer_type = optimizer_type, momentum = momentum, lr_decay_type = lr_decay_type, \\\n            save_period = save_period, save_dir = save_dir, num_workers = num_workers, num_train = num_train, num_val = num_val\n        )\n        #---------------------------------------------------------#\n        #   总训练世代指的是遍历全部数据的总次数\n        #   总训练步长指的是梯度下降的总次数 \n        #   每个训练世代包含若干训练步长，每个训练步长进行一次梯度下降。\n        #   此处仅建议最低训练世代，上不封顶，计算时只考虑了解冻部分\n        #----------------------------------------------------------#\n        wanted_step = 5e4 if optimizer_type == \"sgd\" else 1.5e4\n        total_step  = num_train // Unfreeze_batch_size * UnFreeze_Epoch\n        if total_step <= wanted_step:\n            if num_train // Unfreeze_batch_size == 0:\n                raise ValueError('数据集过小，无法进行训练，请扩充数据集。')\n            wanted_epoch = wanted_step // (num_train // Unfreeze_batch_size) + 1\n            print(\"\\n\\033[1;33;44m[Warning] 使用%s优化器时，建议将训练总步长设置到%d以上。\\033[0m\"%(optimizer_type, wanted_step))\n            print(\"\\033[1;33;44m[Warning] 本次运行的总训练数据量为%d，Unfreeze_batch_size为%d，共训练%d个Epoch，计算出总训练步长为%d。\\033[0m\"%(num_train, Unfreeze_batch_size, UnFreeze_Epoch, total_step))\n            print(\"\\033[1;33;44m[Warning] 由于总训练步长为%d，小于建议总步长%d，建议设置总世代为%d。\\033[0m\"%(total_step, wanted_step, 
wanted_epoch))\n\n    #------------------------------------------------------#\n    #   主干特征提取网络特征通用，冻结训练可以加快训练速度\n    #   也可以在训练初期防止权值被破坏。\n    #   Init_Epoch为起始世代\n    #   Freeze_Epoch为冻结训练的世代\n    #   UnFreeze_Epoch总训练世代\n    #   提示OOM或者显存不足请调小Batch_size\n    #------------------------------------------------------#\n    if True:\n        UnFreeze_flag = False\n        #------------------------------------#\n        #   冻结一定部分训练\n        #------------------------------------#\n        if Freeze_Train:\n            for param in model.backbone.parameters():\n                param.requires_grad = False\n\n        #-------------------------------------------------------------------#\n        #   如果不冻结训练的话，直接设置batch_size为Unfreeze_batch_size\n        #-------------------------------------------------------------------#\n        batch_size = Freeze_batch_size if Freeze_Train else Unfreeze_batch_size\n\n        #-------------------------------------------------------------------#\n        #   判断当前batch_size，自适应调整学习率\n        #-------------------------------------------------------------------#\n        nbs             = 64\n        lr_limit_max    = 1e-3 if optimizer_type == 'adam' else 5e-2\n        lr_limit_min    = 3e-4 if optimizer_type == 'adam' else 5e-4\n        Init_lr_fit     = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max)\n        Min_lr_fit      = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2)\n\n        #---------------------------------------#\n        #   根据optimizer_type选择优化器\n        #---------------------------------------#\n        pg0, pg1, pg2 = [], [], []  \n        for k, v in model.named_modules():\n            if hasattr(v, \"bias\") and isinstance(v.bias, nn.Parameter):\n                pg2.append(v.bias)    \n            if isinstance(v, nn.BatchNorm2d) or \"bn\" in k:\n                pg0.append(v.weight)    \n            elif hasattr(v, \"weight\") and isinstance(v.weight, nn.Parameter):\n      
          pg1.append(v.weight)   \n        optimizer = {\n            'adam'  : optim.Adam(pg0, Init_lr_fit, betas = (momentum, 0.999)),\n            'sgd'   : optim.SGD(pg0, Init_lr_fit, momentum = momentum, nesterov=True)\n        }[optimizer_type]\n        optimizer.add_param_group({\"params\": pg1, \"weight_decay\": weight_decay})\n        optimizer.add_param_group({\"params\": pg2})\n\n        #---------------------------------------#\n        #   获得学习率下降的公式\n        #---------------------------------------#\n        lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_Epoch)\n        \n        #---------------------------------------#\n        #   判断每一个世代的长度\n        #---------------------------------------#\n        epoch_step      = num_train // batch_size\n        epoch_step_val  = num_val // batch_size\n        \n        if epoch_step == 0 or epoch_step_val == 0:\n            raise ValueError(\"数据集过小，无法继续进行训练，请扩充数据集。\")\n\n        if ema:\n            ema.updates     = epoch_step * Init_Epoch\n        \n        #---------------------------------------#\n        #   构建数据集加载器。\n        #---------------------------------------#\n        train_dataset   = YoloDataset(train_lines, input_shape, num_classes, anchors, anchors_mask, epoch_length=UnFreeze_Epoch, \\\n                                        mosaic=mosaic, mixup=mixup, mosaic_prob=mosaic_prob, mixup_prob=mixup_prob, train=True, special_aug_ratio=special_aug_ratio)\n        val_dataset     = YoloDataset(val_lines, input_shape, num_classes, anchors, anchors_mask, epoch_length=UnFreeze_Epoch, \\\n                                        mosaic=False, mixup=False, mosaic_prob=0, mixup_prob=0, train=False, special_aug_ratio=0)\n        \n        if distributed:\n            train_sampler   = torch.utils.data.distributed.DistributedSampler(train_dataset, shuffle=True,)\n            val_sampler     = torch.utils.data.distributed.DistributedSampler(val_dataset, 
shuffle=False,)\n            batch_size      = batch_size // ngpus_per_node\n            shuffle         = False\n        else:\n            train_sampler   = None\n            val_sampler     = None\n            shuffle         = True\n\n        gen             = DataLoader(train_dataset, shuffle = shuffle, batch_size = batch_size, num_workers = num_workers, pin_memory=True,\n                                    drop_last=True, collate_fn=yolo_dataset_collate, sampler=train_sampler)\n        gen_val         = DataLoader(val_dataset  , shuffle = shuffle, batch_size = batch_size, num_workers = num_workers, pin_memory=True, \n                                    drop_last=True, collate_fn=yolo_dataset_collate, sampler=val_sampler)\n\n        #----------------------#\n        #   记录eval的map曲线\n        #----------------------#\n        if local_rank == 0:\n            eval_callback   = EvalCallback(model, input_shape, anchors, anchors_mask, class_names, num_classes, val_lines, log_dir, Cuda, \\\n                                            eval_flag=eval_flag, period=eval_period)\n        else:\n            eval_callback   = None\n        \n        #---------------------------------------#\n        #   开始模型训练\n        #---------------------------------------#\n        for epoch in range(Init_Epoch, UnFreeze_Epoch):\n            #---------------------------------------#\n            #   如果模型有冻结学习部分\n            #   则解冻，并设置参数\n            #---------------------------------------#\n            if epoch >= Freeze_Epoch and not UnFreeze_flag and Freeze_Train:\n                batch_size = Unfreeze_batch_size\n\n                #-------------------------------------------------------------------#\n                #   判断当前batch_size，自适应调整学习率\n                #-------------------------------------------------------------------#\n                nbs             = 64\n                lr_limit_max    = 1e-3 if optimizer_type == 'adam' else 5e-2\n                lr_limit_min    = 
3e-4 if optimizer_type == 'adam' else 5e-4\n                Init_lr_fit     = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max)\n                Min_lr_fit      = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2)\n                #---------------------------------------#\n                #   获得学习率下降的公式\n                #---------------------------------------#\n                lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_Epoch)\n\n                for param in model.backbone.parameters():\n                    param.requires_grad = True\n\n                epoch_step      = num_train // batch_size\n                epoch_step_val  = num_val // batch_size\n\n                if epoch_step == 0 or epoch_step_val == 0:\n                    raise ValueError(\"数据集过小，无法继续进行训练，请扩充数据集。\")\n                    \n                if ema:\n                    ema.updates     = epoch_step * epoch\n\n                if distributed:\n                    batch_size  = batch_size // ngpus_per_node\n                    \n                gen             = DataLoader(train_dataset, shuffle = shuffle, batch_size = batch_size, num_workers = num_workers, pin_memory=True,\n                                            drop_last=True, collate_fn=yolo_dataset_collate, sampler=train_sampler)\n                gen_val         = DataLoader(val_dataset  , shuffle = shuffle, batch_size = batch_size, num_workers = num_workers, pin_memory=True, \n                                            drop_last=True, collate_fn=yolo_dataset_collate, sampler=val_sampler)\n\n                UnFreeze_flag   = True\n\n            gen.dataset.epoch_now       = epoch\n            gen_val.dataset.epoch_now   = epoch\n\n            if distributed:\n                train_sampler.set_epoch(epoch)\n\n            set_optimizer_lr(optimizer, lr_scheduler_func, epoch)\n\n            fit_one_epoch(model_train, model, ema, yolo_loss, loss_history, 
eval_callback, optimizer, epoch, epoch_step, epoch_step_val, gen, gen_val, UnFreeze_Epoch, Cuda, fp16, scaler, save_period, save_dir, local_rank)\n            \n            if distributed:\n                dist.barrier()\n\n        if local_rank == 0:\n            loss_history.writer.close()\n"
  },
  {
    "path": "utils/__init__.py",
    "content": "#"
  },
  {
    "path": "utils/callbacks.py",
    "content": "import datetime\nimport os\n\nimport torch\nimport matplotlib\nmatplotlib.use('Agg')\nimport scipy.signal\nfrom matplotlib import pyplot as plt\nfrom torch.utils.tensorboard import SummaryWriter\nfrom utils.utils_rbox import rbox2poly, poly2hbb\nimport shutil\nimport numpy as np\n\nfrom PIL import Image\nfrom tqdm import tqdm\nfrom .utils import cvtColor, preprocess_input, resize_image\nfrom .utils_bbox import DecodeBox\nfrom .utils_map import get_coco_map, get_map\n\n\nclass LossHistory():\n    def __init__(self, log_dir, model, input_shape):\n        self.log_dir    = log_dir\n        self.losses     = []\n        self.val_loss   = []\n        \n        os.makedirs(self.log_dir)\n        self.writer     = SummaryWriter(self.log_dir)\n        try:\n            dummy_input     = torch.randn(2, 3, input_shape[0], input_shape[1])\n            self.writer.add_graph(model, dummy_input)\n        except:\n            pass\n\n    def append_loss(self, epoch, loss, val_loss):\n        if not os.path.exists(self.log_dir):\n            os.makedirs(self.log_dir)\n\n        self.losses.append(loss)\n        self.val_loss.append(val_loss)\n\n        with open(os.path.join(self.log_dir, \"epoch_loss.txt\"), 'a') as f:\n            f.write(str(loss))\n            f.write(\"\\n\")\n        with open(os.path.join(self.log_dir, \"epoch_val_loss.txt\"), 'a') as f:\n            f.write(str(val_loss))\n            f.write(\"\\n\")\n\n        self.writer.add_scalar('loss', loss, epoch)\n        self.writer.add_scalar('val_loss', val_loss, epoch)\n        self.loss_plot()\n\n    def loss_plot(self):\n        iters = range(len(self.losses))\n\n        plt.figure()\n        plt.plot(iters, self.losses, 'red', linewidth = 2, label='train loss')\n        plt.plot(iters, self.val_loss, 'coral', linewidth = 2, label='val loss')\n        try:\n            if len(self.losses) < 25:\n                num = 5\n            else:\n                num = 15\n            \n            
plt.plot(iters, scipy.signal.savgol_filter(self.losses, num, 3), 'green', linestyle = '--', linewidth = 2, label='smooth train loss')\n            plt.plot(iters, scipy.signal.savgol_filter(self.val_loss, num, 3), '#8B4513', linestyle = '--', linewidth = 2, label='smooth val loss')\n        except:\n            pass\n\n        plt.grid(True)\n        plt.xlabel('Epoch')\n        plt.ylabel('Loss')\n        plt.legend(loc=\"upper right\")\n\n        plt.savefig(os.path.join(self.log_dir, \"epoch_loss.png\"))\n\n        plt.cla()\n        plt.close(\"all\")\n\nclass EvalCallback():\n    def __init__(self, net, input_shape, anchors, anchors_mask, class_names, num_classes, val_lines, log_dir, cuda, \\\n            map_out_path=\".temp_map_out\", max_boxes=100, confidence=0.05, nms_iou=0.5, letterbox_image=False, MINOVERLAP=0.5, eval_flag=True, period=1):\n        super(EvalCallback, self).__init__()\n        \n        self.net                = net\n        self.input_shape        = input_shape\n        self.anchors            = anchors\n        self.anchors_mask       = anchors_mask\n        self.class_names        = class_names\n        self.num_classes        = num_classes\n        self.val_lines          = val_lines\n        self.log_dir            = log_dir\n        self.cuda               = cuda\n        self.map_out_path       = map_out_path\n        self.max_boxes          = max_boxes\n        self.confidence         = confidence\n        self.nms_iou            = nms_iou\n        self.letterbox_image    = letterbox_image\n        self.MINOVERLAP         = MINOVERLAP\n        self.eval_flag          = eval_flag\n        self.period             = period\n        \n        self.bbox_util          = DecodeBox(self.anchors, self.num_classes, (self.input_shape[0], self.input_shape[1]), self.anchors_mask)\n        \n        self.maps       = [0]\n        self.epoches    = [0]\n        if self.eval_flag:\n            with open(os.path.join(self.log_dir, 
\"epoch_map.txt\"), 'a') as f:\n                f.write(str(0))\n                f.write(\"\\n\")\n\n    def get_map_txt(self, image_id, image, class_names, map_out_path):\n        f = open(os.path.join(map_out_path, \"detection-results/\"+image_id+\".txt\"), \"w\", encoding='utf-8') \n        image_shape = np.array(np.shape(image)[0:2])\n        #---------------------------------------------------------#\n        #   在这里将图像转换成RGB图像，防止灰度图在预测时报错。\n        #   代码仅仅支持RGB图像的预测，所有其它类型的图像都会转化成RGB\n        #---------------------------------------------------------#\n        image       = cvtColor(image)\n        #---------------------------------------------------------#\n        #   给图像增加灰条，实现不失真的resize\n        #   也可以直接resize进行识别\n        #---------------------------------------------------------#\n        image_data  = resize_image(image, (self.input_shape[1], self.input_shape[0]), self.letterbox_image)\n        #---------------------------------------------------------#\n        #   添加上batch_size维度\n        #---------------------------------------------------------#\n        image_data  = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0)\n\n        with torch.no_grad():\n            images = torch.from_numpy(image_data)\n            if self.cuda:\n                images = images.cuda()\n            #---------------------------------------------------------#\n            #   将图像输入网络当中进行预测！\n            #---------------------------------------------------------#\n            outputs = self.net(images)\n            outputs = self.bbox_util.decode_box(outputs)\n            #---------------------------------------------------------#\n            #   将预测框进行堆叠，然后进行非极大抑制\n            #---------------------------------------------------------#\n            results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, \n                        image_shape, self.letterbox_image, 
conf_thres = self.confidence, nms_thres = self.nms_iou)\n                                                    \n            if results[0] is None: \n                return \n\n            top_label   = np.array(results[0][:, 7], dtype = 'int32')\n            top_conf    = results[0][:, 5] * results[0][:, 6]\n            top_rboxes  = results[0][:, :5]\n            top_polys   = rbox2poly(top_rboxes)\n            top_hbbs    = poly2hbb(top_polys)\n        top_100     = np.argsort(top_conf)[::-1][:self.max_boxes]\n        top_hbbs    = top_hbbs[top_100]\n        top_conf    = top_conf[top_100]\n        top_label   = top_label[top_100]\n\n        for i, c in list(enumerate(top_label)):\n            predicted_class = self.class_names[int(c)]\n            hbb             = top_hbbs[i]\n            score           = str(top_conf[i])\n\n            xc, yc, w, h = hbb\n            left   = xc - w/2\n            top    = yc - h/2\n            right  = xc + w/2\n            bottom = yc + h/2\n            if predicted_class not in class_names:\n                continue\n\n            f.write(\"%s %s %s %s %s %s\\n\" % (predicted_class, score[:6], str(int(left)), str(int(top)), str(int(right)),str(int(bottom))))\n\n        f.close()\n        return \n    \n    def on_epoch_end(self, epoch, model_eval):\n        if epoch % self.period == 0 and self.eval_flag:\n            self.net = model_eval\n            if not os.path.exists(self.map_out_path):\n                os.makedirs(self.map_out_path)\n            if not os.path.exists(os.path.join(self.map_out_path, \"ground-truth\")):\n                os.makedirs(os.path.join(self.map_out_path, \"ground-truth\"))\n            if not os.path.exists(os.path.join(self.map_out_path, \"detection-results\")):\n                os.makedirs(os.path.join(self.map_out_path, \"detection-results\"))\n            print(\"Get map.\")\n            for annotation_line in tqdm(self.val_lines):\n                line        = annotation_line.split()\n   
             image_id    = os.path.basename(line[0]).split('.')[0]\n                #------------------------------#\n                #   Read the image and convert it to RGB\n                #------------------------------#\n                image       = Image.open(line[0])\n                #------------------------------#\n                #   Get the ground-truth boxes\n                #------------------------------#\n                gt_boxes    = np.array([np.array(list(map(float,box.split(',')))) for box in line[1:]])\n                #------------------------------#\n                #   Convert polygons to hbb\n                #------------------------------#\n                hbbs        = np.zeros((gt_boxes.shape[0], 5))\n                hbbs[..., :4] = poly2hbb(gt_boxes[..., :8])\n                hbbs[..., 4]  = gt_boxes[..., 8]\n                #------------------------------#\n                #   Write the prediction txt\n                #------------------------------#\n                self.get_map_txt(image_id, image, self.class_names, self.map_out_path)\n                \n                #------------------------------#\n                #   Write the ground-truth txt\n                #------------------------------#\n                with open(os.path.join(self.map_out_path, \"ground-truth/\"+image_id+\".txt\"), \"w\") as new_f:\n                    for hbb in hbbs:\n                        xc, yc, w, h, obj = hbb\n                        left   = xc - w/2\n                        top    = yc - h/2\n                        right  = xc + w/2\n                        bottom = yc + h/2\n                        obj_name = self.class_names[int(obj)]\n                        new_f.write(\"%s %s %s %s %s\\n\" % (obj_name, left, top, right, bottom))\n                        \n            print(\"Calculate Map.\")\n            try:\n                temp_map = get_coco_map(class_names = self.class_names, path = self.map_out_path)[1]\n            except Exception:\n                temp_map = get_map(self.MINOVERLAP, False, path = self.map_out_path)\n           
 self.maps.append(temp_map)\n            self.epoches.append(epoch)\n\n            with open(os.path.join(self.log_dir, \"epoch_map.txt\"), 'a') as f:\n                f.write(str(temp_map))\n                f.write(\"\\n\")\n            \n            plt.figure()\n            plt.plot(self.epoches, self.maps, 'red', linewidth = 2, label='val mAP')\n\n            plt.grid(True)\n            plt.xlabel('Epoch')\n            plt.ylabel('mAP %s'%str(self.MINOVERLAP))\n            plt.title('mAP Curve')\n            plt.legend(loc=\"upper right\")\n\n            plt.savefig(os.path.join(self.log_dir, \"epoch_map.png\"))\n            plt.cla()\n            plt.close(\"all\")\n\n            print(\"Get map done.\")\n            shutil.rmtree(self.map_out_path)\n"
  },
  {
    "path": "utils/dataloader.py",
    "content": "from random import sample, shuffle\n\nimport cv2\nimport numpy as np\nimport torch\nfrom PIL import Image, ImageDraw\nfrom torch.utils.data.dataset import Dataset\n\nfrom utils.utils import cvtColor, preprocess_input\nfrom utils.utils_rbox import poly2rbox, rbox2poly\n\nclass YoloDataset(Dataset):\n    def __init__(self, annotation_lines, input_shape, num_classes, anchors, anchors_mask, epoch_length, \\\n                        mosaic, mixup, mosaic_prob, mixup_prob, train, special_aug_ratio = 0.7):\n        super(YoloDataset, self).__init__()\n        self.annotation_lines   = annotation_lines\n        self.input_shape        = input_shape\n        self.num_classes        = num_classes\n        self.anchors            = anchors\n        self.anchors_mask       = anchors_mask\n        self.epoch_length       = epoch_length\n        self.mosaic             = mosaic\n        self.mosaic_prob        = mosaic_prob\n        self.mixup              = mixup\n        self.mixup_prob         = mixup_prob\n        self.train              = train\n        self.special_aug_ratio  = special_aug_ratio\n\n        self.epoch_now          = -1\n        self.length             = len(self.annotation_lines)\n        \n        self.bbox_attrs         = 5 + 1 + num_classes\n\n    def __len__(self):\n        return self.length\n\n    def __getitem__(self, index):\n        index       = index % self.length\n\n        #---------------------------------------------------#\n        #   Random data augmentation is applied during training,\n        #   but not during validation\n        #---------------------------------------------------#\n        if self.mosaic and self.rand() < self.mosaic_prob and self.epoch_now < self.epoch_length * self.special_aug_ratio:\n            lines = sample(self.annotation_lines, 3)\n            lines.append(self.annotation_lines[index])\n            shuffle(lines)\n            image, rbox  = self.get_random_data_with_Mosaic(lines, self.input_shape)\n                \n            if self.mixup and 
self.rand() < self.mixup_prob:\n                lines           = sample(self.annotation_lines, 1)\n                image_2, rbox_2  = self.get_random_data(lines[0], self.input_shape, random = self.train)\n                image, rbox      = self.get_random_data_with_MixUp(image, rbox, image_2, rbox_2)\n        else:\n            image, rbox      = self.get_random_data(self.annotation_lines[index], self.input_shape, random = self.train)\n\n        image       = np.transpose(preprocess_input(np.array(image, dtype=np.float32)), (2, 0, 1))\n        rbox        = np.array(rbox, dtype=np.float32)\n        \n        #---------------------------------------------------#\n        #   Preprocess the ground-truth boxes\n        #---------------------------------------------------#\n        nL          = len(rbox)\n        labels_out  = np.zeros((nL, 7))\n        if nL:\n            #---------------------------------------------------#\n            #   Normalize the ground-truth boxes to the range 0-1\n            #---------------------------------------------------#\n            rbox[:, [0, 2]] = rbox[:, [0, 2]] / self.input_shape[1]\n            rbox[:, [1, 3]] = rbox[:, [1, 3]] / self.input_shape[0]\n            #---------------------------------------------------#\n            #   Reorder to match the training format;\n            #   column 0 of labels_out is filled in by the collate_fn\n            #---------------------------------------------------#\n            labels_out[:, 1]  = rbox[:, -1]\n            labels_out[:, 2:] = rbox[:, :5]\n            \n        return image, labels_out\n\n    def rand(self, a=0, b=1):\n        return np.random.rand()*(b-a) + a\n\n    def get_random_data(self, annotation_line, input_shape, jitter=.3, hue=.1, sat=0.7, val=0.4, random=True, show=False):\n        line    = annotation_line.split()\n        #------------------------------#\n        #   Read the image and convert it to RGB\n        #------------------------------#\n        image   = Image.open(line[0])\n        image   = 
cvtColor(image)\n        #------------------------------#\n        #   Get the image size and the target size\n        #------------------------------#\n        iw, ih  = image.size\n        h, w    = input_shape\n        #------------------------------#\n        #   Get the ground-truth boxes\n        #------------------------------#\n        box     = np.array([np.array(list(map(float,box.split(',')))) for box in line[1:]])\n\n        if not random:\n            scale = min(w/iw, h/ih)\n            nw = int(iw*scale)\n            nh = int(ih*scale)\n            dx = (w-nw)//2\n            dy = (h-nh)//2\n\n            #---------------------------------#\n            #   Pad the unused area of the image with gray bars\n            #---------------------------------#\n            image       = image.resize((nw,nh), Image.BICUBIC)\n            new_image   = Image.new('RGB', (w,h), (128,128,128))\n            new_image.paste(image, (dx, dy))\n            image_data  = np.array(new_image, np.float32)\n\n            #---------------------------------#\n            #   Adjust the ground-truth boxes\n            #---------------------------------#\n            if len(box)>0:\n                np.random.shuffle(box)\n                box[:, [0,2,4,6]] = box[:, [0,2,4,6]]*nw/iw + dx\n                box[:, [1,3,5,7]] = box[:, [1,3,5,7]]*nh/ih + dy\n                #------------------------------#\n                #   Convert polygons to rboxes\n                #------------------------------#\n                rbox          = np.zeros((box.shape[0], 6))\n                rbox[..., :5] = poly2rbox(box[..., :8])\n                rbox[..., 5]  = box[..., 8]\n                keep = (rbox[:, 0] >= 0) & (rbox[:, 0] < w) \\\n                        & (rbox[:, 1] >= 0) & (rbox[:, 1] < h) \\\n                        & ((rbox[:, 2] > 5) | (rbox[:, 3] > 5))\n                rbox = rbox[keep]\n            return image_data, rbox\n\n        #------------------------------------------#\n        #   Resize the image and distort its aspect ratio\n        #------------------------------------------#\n        new_ar = iw/ih * 
self.rand(1-jitter,1+jitter) / self.rand(1-jitter,1+jitter)\n        scale = self.rand(.25, 2)\n        if new_ar < 1:\n            nh = int(scale*h)\n            nw = int(nh*new_ar)\n        else:\n            nw = int(scale*w)\n            nh = int(nw/new_ar)\n        image = image.resize((nw,nh), Image.BICUBIC)\n\n        #------------------------------------------#\n        #   Pad the unused area of the image with gray bars\n        #------------------------------------------#\n        dx = int(self.rand(0, w-nw))\n        dy = int(self.rand(0, h-nh))\n        new_image = Image.new('RGB', (w,h), (128,128,128))\n        new_image.paste(image, (dx, dy))\n        image = new_image\n        #------------------------------------------#\n        #   Flip the image\n        #------------------------------------------#\n        flip = self.rand()<.5\n        if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT)\n        \n        image_data      = np.array(image, np.uint8)\n        #---------------------------------#\n        #   Apply HSV color jitter\n        #   Compute the jitter parameters\n        #---------------------------------#\n        r               = np.random.uniform(-1, 1, 3) * [hue, sat, val] + 1\n        #---------------------------------#\n        #   Convert the image to HSV\n        #---------------------------------#\n        hue, sat, val   = cv2.split(cv2.cvtColor(image_data, cv2.COLOR_RGB2HSV))\n        dtype           = image_data.dtype\n        #---------------------------------#\n        #   Apply the transform\n        #---------------------------------#\n        x       = np.arange(0, 256, dtype=r.dtype)\n        lut_hue = ((x * r[0]) % 180).astype(dtype)\n        lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)\n        lut_val = np.clip(x * r[2], 0, 255).astype(dtype)\n\n        image_data = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))\n        image_data = cv2.cvtColor(image_data, cv2.COLOR_HSV2RGB)\n        #---------------------------------#\n        #   Adjust the ground-truth boxes\n        
#---------------------------------#\n        if len(box)>0:\n            np.random.shuffle(box)\n            box[:, [0,2,4,6]] = box[:, [0,2,4,6]]*nw/iw + dx\n            box[:, [1,3,5,7]] = box[:, [1,3,5,7]]*nh/ih + dy\n            if flip: box[:, [0,2,4,6]] = w - box[:, [0,2,4,6]]\n            #------------------------------#\n            #   Convert polygons to rboxes\n            #------------------------------#\n            rbox          = np.zeros((box.shape[0], 6))\n            rbox[..., :5] = poly2rbox(box[..., :8])\n            rbox[..., 5]  = box[..., 8]\n            keep = (rbox[:, 0] >= 0) & (rbox[:, 0] < w) \\\n                    & (rbox[:, 1] >= 0) & (rbox[:, 1] < h) \\\n                    & ((rbox[:, 2] > 5) | (rbox[:, 3] > 5))\n            rbox = rbox[keep]\n        #------------------------------#\n        #   Check the rotated boxes\n        #------------------------------#\n        if show:\n            draw  = ImageDraw.Draw(image)\n            polys = rbox2poly(rbox[..., :5])\n            for poly in polys:\n                draw.polygon(xy=list(poly))\n            image.show()\n        return image_data, rbox\n    \n    def merge_rboxes(self, rboxes, cutx, cuty):\n        merge_rbox = []\n        for i in range(len(rboxes)):\n            for rbox in rboxes[i]:\n                merge_rbox.append(rbox)\n        merge_rbox = np.array(merge_rbox)\n        return merge_rbox\n\n    def get_random_data_with_Mosaic(self, annotation_line, input_shape, jitter=0.3, hue=.1, sat=0.7, val=0.4, show=False):\n        h, w = input_shape\n        min_offset_x = self.rand(0.3, 0.7)\n        min_offset_y = self.rand(0.3, 0.7)\n\n        image_datas = [] \n        rbox_datas  = []\n        index       = 0\n      
  for line in annotation_line:\n            #---------------------------------#\n            #   Parse each annotation line\n            #---------------------------------#\n            line_content = line.split()\n            #---------------------------------#\n            #   Open the image\n            #---------------------------------#\n            image = Image.open(line_content[0])\n            image = cvtColor(image)\n            \n            #---------------------------------#\n            #   Image size\n            #---------------------------------#\n            iw, ih = image.size\n            #---------------------------------#\n            #   Store the box positions\n            #---------------------------------#\n            box = np.array([np.array(list(map(float,box.split(',')))) for box in line_content[1:]])\n            #---------------------------------#\n            #   Whether to flip the image\n            #---------------------------------#\n            flip = self.rand()<.5\n            if flip and len(box)>0:\n                image = image.transpose(Image.FLIP_LEFT_RIGHT)\n                box[:, [0,2,4,6]] = iw - box[:, [0,2,4,6]]\n            #------------------------------------------#\n            #   Resize the image and distort its aspect ratio\n            #------------------------------------------#\n            new_ar = iw/ih * self.rand(1-jitter,1+jitter) / self.rand(1-jitter,1+jitter)\n            scale = self.rand(.4, 1)\n            if new_ar < 1:\n                nh = int(scale*h)\n                nw = int(nh*new_ar)\n            else:\n                nw = int(scale*w)\n                nh = int(nw/new_ar)\n            image = image.resize((nw, nh), Image.BICUBIC)\n\n            #-----------------------------------------------#\n            #   Place each image in its quadrant of the mosaic\n            #-----------------------------------------------#\n            if index == 0:\n                dx = int(w*min_offset_x) - nw\n                dy = int(h*min_offset_y) - nh\n            elif index == 1:\n                dx = int(w*min_offset_x) 
- nw\n                dy = int(h*min_offset_y)\n            elif index == 2:\n                dx = int(w*min_offset_x)\n                dy = int(h*min_offset_y)\n            elif index == 3:\n                dx = int(w*min_offset_x)\n                dy = int(h*min_offset_y) - nh\n            \n            new_image = Image.new('RGB', (w,h), (128,128,128))\n            new_image.paste(image, (dx, dy))\n            image_data = np.array(new_image)\n\n            index = index + 1\n            rbox_data = []\n            #---------------------------------#\n            #   Re-process the rboxes\n            #---------------------------------#\n            if len(box)>0:\n                np.random.shuffle(box)\n                box[:, [0,2,4,6]] = box[:, [0,2,4,6]]*nw/iw + dx\n                box[:, [1,3,5,7]] = box[:, [1,3,5,7]]*nh/ih + dy\n                #------------------------------#\n                #   Convert polygons to rboxes\n                #------------------------------#\n                rbox          = np.zeros((box.shape[0], 6))\n                rbox[..., :5] = poly2rbox(box[..., :8])\n                rbox[..., 5]  = box[..., 8]\n                keep = (rbox[:, 0] >= 0) & (rbox[:, 0] < w) \\\n                        & (rbox[:, 1] >= 0) & (rbox[:, 1] < h) \\\n                        & ((rbox[:, 2] > 5) | (rbox[:, 3] > 5))\n                rbox = rbox[keep]\n                rbox_data = np.zeros((len(rbox),6))\n                rbox_data[:len(rbox)] = rbox\n            \n            image_datas.append(image_data)\n            rbox_datas.append(rbox_data)\n\n        #---------------------------------#\n        #   Crop the images and stitch them together\n        #---------------------------------#\n        cutx = int(w * min_offset_x)\n        cuty = int(h * min_offset_y)\n\n        new_image = np.zeros([h, w, 3])\n        new_image[:cuty, :cutx, :] = image_datas[0][:cuty, :cutx, :]\n        new_image[cuty:, :cutx, :] = image_datas[1][cuty:, :cutx, :]\n        new_image[cuty:, cutx:, :] = 
image_datas[2][cuty:, cutx:, :]\n        new_image[:cuty, cutx:, :] = image_datas[3][:cuty, cutx:, :]\n\n        new_image       = np.array(new_image, np.uint8)\n        #---------------------------------#\n        #   Apply HSV color jitter\n        #   Compute the jitter parameters\n        #---------------------------------#\n        r               = np.random.uniform(-1, 1, 3) * [hue, sat, val] + 1\n        #---------------------------------#\n        #   Convert the image to HSV\n        #---------------------------------#\n        hue, sat, val   = cv2.split(cv2.cvtColor(new_image, cv2.COLOR_RGB2HSV))\n        dtype           = new_image.dtype\n        #---------------------------------#\n        #   Apply the transform\n        #---------------------------------#\n        x       = np.arange(0, 256, dtype=r.dtype)\n        lut_hue = ((x * r[0]) % 180).astype(dtype)\n        lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)\n        lut_val = np.clip(x * r[2], 0, 255).astype(dtype)\n\n        new_image = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))\n        new_image = cv2.cvtColor(new_image, cv2.COLOR_HSV2RGB)\n\n        #---------------------------------#\n        #   Further process the boxes\n        #---------------------------------#\n        new_rboxes = self.merge_rboxes(rbox_datas, cutx, cuty)\n        #---------------------------------#\n        #   Check the rotated boxes\n        #---------------------------------#\n        if show:\n            new_img = Image.fromarray(new_image) \n            draw    = ImageDraw.Draw(new_img)\n            polys   = rbox2poly(new_rboxes[..., :5])\n            for poly in polys:\n                draw.polygon(xy=list(poly))\n            new_img.show()\n        return new_image, new_rboxes\n\n    def get_random_data_with_MixUp(self, image_1, rbox_1, image_2, rbox_2):\n        new_image = np.array(image_1, np.float32) * 0.5 + np.array(image_2, np.float32) * 0.5\n        if len(rbox_1) == 0:\n            new_rboxes = rbox_2\n        elif len(rbox_2) == 0:\n            
new_rboxes = rbox_1\n        else:\n            new_rboxes = np.concatenate([rbox_1, rbox_2], axis=0)\n        return new_image, new_rboxes\n    \n    \n# Used as collate_fn in the DataLoader\ndef yolo_dataset_collate(batch):\n    images  = []\n    bboxes  = []\n    for i, (img, box) in enumerate(batch):\n        images.append(img)\n        box[:, 0] = i\n        bboxes.append(box)\n            \n    images  = torch.from_numpy(np.array(images)).type(torch.FloatTensor)\n    bboxes  = torch.from_numpy(np.concatenate(bboxes, 0)).type(torch.FloatTensor)\n    return images, bboxes\n"
  },
  {
    "path": "utils/kld_loss.py",
    "content": "'''\nAuthor: [egrt]\nDate: 2023-01-30 18:47:24\nLastEditors: Egrt\nLastEditTime: 2023-05-26 15:00:14\nDescription: \n'''\nimport torch\nimport torch.nn as nn\n\nclass KLDloss(nn.Module):\n\n    def __init__(self, taf=1.0, fun=\"sqrt\"):\n        super(KLDloss, self).__init__()\n        self.fun = fun\n        self.taf = taf\n        self.eps = 1e-8\n\n    def forward(self, pred, target): # pred [[x,y,w,h,angle], ...]\n        #assert pred.shape[0] == target.shape[0]\n\n        pred = pred.view(-1, 5)\n        target = target.view(-1, 5)\n\n        delta_x = pred[:, 0] - target[:, 0]\n        delta_y = pred[:, 1] - target[:, 1]\n        pred_angle_radian = pred[:, 4]\n        target_angle_radian = target[:, 4]\n        delta_angle_radian = pred_angle_radian - target_angle_radian\n\n        kld =  0.5 * (\n                        4 * torch.pow( ( delta_x.mul(torch.cos(target_angle_radian)) + delta_y.mul(torch.sin(target_angle_radian)) ), 2) / torch.pow(target[:, 2], 2)\n                      + 4 * torch.pow( ( delta_y.mul(torch.cos(target_angle_radian)) - delta_x.mul(torch.sin(target_angle_radian)) ), 2) / torch.pow(target[:, 3], 2)\n                     )\\\n             + 0.5 * (\n                        torch.pow(pred[:, 3], 2) / torch.pow(target[:, 2], 2) * torch.pow(torch.sin(delta_angle_radian), 2)\n                      + torch.pow(pred[:, 2], 2) / torch.pow(target[:, 3], 2) * torch.pow(torch.sin(delta_angle_radian), 2)\n                      + torch.pow(pred[:, 3], 2) / torch.pow(target[:, 3], 2) * torch.pow(torch.cos(delta_angle_radian), 2)\n                      + torch.pow(pred[:, 2], 2) / torch.pow(target[:, 2], 2) * torch.pow(torch.cos(delta_angle_radian), 2)\n                     )\\\n             + 0.5 * (\n                        torch.log(torch.pow(target[:, 3], 2) / torch.pow(pred[:, 3], 2))\n                      + torch.log(torch.pow(target[:, 2], 2) / torch.pow(pred[:, 2], 2))\n                     )\\\n             - 1.0\n\n        
\n\n        if self.fun == \"sqrt\":\n            kld = kld.clamp(1e-7).sqrt()\n        elif self.fun == \"log1p\":\n            kld = torch.log1p(kld.clamp(1e-7))\n        else:\n            pass\n\n        kld_loss = 1 - 1 / (self.taf + self.eps + kld)\n\n        return kld_loss\n    \ndef compute_kld_loss(targets, preds, taf=1.0, fun='sqrt'):\n    with torch.no_grad():\n        kld_loss_ts_ps = torch.zeros(0, preds.shape[0], device=targets.device)\n        for target in targets:\n            target = target.unsqueeze(0).repeat(preds.shape[0], 1)\n            kld_loss_t_p = kld_loss(preds, target, taf=taf, fun=fun)\n            kld_loss_ts_ps = torch.cat((kld_loss_ts_ps, kld_loss_t_p.unsqueeze(0)), dim=0)\n    return kld_loss_ts_ps\n\n\ndef kld_loss(pred, target, taf=1.0, fun='sqrt'):  # pred [[x,y,w,h,angle], ...]\n    #assert pred.shape[0] == target.shape[0]\n\n    pred = pred.view(-1, 5)\n    target = target.view(-1, 5)\n\n    delta_x = pred[:, 0] - target[:, 0]\n    delta_y = pred[:, 1] - target[:, 1]\n    pred_angle_radian = pred[:, 4]  #3.141592653589793 * pred[:, 4] / 180.0\n    target_angle_radian = target[:, 4] #3.141592653589793 * target[:, 4] / 180.0\n    delta_angle_radian = pred_angle_radian - target_angle_radian\n\n    kld = 0.5 * (\n            4 * torch.pow((delta_x.mul(torch.cos(target_angle_radian)) + delta_y.mul(torch.sin(target_angle_radian))),\n                          2) / torch.pow(target[:, 2], 2)\n            + 4 * torch.pow((delta_y.mul(torch.cos(target_angle_radian)) - delta_x.mul(torch.sin(target_angle_radian))),\n                            2) / torch.pow(target[:, 3], 2)\n    ) \\\n          + 0.5 * (\n                  torch.pow(pred[:, 3], 2) / torch.pow(target[:, 2], 2) * torch.pow(torch.sin(delta_angle_radian), 2)\n                  + torch.pow(pred[:, 2], 2) / torch.pow(target[:, 3], 2) * torch.pow(torch.sin(delta_angle_radian), 2)\n                  + torch.pow(pred[:, 3], 2) / torch.pow(target[:, 3], 2) * 
torch.pow(torch.cos(delta_angle_radian), 2)\n                  + torch.pow(pred[:, 2], 2) / torch.pow(target[:, 2], 2) * torch.pow(torch.cos(delta_angle_radian), 2)\n          ) \\\n          + 0.5 * (\n                  torch.log(torch.pow(target[:, 3], 2) / torch.pow(pred[:, 3], 2))\n                  + torch.log(torch.pow(target[:, 2], 2) / torch.pow(pred[:, 2], 2))\n          ) \\\n          - 1.0\n\n    if fun == \"sqrt\":\n        kld = kld.clamp(1e-7).sqrt()\n    elif fun == \"log1p\":\n        kld = torch.log1p(kld.clamp(1e-7))\n    else:\n        pass\n\n    kld_loss = 1 - 1 / (taf + kld)\n    return kld_loss\n\nif __name__ == '__main__':\n    '''\n        Test the loss function\n    '''\n    kld_loss_n = KLDloss(taf=1, fun='log1p')\n    pred = torch.tensor([[5, 5, 5, 23, 0.15],[6,6,5,28,0]]).type(torch.float32)\n    target = torch.tensor([[5, 5, 5, 24, 0],[6,6,5,28,0]]).type(torch.float32)\n    kld = kld_loss_n(pred, target)"
  },
  {
    "path": "utils/nms_rotated/__init__.py",
    "content": "from .nms_rotated_wrapper import obb_nms, poly_nms\n\n__all__ = ['obb_nms', 'poly_nms']\n"
  },
  {
    "path": "utils/nms_rotated/nms_rotated_wrapper.py",
    "content": "import numpy as np\nimport torch\n\nfrom . import nms_rotated_ext\n\ndef obb_nms(dets, scores, iou_thr, device_id=None):\n    \"\"\"\n    RIoU NMS - iou_thr.\n    Args:\n        dets (tensor/array): (num, [cx cy w h θ]) θ∈[-pi/2, pi/2)\n        scores (tensor/array): (num)\n        iou_thr (float): (1)\n    Returns:\n        dets (tensor): (n_nms, [cx cy w h θ])\n        inds (tensor): (n_nms), nms index of dets\n    \"\"\"\n    if isinstance(dets, torch.Tensor):\n        is_numpy = False\n        dets_th = dets\n    elif isinstance(dets, np.ndarray):\n        is_numpy = True\n        device = 'cpu' if device_id is None else f'cuda:{device_id}'\n        dets_th = torch.from_numpy(dets).to(device)\n    else:\n        raise TypeError('dets must be either a Tensor or numpy array, '\n                        f'but got {type(dets)}')\n\n    if dets_th.numel() == 0: # len(dets)\n        inds = dets_th.new_zeros(0, dtype=torch.int64)\n    else:\n        # the same issue occurs when bboxes are too small\n        too_small = dets_th[:, [2, 3]].min(1)[0] < 0.001 # [n]\n        if too_small.all(): # all the bboxes are too small\n            inds = dets_th.new_zeros(0, dtype=torch.int64)\n        else:\n            ori_inds = torch.arange(dets_th.size(0), device=dets_th.device) # 0 ~ n-1\n            ori_inds = ori_inds[~too_small]\n            dets_th = dets_th[~too_small] # (n_filter, 5)\n            scores = scores[~too_small]\n\n            inds = nms_rotated_ext.nms_rotated(dets_th, scores, iou_thr)\n            inds = ori_inds[inds]\n\n    if is_numpy:\n        inds = inds.cpu().numpy()\n    return dets[inds, :], inds\n\n\ndef poly_nms(dets, iou_thr, device_id=None):\n    if isinstance(dets, torch.Tensor):\n        is_numpy = False\n        dets_th = dets\n    elif isinstance(dets, np.ndarray):\n        is_numpy = True\n        device = 'cpu' if device_id is None else f'cuda:{device_id}'\n        dets_th = torch.from_numpy(dets).to(device)\n    else:\n        raise TypeError('dets 
must be either a Tensor or numpy array, '\n                        f'but got {type(dets)}')\n\n    if dets_th.device == torch.device('cpu'):\n        raise NotImplementedError\n    inds = nms_rotated_ext.nms_poly(dets_th.float(), iou_thr)\n\n    if is_numpy:\n        inds = inds.cpu().numpy()\n    return dets[inds, :], inds\n\nif __name__ == '__main__':\n    rboxes_opencv = torch.tensor(([136.6, 111.6, 200, 100, -60],\n                                  [136.6, 111.6, 100, 200, -30],\n                                  [100, 100, 141.4, 141.4, -45],\n                                  [100, 100, 141.4, 141.4, -45]))\n    rboxes_longedge = torch.tensor(([136.6, 111.6, 200, 100, -60],\n                                    [136.6, 111.6, 200, 100, 120],\n                                    [100, 100, 141.4, 141.4, 45],\n                                    [100, 100, 141.4, 141.4, 135]))\n    "
  },
  {
    "path": "utils/nms_rotated/setup.py",
    "content": "#!/usr/bin/env python\nimport os\nimport subprocess\nimport time\nfrom setuptools import find_packages, setup\n\nimport torch\nfrom torch.utils.cpp_extension import (BuildExtension, CppExtension,\n                                       CUDAExtension)\ndef make_cuda_ext(name, module, sources, sources_cuda=[]):\n\n    define_macros = []\n    extra_compile_args = {'cxx': []}\n\n    if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1':\n        define_macros += [('WITH_CUDA', None)]\n        extension = CUDAExtension\n        extra_compile_args['nvcc'] = [\n            '-D__CUDA_NO_HALF_OPERATORS__',\n            '-D__CUDA_NO_HALF_CONVERSIONS__',\n            '-D__CUDA_NO_HALF2_OPERATORS__',\n        ]\n        sources += sources_cuda\n    else:\n        print(f'Compiling {name} without CUDA')\n        extension = CppExtension\n        # raise EnvironmentError('CUDA is required to compile MMDetection!')\n\n    return extension(\n        name=f'{module}.{name}',\n        sources=[os.path.join(*module.split('.'), p) for p in sources],\n        define_macros=define_macros,\n        extra_compile_args=extra_compile_args)\n\n# python setup.py develop\nif __name__ == '__main__':\n    #write_version_py()\n    setup(\n        name='nms_rotated',\n        ext_modules=[\n            make_cuda_ext(\n                name='nms_rotated_ext',\n                module='',\n                sources=[\n                    'src/nms_rotated_cpu.cpp',\n                    'src/nms_rotated_ext.cpp'\n                ],\n                sources_cuda=[\n                    'src/nms_rotated_cuda.cu',\n                    'src/poly_nms_cuda.cu',\n                ]),\n        ],\n        cmdclass={'build_ext': BuildExtension},\n        zip_safe=False)"
  },
  {
    "path": "utils/nms_rotated/src/box_iou_rotated_utils.h",
    "content": "// Modified from\n// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/box_iou_rotated\n// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n#pragma once\n\n#include <cassert>\n#include <cmath>\n\n#if defined(__CUDACC__) || __HCC__ == 1 || __HIP__ == 1\n// Designates functions callable from the host (CPU) and the device (GPU)\n#define HOST_DEVICE __host__ __device__\n#define HOST_DEVICE_INLINE HOST_DEVICE __forceinline__\n#else\n#include <algorithm>\n#define HOST_DEVICE\n#define HOST_DEVICE_INLINE HOST_DEVICE inline\n#endif\n\n\ntemplate <typename T>\nstruct RotatedBox {\n  T x_ctr, y_ctr, w, h, a;\n};\n\ntemplate <typename T>\nstruct Point {\n  T x, y;\n  HOST_DEVICE_INLINE Point(const T& px = 0, const T& py = 0) : x(px), y(py) {}\n  HOST_DEVICE_INLINE Point operator+(const Point& p) const {\n    return Point(x + p.x, y + p.y);\n  }\n  HOST_DEVICE_INLINE Point& operator+=(const Point& p) {\n    x += p.x;\n    y += p.y;\n    return *this;\n  }\n  HOST_DEVICE_INLINE Point operator-(const Point& p) const {\n    return Point(x - p.x, y - p.y);\n  }\n  HOST_DEVICE_INLINE Point operator*(const T coeff) const {\n    return Point(x * coeff, y * coeff);\n  }\n};\n\ntemplate <typename T>\nHOST_DEVICE_INLINE T dot_2d(const Point<T>& A, const Point<T>& B) {\n  return A.x * B.x + A.y * B.y;\n}\n\n// R: result type. can be different from input type\ntemplate <typename T, typename R = T>\nHOST_DEVICE_INLINE R cross_2d(const Point<T>& A, const Point<T>& B) {\n  return static_cast<R>(A.x) * static_cast<R>(B.y) -\n      static_cast<R>(B.x) * static_cast<R>(A.y);\n}\n\ntemplate <typename T>\nHOST_DEVICE_INLINE void get_rotated_vertices(\n    const RotatedBox<T>& box,\n    Point<T> (&pts)[4]) {\n  // M_PI / 180. 
== 0.01745329251\n  // double theta = box.a * 0.01745329251;  // modified: box.a is already in radians here\n  double theta = box.a;\n  T cosTheta2 = (T)cos(theta) * 0.5f;\n  T sinTheta2 = (T)sin(theta) * 0.5f;\n\n  // y: top --> down; x: left --> right\n  pts[0].x = box.x_ctr + sinTheta2 * box.h + cosTheta2 * box.w;\n  pts[0].y = box.y_ctr + cosTheta2 * box.h - sinTheta2 * box.w;\n  pts[1].x = box.x_ctr - sinTheta2 * box.h + cosTheta2 * box.w;\n  pts[1].y = box.y_ctr - cosTheta2 * box.h - sinTheta2 * box.w;\n  pts[2].x = 2 * box.x_ctr - pts[0].x;\n  pts[2].y = 2 * box.y_ctr - pts[0].y;\n  pts[3].x = 2 * box.x_ctr - pts[1].x;\n  pts[3].y = 2 * box.y_ctr - pts[1].y;\n}\n\ntemplate <typename T>\nHOST_DEVICE_INLINE int get_intersection_points(\n    const Point<T> (&pts1)[4],\n    const Point<T> (&pts2)[4],\n    Point<T> (&intersections)[24]) {\n  // Line vector\n  // A line from p1 to p2 is: p1 + (p2-p1)*t, t=[0,1]\n  Point<T> vec1[4], vec2[4];\n  for (int i = 0; i < 4; i++) {\n    vec1[i] = pts1[(i + 1) % 4] - pts1[i];\n    vec2[i] = pts2[(i + 1) % 4] - pts2[i];\n  }\n\n  // Line test - test all line combos for intersection\n  int num = 0; // number of intersections\n  for (int i = 0; i < 4; i++) {\n    for (int j = 0; j < 4; j++) {\n      // Solve for 2x2 Ax=b\n      T det = cross_2d<T>(vec2[j], vec1[i]);\n\n      // This takes care of parallel lines\n      if (fabs(det) <= 1e-14) {\n        continue;\n      }\n\n      auto vec12 = pts2[j] - pts1[i];\n\n      T t1 = cross_2d<T>(vec2[j], vec12) / det;\n      T t2 = cross_2d<T>(vec1[i], vec12) / det;\n\n      if (t1 >= 0.0f && t1 <= 1.0f && t2 >= 0.0f && t2 <= 1.0f) {\n        intersections[num++] = pts1[i] + vec1[i] * t1;\n      }\n    }\n  }\n\n  // Check for vertices of rect1 inside rect2\n  {\n    const auto& AB = vec2[0];\n    const auto& DA = vec2[3];\n    auto ABdotAB = dot_2d<T>(AB, AB);\n    auto ADdotAD = dot_2d<T>(DA, DA);\n    for (int i = 0; i < 4; i++) {\n      // assume ABCD is the 
rectangle, and P is the point to be judged\n      // P is inside ABCD iff. P's projection on AB lies within AB\n      // and P's projection on AD lies within AD\n\n      auto AP = pts1[i] - pts2[0];\n\n      auto APdotAB = dot_2d<T>(AP, AB);\n      auto APdotAD = -dot_2d<T>(AP, DA);\n\n      if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) &&\n          (APdotAD <= ADdotAD)) {\n        intersections[num++] = pts1[i];\n      }\n    }\n  }\n\n  // Reverse the check - check for vertices of rect2 inside rect1\n  {\n    const auto& AB = vec1[0];\n    const auto& DA = vec1[3];\n    auto ABdotAB = dot_2d<T>(AB, AB);\n    auto ADdotAD = dot_2d<T>(DA, DA);\n    for (int i = 0; i < 4; i++) {\n      auto AP = pts2[i] - pts1[0];\n\n      auto APdotAB = dot_2d<T>(AP, AB);\n      auto APdotAD = -dot_2d<T>(AP, DA);\n\n      if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) &&\n          (APdotAD <= ADdotAD)) {\n        intersections[num++] = pts2[i];\n      }\n    }\n  }\n\n  return num;\n}\n\ntemplate <typename T>\nHOST_DEVICE_INLINE int convex_hull_graham(\n    const Point<T> (&p)[24],\n    const int& num_in,\n    Point<T> (&q)[24],\n    bool shift_to_zero = false) {\n  assert(num_in >= 2);\n\n  // Step 1:\n  // Find point with minimum y\n  // if more than 1 points have the same minimum y,\n  // pick the one with the minimum x.\n  int t = 0;\n  for (int i = 1; i < num_in; i++) {\n    if (p[i].y < p[t].y || (p[i].y == p[t].y && p[i].x < p[t].x)) {\n      t = i;\n    }\n  }\n  auto& start = p[t]; // starting point\n\n  // Step 2:\n  // Subtract starting point from every points (for sorting in the next step)\n  for (int i = 0; i < num_in; i++) {\n    q[i] = p[i] - start;\n  }\n\n  // Swap the starting point to position 0\n  auto tmp = q[0];\n  q[0] = q[t];\n  q[t] = tmp;\n\n  // Step 3:\n  // Sort point 1 ~ num_in according to their relative cross-product values\n  // (essentially sorting according to angles)\n  // If the angles are the same, sort according 
to their distance to origin\n  T dist[24];\n#if defined(__CUDACC__) || __HCC__ == 1 || __HIP__ == 1\n  // compute distance to origin before sort, and sort them together with the\n  // points\n  for (int i = 0; i < num_in; i++) {\n    dist[i] = dot_2d<T>(q[i], q[i]);\n  }\n\n  // CUDA version\n  // In the future, we can potentially use thrust\n  // for sorting here to improve speed (though not guaranteed)\n  for (int i = 1; i < num_in - 1; i++) {\n    for (int j = i + 1; j < num_in; j++) {\n      T crossProduct = cross_2d<T>(q[i], q[j]);\n      if ((crossProduct < -1e-6) ||\n          (fabs(crossProduct) < 1e-6 && dist[i] > dist[j])) {\n        auto q_tmp = q[i];\n        q[i] = q[j];\n        q[j] = q_tmp;\n        auto dist_tmp = dist[i];\n        dist[i] = dist[j];\n        dist[j] = dist_tmp;\n      }\n    }\n  }\n#else\n  // CPU version\n  std::sort(\n      q + 1, q + num_in, [](const Point<T>& A, const Point<T>& B) -> bool {\n        T temp = cross_2d<T>(A, B);\n        if (fabs(temp) < 1e-6) {\n          return dot_2d<T>(A, A) < dot_2d<T>(B, B);\n        } else {\n          return temp > 0;\n        }\n      });\n  // compute distance to origin after sort, since the points are now different.\n  for (int i = 0; i < num_in; i++) {\n    dist[i] = dot_2d<T>(q[i], q[i]);\n  }\n#endif\n\n  // Step 4:\n  // Make sure there are at least 2 points (that don't overlap with each other)\n  // in the stack\n  int k; // index of the non-overlapped second point\n  for (k = 1; k < num_in; k++) {\n    if (dist[k] > 1e-8) {\n      break;\n    }\n  }\n  if (k == num_in) {\n    // We reach the end, which means the convex hull is just one point\n    q[0] = p[t];\n    return 1;\n  }\n  q[1] = q[k];\n  int m = 2; // 2 points in the stack\n  // Step 5:\n  // Finally we can start the scanning process.\n  // When a non-convex relationship between the 3 points is found\n  // (either concave shape or duplicated points),\n  // we pop the previous point from the stack\n  // until the 
3-point relationship is convex again, or\n  // until the stack only contains two points\n  for (int i = k + 1; i < num_in; i++) {\n    while (m > 1) {\n      auto q1 = q[i] - q[m - 2], q2 = q[m - 1] - q[m - 2];\n      // cross_2d() uses FMA and therefore computes round(round(q1.x*q2.y) -\n      // q2.x*q1.y) So it may not return 0 even when q1==q2. Therefore we\n      // compare round(q1.x*q2.y) and round(q2.x*q1.y) directly. (round means\n      // round to nearest floating point).\n      if (q1.x * q2.y >= q2.x * q1.y)\n        m--;\n      else\n        break;\n    }\n    // Using double also helps, but float can solve the issue for now.\n    // while (m > 1 && cross_2d<T, double>(q[i] - q[m - 2], q[m - 1] - q[m - 2])\n    // >= 0) {\n    //     m--;\n    // }\n    q[m++] = q[i];\n  }\n\n  // Step 6 (Optional):\n  // In general sense we need the original coordinates, so we\n  // need to shift the points back (reverting Step 2)\n  // But if we're only interested in getting the area/perimeter of the shape\n  // We can simply return.\n  if (!shift_to_zero) {\n    for (int i = 0; i < m; i++) {\n      q[i] += start;\n    }\n  }\n\n  return m;\n}\n\ntemplate <typename T>\nHOST_DEVICE_INLINE T polygon_area(const Point<T> (&q)[24], const int& m) {\n  if (m <= 2) {\n    return 0;\n  }\n\n  T area = 0;\n  for (int i = 1; i < m - 1; i++) {\n    area += fabs(cross_2d<T>(q[i] - q[0], q[i + 1] - q[0]));\n  }\n\n  return area / 2.0;\n}\n\ntemplate <typename T>\nHOST_DEVICE_INLINE T rotated_boxes_intersection(\n    const RotatedBox<T>& box1,\n    const RotatedBox<T>& box2) {\n  // There are up to 4 x 4 + 4 + 4 = 24 intersections (including dups) returned\n  // from rotated_rect_intersection_pts\n  Point<T> intersectPts[24], orderedPts[24];\n\n  Point<T> pts1[4];\n  Point<T> pts2[4];\n  get_rotated_vertices<T>(box1, pts1);\n  get_rotated_vertices<T>(box2, pts2);\n\n  int num = get_intersection_points<T>(pts1, pts2, intersectPts);\n\n  if (num <= 2) {\n    return 0.0;\n  }\n\n  // 
Convex Hull to order the intersection points in clockwise order and find\n  // the contour area.\n  int num_convex = convex_hull_graham<T>(intersectPts, num, orderedPts, true);\n  return polygon_area<T>(orderedPts, num_convex);\n}\n\n\ntemplate <typename T>\nHOST_DEVICE_INLINE T\nsingle_box_iou_rotated(T const* const box1_raw, T const* const box2_raw) {\n  // shift center to the middle point to achieve higher precision in result\n  RotatedBox<T> box1, box2;\n  auto center_shift_x = (box1_raw[0] + box2_raw[0]) / 2.0;\n  auto center_shift_y = (box1_raw[1] + box2_raw[1]) / 2.0;\n  box1.x_ctr = box1_raw[0] - center_shift_x;\n  box1.y_ctr = box1_raw[1] - center_shift_y;\n  box1.w = box1_raw[2];\n  box1.h = box1_raw[3];\n  box1.a = box1_raw[4];\n  box2.x_ctr = box2_raw[0] - center_shift_x;\n  box2.y_ctr = box2_raw[1] - center_shift_y;\n  box2.w = box2_raw[2];\n  box2.h = box2_raw[3];\n  box2.a = box2_raw[4];\n\n  T area1 = box1.w * box1.h;\n  T area2 = box2.w * box2.h;\n  if (area1 < 1e-14 || area2 < 1e-14) {\n    return 0.f;\n  }\n\n  T intersection = rotated_boxes_intersection<T>(box1, box2);\n  T iou = intersection / (area1 + area2 - intersection);\n  return iou;\n}\n"
  },
  {
    "path": "utils/nms_rotated/src/nms_rotated_cpu.cpp",
    "content": "// Modified from\n// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/nms_rotated\n// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n#include <torch/types.h>\n#include \"box_iou_rotated_utils.h\"\n\n\ntemplate <typename scalar_t>\nat::Tensor nms_rotated_cpu_kernel(\n    const at::Tensor& dets,\n    const at::Tensor& scores,\n    const float iou_threshold) {\n  // nms_rotated_cpu_kernel is modified from torchvision's nms_cpu_kernel,\n  // however, the code in this function is much shorter because\n  // we delegate the IoU computation for rotated boxes to\n  // the single_box_iou_rotated function in box_iou_rotated_utils.h\n  AT_ASSERTM(dets.device().is_cpu(), \"dets must be a CPU tensor\");\n  AT_ASSERTM(scores.device().is_cpu(), \"scores must be a CPU tensor\");\n  AT_ASSERTM(\n      dets.scalar_type() == scores.scalar_type(),\n      \"dets should have the same type as scores\");\n\n  if (dets.numel() == 0) {\n    return at::empty({0}, dets.options().dtype(at::kLong));\n  }\n\n  auto order_t = std::get<1>(scores.sort(0, /* descending=*/true));\n\n  auto ndets = dets.size(0);\n  at::Tensor suppressed_t = at::zeros({ndets}, dets.options().dtype(at::kByte));\n  at::Tensor keep_t = at::zeros({ndets}, dets.options().dtype(at::kLong));\n\n  auto suppressed = suppressed_t.data_ptr<uint8_t>();\n  auto keep = keep_t.data_ptr<int64_t>();\n  auto order = order_t.data_ptr<int64_t>();\n\n  int64_t num_to_keep = 0;\n\n  for (int64_t _i = 0; _i < ndets; _i++) {\n    auto i = order[_i];\n    if (suppressed[i] == 1) {\n      continue;\n    }\n\n    keep[num_to_keep++] = i;\n\n    for (int64_t _j = _i + 1; _j < ndets; _j++) {\n      auto j = order[_j];\n      if (suppressed[j] == 1) {\n        continue;\n      }\n\n      auto ovr = single_box_iou_rotated<scalar_t>(\n          dets[i].data_ptr<scalar_t>(), dets[j].data_ptr<scalar_t>());\n      if (ovr >= iou_threshold) {\n        suppressed[j] = 1;\n      }\n    
}\n  }\n  return keep_t.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep);\n}\n\nat::Tensor nms_rotated_cpu(\n    // input must be contiguous\n    const at::Tensor& dets,\n    const at::Tensor& scores,\n    const float iou_threshold) {\n  auto result = at::empty({0}, dets.options());\n\n  AT_DISPATCH_FLOATING_TYPES(dets.scalar_type(), \"nms_rotated\", [&] {\n    result = nms_rotated_cpu_kernel<scalar_t>(dets, scores, iou_threshold);\n  });\n  return result;\n}\n"
  },
  {
    "path": "utils/nms_rotated/src/nms_rotated_cuda.cu",
    "content": "// Modified from\n// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/nms_rotated\n// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <c10/cuda/CUDAGuard.h>\n#include <ATen/cuda/CUDAApplyUtils.cuh>\n#include \"box_iou_rotated_utils.h\"\n\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\ntemplate <typename T>\n__global__ void nms_rotated_cuda_kernel(\n    const int n_boxes,\n    const float iou_threshold,\n    const T* dev_boxes,\n    unsigned long long* dev_mask) {\n  // nms_rotated_cuda_kernel is modified from torchvision's nms_cuda_kernel\n\n  const int row_start = blockIdx.y;\n  const int col_start = blockIdx.x;\n\n  // if (row_start > col_start) return;\n\n  const int row_size =\n      min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);\n  const int col_size =\n      min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);\n\n  // Compared to nms_cuda_kernel, where each box is represented with 4 values\n  // (x1, y1, x2, y2), each rotated box is represented with 5 values\n  // (x_center, y_center, width, height, angle_degrees) here.\n  __shared__ T block_boxes[threadsPerBlock * 5];\n  if (threadIdx.x < col_size) {\n    block_boxes[threadIdx.x * 5 + 0] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];\n    block_boxes[threadIdx.x * 5 + 1] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];\n    block_boxes[threadIdx.x * 5 + 2] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];\n    block_boxes[threadIdx.x * 5 + 3] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];\n    block_boxes[threadIdx.x * 5 + 4] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];\n  }\n  __syncthreads();\n\n  if (threadIdx.x < row_size) {\n    const int cur_box_idx = threadsPerBlock * row_start + 
threadIdx.x;\n    const T* cur_box = dev_boxes + cur_box_idx * 5;\n    int i = 0;\n    unsigned long long t = 0;\n    int start = 0;\n    if (row_start == col_start) {\n      start = threadIdx.x + 1;\n    }\n    for (i = start; i < col_size; i++) {\n      // Instead of devIoU used by original horizontal nms, here\n      // we use the single_box_iou_rotated function from box_iou_rotated_utils.h\n      if (single_box_iou_rotated<T>(cur_box, block_boxes + i * 5) >\n          iou_threshold) {\n        t |= 1ULL << i;\n      }\n    }\n    const int col_blocks = at::cuda::ATenCeilDiv(n_boxes, threadsPerBlock);\n    dev_mask[cur_box_idx * col_blocks + col_start] = t;\n  }\n}\n\n\nat::Tensor nms_rotated_cuda(\n    // input must be contiguous\n    const at::Tensor& dets,\n    const at::Tensor& scores,\n    float iou_threshold) {\n  // using scalar_t = float;\n  AT_ASSERTM(dets.is_cuda(), \"dets must be a CUDA tensor\");\n  AT_ASSERTM(scores.is_cuda(), \"scores must be a CUDA tensor\");\n  at::cuda::CUDAGuard device_guard(dets.device());\n\n  auto order_t = std::get<1>(scores.sort(0, /* descending=*/true));\n  auto dets_sorted = dets.index_select(0, order_t);\n\n  auto dets_num = dets.size(0);\n\n  const int col_blocks =\n      at::cuda::ATenCeilDiv(static_cast<int>(dets_num), threadsPerBlock);\n\n  at::Tensor mask =\n      at::empty({dets_num * col_blocks}, dets.options().dtype(at::kLong));\n\n  dim3 blocks(col_blocks, col_blocks);\n  dim3 threads(threadsPerBlock);\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  AT_DISPATCH_FLOATING_TYPES(\n      dets_sorted.scalar_type(), \"nms_rotated_kernel_cuda\", [&] {\n        nms_rotated_cuda_kernel<scalar_t><<<blocks, threads, 0, stream>>>(\n            dets_num,\n            iou_threshold,\n            dets_sorted.data_ptr<scalar_t>(),\n            (unsigned long long*)mask.data_ptr<int64_t>());\n      });\n\n  at::Tensor mask_cpu = mask.to(at::kCPU);\n  unsigned long long* mask_host =\n      (unsigned long 
long*)mask_cpu.data_ptr<int64_t>();\n\n  std::vector<unsigned long long> remv(col_blocks);\n  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n  at::Tensor keep =\n      at::empty({dets_num}, dets.options().dtype(at::kLong).device(at::kCPU));\n  int64_t* keep_out = keep.data_ptr<int64_t>();\n\n  int num_to_keep = 0;\n  for (int i = 0; i < dets_num; i++) {\n    int nblock = i / threadsPerBlock;\n    int inblock = i % threadsPerBlock;\n\n    if (!(remv[nblock] & (1ULL << inblock))) {\n      keep_out[num_to_keep++] = i;\n      unsigned long long* p = mask_host + i * col_blocks;\n      for (int j = nblock; j < col_blocks; j++) {\n        remv[j] |= p[j];\n      }\n    }\n  }\n\n  AT_CUDA_CHECK(cudaGetLastError());\n  return order_t.index(\n      {keep.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep)\n           .to(order_t.device(), keep.scalar_type())});\n}\n"
  },
  {
    "path": "utils/nms_rotated/src/nms_rotated_ext.cpp",
    "content": "// Modified from\n// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/nms_rotated\n// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n#include <ATen/ATen.h>\n#include <torch/extension.h>\n\n\n#ifdef WITH_CUDA\nat::Tensor nms_rotated_cuda(\n    const at::Tensor& dets,\n    const at::Tensor& scores,\n    const float iou_threshold);\n\nat::Tensor poly_nms_cuda(\n    const at::Tensor boxes,\n    float nms_overlap_thresh);\n#endif\n\nat::Tensor nms_rotated_cpu(\n    const at::Tensor& dets,\n    const at::Tensor& scores,\n    const float iou_threshold);\n\n\ninline at::Tensor nms_rotated(\n    const at::Tensor& dets,\n    const at::Tensor& scores,\n    const float iou_threshold) {\n  assert(dets.device().is_cuda() == scores.device().is_cuda());\n  if (dets.device().is_cuda()) {\n#ifdef WITH_CUDA\n    return nms_rotated_cuda(\n        dets.contiguous(), scores.contiguous(), iou_threshold);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  return nms_rotated_cpu(dets.contiguous(), scores.contiguous(), iou_threshold);\n}\n\n\ninline at::Tensor nms_poly(\n    const at::Tensor& dets,\n    const float iou_threshold) {\n  if (dets.device().is_cuda()) {\n#ifdef WITH_CUDA\n    if (dets.numel() == 0)\n      return at::empty({0}, dets.options().dtype(at::kLong).device(at::kCPU));\n    return poly_nms_cuda(dets, iou_threshold);\n#else\n    AT_ERROR(\"POLY_NMS is not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"POLY_NMS is not implemented on CPU\");\n}\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"nms_rotated\", &nms_rotated, \"nms for rotated bboxes\");\n  m.def(\"nms_poly\", &nms_poly, \"nms for poly bboxes\");\n}\n"
  },
  {
    "path": "utils/nms_rotated/src/poly_nms_cpu.cpp",
    "content": "#include <torch/extension.h>\n\ntemplate <typename scalar_t>\nat::Tensor poly_nms_cpu_kernel(const at::Tensor& dets, const float threshold) {\n\n"
  },
  {
    "path": "utils/nms_rotated/src/poly_nms_cuda.cu",
    "content": "#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <THC/THC.h>\n#include <THC/THCDeviceUtils.cuh>\n\n#include <vector>\n#include <iostream>\n\n#define CUDA_CHECK(condition) \\\n  /* Code block avoids redefinition of cudaError_t error */ \\\n  do { \\\n    cudaError_t error = condition; \\\n    if (error != cudaSuccess) { \\\n      std::cout << cudaGetErrorString(error) << std::endl; \\\n    } \\\n  } while (0)\n\n#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\n\n#define maxn 10\nconst double eps=1E-8;\n\n__device__ inline int sig(float d){\n    return(d>1E-8)-(d<-1E-8);\n}\n\n__device__ inline int point_eq(const float2 a, const float2 b) {\n    return sig(a.x - b.x) == 0 && sig(a.y - b.y)==0;\n}\n\n__device__ inline void point_swap(float2 *a, float2 *b) {\n    float2 temp = *a;\n    *a = *b;\n    *b = temp;\n}\n\n__device__ inline void point_reverse(float2 *first, float2* last)\n{\n    while ((first!=last)&&(first!=--last)) {\n        point_swap (first,last);\n        ++first;\n    }\n}\n\n__device__ inline float cross(float2 o,float2 a,float2 b){  //叉积\n    return(a.x-o.x)*(b.y-o.y)-(b.x-o.x)*(a.y-o.y);\n}\n__device__ inline float area(float2* ps,int n){\n    ps[n]=ps[0];\n    float res=0;\n    for(int i=0;i<n;i++){\n        res+=ps[i].x*ps[i+1].y-ps[i].y*ps[i+1].x;\n    }\n    return res/2.0;\n}\n__device__ inline int lineCross(float2 a,float2 b,float2 c,float2 d,float2&p){\n    float s1,s2;\n    s1=cross(a,b,c);\n    s2=cross(a,b,d);\n    if(sig(s1)==0&&sig(s2)==0) return 2;\n    if(sig(s2-s1)==0) return 0;\n    p.x=(c.x*s2-d.x*s1)/(s2-s1);\n    p.y=(c.y*s2-d.y*s1)/(s2-s1);\n    return 1;\n}\n\n__device__ inline void polygon_cut(float2*p,int&n,float2 a,float2 b, float2* pp){\n\n    int m=0;p[n]=p[0];\n    for(int i=0;i<n;i++){\n        if(sig(cross(a,b,p[i]))>0) pp[m++]=p[i];\n        if(sig(cross(a,b,p[i]))!=sig(cross(a,b,p[i+1])))\n            
lineCross(a,b,p[i],p[i+1],pp[m++]);\n    }\n    n=0;\n    for(int i=0;i<m;i++)\n        if(!i||!(point_eq(pp[i], pp[i-1])))\n            p[n++]=pp[i];\n    // while(n>1&&p[n-1]==p[0])n--;\n    while(n>1&&point_eq(p[n-1], p[0]))n--;\n}\n\n//--------------------------------------------//\n// Returns the signed intersection area of triangles oab and ocd, where o is the origin\n__device__ inline float intersectArea(float2 a,float2 b,float2 c,float2 d){\n    float2 o = make_float2(0,0);\n    int s1=sig(cross(o,a,b));\n    int s2=sig(cross(o,c,d));\n    if(s1==0||s2==0)return 0.0;// degenerate, area is 0\n    // if(s1==-1) swap(a,b);\n    // if(s2==-1) swap(c,d);\n    if (s1 == -1) point_swap(&a, &b);\n    if (s2 == -1) point_swap(&c, &d);\n    float2 p[10]={o,a,b};\n    int n=3;\n    float2 pp[maxn];\n    polygon_cut(p,n,o,c,pp);\n    polygon_cut(p,n,c,d,pp);\n    polygon_cut(p,n,d,o,pp);\n    float res=fabs(area(p,n));\n    if(s1*s2==-1) res=-res;return res;\n}\n// Intersection area of two polygons\n__device__ inline float intersectArea(float2*ps1,int n1,float2*ps2,int n2){\n    if(area(ps1,n1)<0) point_reverse(ps1,ps1+n1);\n    if(area(ps2,n2)<0) point_reverse(ps2,ps2+n2);\n    ps1[n1]=ps1[0];\n    ps2[n2]=ps2[0];\n    float res=0;\n    for(int i=0;i<n1;i++){\n        for(int j=0;j<n2;j++){\n            res+=intersectArea(ps1[i],ps1[i+1],ps2[j],ps2[j+1]);\n        }\n    }\n    return res;// assume res is positive!\n}\n\n// TODO: could be optimized by first computing the IoU of the two horizontal bounding boxes\n__device__ inline float devPolyIoU(float const * const p, float const * const q) {\n    float2 ps1[maxn], ps2[maxn];\n    int n1 = 4;\n    int n2 = 4;\n    for (int i = 0; i < 4; i++) {\n        ps1[i].x = p[i * 2];\n        ps1[i].y = p[i * 2 + 1];\n\n        ps2[i].x = q[i * 2];\n        ps2[i].y = q[i * 2 + 1];\n    }\n    float inter_area = intersectArea(ps1, n1, ps2, n2);\n    float union_area = fabs(area(ps1, n1)) + fabs(area(ps2, n2)) - inter_area;\n    float iou = 0;\n    if (union_area == 0) {\n        iou = (inter_area + 1) / (union_area + 1);\n    } else {\n        iou = inter_area 
/ union_area;\n    }\n    return iou;\n}\n\n__global__ void poly_nms_kernel(const int n_polys, const float nms_overlap_thresh,\n                            const float *dev_polys, unsigned long long *dev_mask) {\n    const int row_start = blockIdx.y;\n    const int col_start = blockIdx.x;\n\n    const int row_size =\n            min(n_polys - row_start * threadsPerBlock, threadsPerBlock);\n    const int cols_size =\n            min(n_polys - col_start * threadsPerBlock, threadsPerBlock);\n\n    __shared__ float block_polys[threadsPerBlock * 9];\n    if (threadIdx.x < cols_size) {\n        block_polys[threadIdx.x * 9 + 0] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 0];\n        block_polys[threadIdx.x * 9 + 1] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 1];\n        block_polys[threadIdx.x * 9 + 2] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 2];\n        block_polys[threadIdx.x * 9 + 3] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 3];\n        block_polys[threadIdx.x * 9 + 4] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 4];\n        block_polys[threadIdx.x * 9 + 5] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 5];\n        block_polys[threadIdx.x * 9 + 6] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 6];\n        block_polys[threadIdx.x * 9 + 7] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 7];\n        block_polys[threadIdx.x * 9 + 8] =\n            dev_polys[(threadsPerBlock * col_start + threadIdx.x) * 9 + 8];\n    }\n    __syncthreads();\n\n    if (threadIdx.x < row_size) {\n        const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;\n        const float *cur_box = dev_polys + cur_box_idx * 9;\n        int i = 0;\n        unsigned long long t = 0;\n        int start = 0;\n        if (row_start == col_start) {\n  
          start = threadIdx.x + 1;\n        }\n        for (i = start; i < cols_size; i++) {\n            if (devPolyIoU(cur_box, block_polys + i * 9) > nms_overlap_thresh) {\n                t |= 1ULL << i;\n            }\n        }\n        const int col_blocks = THCCeilDiv(n_polys, threadsPerBlock);\n        dev_mask[cur_box_idx * col_blocks + col_start] = t;\n    }\n}\n\n// boxes is a N x 9 tensor\nat::Tensor poly_nms_cuda(const at::Tensor boxes, float nms_overlap_thresh) {\n\n    at::DeviceGuard guard(boxes.device());\n\n    using scalar_t = float;\n    AT_ASSERTM(boxes.device().is_cuda(), \"boxes must be a CUDA tensor\");\n    auto scores = boxes.select(1, 8);\n    auto order_t = std::get<1>(scores.sort(0, /*descending=*/true));\n    auto boxes_sorted = boxes.index_select(0, order_t);\n\n    int boxes_num = boxes.size(0);\n\n    const int col_blocks = THCCeilDiv(boxes_num, threadsPerBlock);\n\n    scalar_t* boxes_dev = boxes_sorted.data_ptr<scalar_t>();\n\n    THCState *state = at::globalContext().lazyInitCUDA();\n\n    unsigned long long* mask_dev = NULL;\n\n    mask_dev = (unsigned long long*) THCudaMalloc(state, boxes_num * col_blocks * sizeof(unsigned long long));\n\n    dim3 blocks(THCCeilDiv(boxes_num, threadsPerBlock),\n                THCCeilDiv(boxes_num, threadsPerBlock));\n    dim3 threads(threadsPerBlock);\n    poly_nms_kernel<<<blocks, threads, 0, at::cuda::getCurrentCUDAStream()>>>(boxes_num,\n                                        nms_overlap_thresh,\n                                        boxes_dev,\n                                        mask_dev);\n    \n    std::vector<unsigned long long> mask_host(boxes_num * col_blocks);\n    THCudaCheck(cudaMemcpyAsync(\n\t\t\t    &mask_host[0],\n                            mask_dev,\n                            sizeof(unsigned long long) * boxes_num * col_blocks,\n                            cudaMemcpyDeviceToHost,\n\t\t\t    at::cuda::getCurrentCUDAStream()\n\t\t\t    ));\n    \n    
std::vector<unsigned long long> remv(col_blocks);\n    memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n    at::Tensor keep = at::empty({boxes_num}, boxes.options().dtype(at::kLong).device(at::kCPU));\n    int64_t* keep_out = keep.data_ptr<int64_t>();\n\n    int num_to_keep = 0;\n    for (int i = 0; i < boxes_num; i++) {\n        int nblock = i / threadsPerBlock;\n        int inblock = i % threadsPerBlock;\n\n        if (!(remv[nblock] & (1ULL << inblock))) {\n            keep_out[num_to_keep++] = i;\n            unsigned long long *p = &mask_host[0] + i * col_blocks;\n            for (int j = nblock; j < col_blocks; j++) {\n                remv[j] |= p[j];\n            }\n        }\n    }\n\n    THCudaFree(state, mask_dev);\n\n    return order_t.index({\n        keep.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep).to(\n          order_t.device(), keep.scalar_type())});\n}\n\n"
  },
  {
    "path": "utils/utils.py",
    "content": "import numpy as np\nfrom PIL import Image\n\n\n#---------------------------------------------------------#\n#   将图像转换成RGB图像，防止灰度图在预测时报错。\n#   代码仅仅支持RGB图像的预测，所有其它类型的图像都会转化成RGB\n#---------------------------------------------------------#\ndef cvtColor(image):\n    if len(np.shape(image)) == 3 and np.shape(image)[2] == 3:\n        return image \n    else:\n        image = image.convert('RGB')\n        return image \n\n#---------------------------------------------------#\n#   对输入图像进行resize\n#---------------------------------------------------#\ndef resize_image(image, size, letterbox_image):\n    iw, ih  = image.size\n    w, h    = size\n    if letterbox_image:\n        scale   = min(w/iw, h/ih)\n        nw      = int(iw*scale)\n        nh      = int(ih*scale)\n\n        image   = image.resize((nw,nh), Image.BICUBIC)\n        new_image = Image.new('RGB', size, (128,128,128))\n        new_image.paste(image, ((w-nw)//2, (h-nh)//2))\n    else:\n        new_image = image.resize((w, h), Image.BICUBIC)\n    return new_image\n\n#---------------------------------------------------#\n#   获得类\n#---------------------------------------------------#\ndef get_classes(classes_path):\n    with open(classes_path, encoding='utf-8') as f:\n        class_names = f.readlines()\n    class_names = [c.strip() for c in class_names]\n    return class_names, len(class_names)\n\n#---------------------------------------------------#\n#   获得先验框\n#---------------------------------------------------#\ndef get_anchors(anchors_path):\n    '''loads the anchors from a file'''\n    with open(anchors_path, encoding='utf-8') as f:\n        anchors = f.readline()\n    anchors = [float(x) for x in anchors.split(',')]\n    anchors = np.array(anchors).reshape(-1, 2)\n    return anchors, len(anchors)\n\n#---------------------------------------------------#\n#   获得学习率\n#---------------------------------------------------#\ndef get_lr(optimizer):\n    for param_group in optimizer.param_groups:\n   
     return param_group['lr']\n\ndef preprocess_input(image):\n    image /= 255.0\n    return image\n\ndef show_config(**kwargs):\n    print('Configurations:')\n    print('-' * 70)\n    print('|%25s | %40s|' % ('keys', 'values'))\n    print('-' * 70)\n    for key, value in kwargs.items():\n        print('|%25s | %40s|' % (str(key), str(value)))\n    print('-' * 70)\n        \ndef download_weights(phi, model_dir=\"./model_data\"):\n    import os\n    from torch.hub import load_state_dict_from_url\n    \n    download_urls = {\n        \"l\" : 'https://github.com/bubbliiiing/yolov7-pytorch/releases/download/v1.0/yolov7_backbone_weights.pth',\n        \"x\" : 'https://github.com/bubbliiiing/yolov7-pytorch/releases/download/v1.0/yolov7_x_backbone_weights.pth',\n    }\n    url = download_urls[phi]\n    \n    if not os.path.exists(model_dir):\n        os.makedirs(model_dir)\n    load_state_dict_from_url(url, model_dir)"
  },
  {
    "path": "utils/utils_bbox.py",
    "content": "import numpy as np\nimport torch\nimport math\nfrom utils.utils_rbox import *\nfrom utils.nms_rotated import obb_nms\n\nclass DecodeBox():\n    def __init__(self, anchors, num_classes, input_shape, anchors_mask = [[6,7,8], [3,4,5], [0,1,2]]):\n        super(DecodeBox, self).__init__()\n        self.anchors        = anchors\n        self.num_classes    = num_classes\n        self.bbox_attrs     = 6 + num_classes\n        self.input_shape    = input_shape\n        #-----------------------------------------------------------#\n        #   13x13的特征层对应的anchor是[142, 110],[192, 243],[459, 401]\n        #   26x26的特征层对应的anchor是[36, 75],[76, 55],[72, 146]\n        #   52x52的特征层对应的anchor是[12, 16],[19, 36],[40, 28]\n        #-----------------------------------------------------------#\n        self.anchors_mask   = anchors_mask\n\n    def decode_box(self, inputs):\n        outputs = []\n        for i, input in enumerate(inputs):\n            #-----------------------------------------------#\n            #   输入的input一共有三个，他们的shape分别是\n            #   batch_size = 1\n            #   batch_size, 3 * (5 + 1 + 80), 20, 20\n            #   batch_size, 255, 40, 40\n            #   batch_size, 255, 80, 80\n            #-----------------------------------------------#\n            batch_size      = input.size(0)\n            input_height    = input.size(2)\n            input_width     = input.size(3)\n\n            #-----------------------------------------------#\n            #   输入为640x640时\n            #   stride_h = stride_w = 32、16、8\n            #-----------------------------------------------#\n            stride_h = self.input_shape[0] / input_height\n            stride_w = self.input_shape[1] / input_width\n            #-------------------------------------------------#\n            #   此时获得的scaled_anchors大小是相对于特征层的\n            #-------------------------------------------------#\n            scaled_anchors = [(anchor_width / stride_w, anchor_height / stride_h) 
for anchor_width, anchor_height in self.anchors[self.anchors_mask[i]]]\n\n            #-----------------------------------------------#\n            #   输入的input一共有三个，他们的shape分别是\n            #   batch_size, 3, 20, 20, 85\n            #   batch_size, 3, 40, 40, 85\n            #   batch_size, 3, 80, 80, 85\n            #-----------------------------------------------#\n            prediction = input.view(batch_size, len(self.anchors_mask[i]),\n                                    self.bbox_attrs, input_height, input_width).permute(0, 1, 3, 4, 2).contiguous()\n\n            #-----------------------------------------------#\n            #   先验框的中心位置的调整参数\n            #-----------------------------------------------#\n            x = torch.sigmoid(prediction[..., 0])  \n            y = torch.sigmoid(prediction[..., 1])\n            #-----------------------------------------------#\n            #   先验框的宽高调整参数\n            #-----------------------------------------------#\n            w = torch.sigmoid(prediction[..., 2]) \n            h = torch.sigmoid(prediction[..., 3]) \n            #-----------------------------------------------#\n            #   获取旋转角度\n            #-----------------------------------------------#\n            angle       = torch.sigmoid(prediction[..., 4])\n            #-----------------------------------------------#\n            #   获得置信度，是否有物体\n            #-----------------------------------------------#\n            conf        = torch.sigmoid(prediction[..., 5])\n            #-----------------------------------------------#\n            #   种类置信度\n            #-----------------------------------------------#\n            pred_cls    = torch.sigmoid(prediction[..., 6:])\n\n            FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor\n            LongTensor  = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor\n\n            #----------------------------------------------------------#\n            #   
generate the grid; the anchor centers are the top-left corners of the grid cells\n            #   batch_size,3,20,20\n            #----------------------------------------------------------#\n            grid_x = torch.linspace(0, input_width - 1, input_width).repeat(input_height, 1).repeat(\n                batch_size * len(self.anchors_mask[i]), 1, 1).view(x.shape).type(FloatTensor)\n            grid_y = torch.linspace(0, input_height - 1, input_height).repeat(input_width, 1).t().repeat(\n                batch_size * len(self.anchors_mask[i]), 1, 1).view(y.shape).type(FloatTensor)\n\n            #----------------------------------------------------------#\n            #   generate the anchor widths and heights in the same grid layout\n            #   batch_size,3,20,20\n            #----------------------------------------------------------#\n            anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))\n            anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))\n            anchor_w = anchor_w.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(w.shape)\n            anchor_h = anchor_h.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(h.shape)\n\n            #----------------------------------------------------------#\n            #   adjust the anchors with the predictions\n            #   first shift the anchor center towards the bottom-right,\n            #   then adjust the anchor width and height.\n            #   x 0 ~ 1 => 0 ~ 2 => -0.5 ~ 1.5 => each cell predicts targets within a limited range\n            #   y 0 ~ 1 => 0 ~ 2 => -0.5 ~ 1.5 => each cell predicts targets within a limited range\n            #   w 0 ~ 1 => 0 ~ 2 => 0 ~ 4 => anchor width can be scaled by a factor of 0~4\n            #   h 0 ~ 1 => 0 ~ 2 => 0 ~ 4 => anchor height can be scaled by a factor of 0~4\n            #----------------------------------------------------------#\n            pred_boxes          = FloatTensor(prediction[..., :4].shape)\n            pred_boxes[..., 0]  = x.data * 2. - 0.5 + grid_x\n            pred_boxes[..., 1]  = y.data * 2. 
- 0.5 + grid_y\n            pred_boxes[..., 2]  = (w.data * 2) ** 2 * anchor_w\n            pred_boxes[..., 3]  = (h.data * 2) ** 2 * anchor_h\n            pred_theta          = (angle.data - 0.5) * math.pi\n            #----------------------------------------------------------#\n            #   normalize the outputs to fractional (0~1) form\n            #----------------------------------------------------------#\n            _scale = torch.Tensor([input_width, input_height, input_width, input_height]).type(FloatTensor)\n            output = torch.cat((pred_boxes.view(batch_size, -1, 4) / _scale, pred_theta.view(batch_size, -1, 1),\n                                conf.view(batch_size, -1, 1), pred_cls.view(batch_size, -1, self.num_classes)), -1)\n            outputs.append(output.data)\n        return outputs\n\n    def non_max_suppression(self, prediction, num_classes, input_shape, image_shape, letterbox_image, conf_thres=0.5, nms_thres=0.4):\n        #----------------------------------------------------------#\n        #   prediction  [batch_size, num_anchors, 6 + num_classes]\n        #----------------------------------------------------------#\n\n        output = [None for _ in range(len(prediction))]\n        for i, image_pred in enumerate(prediction):\n            #----------------------------------------------------------#\n            #   take the max over the class predictions.\n            #   class_conf  [num_anchors, 1]    class confidence\n            #   class_pred  [num_anchors, 1]    class index\n            #----------------------------------------------------------#\n            class_conf, class_pred = torch.max(image_pred[:, 6:6 + num_classes], 1, keepdim=True)\n\n            #----------------------------------------------------------#\n            #   first round of filtering by confidence\n            #----------------------------------------------------------#\n            conf_mask = (image_pred[:, 5] * class_conf[:, 0] >= conf_thres).squeeze()\n            #----------------------------------------------------------#\n            #   keep only the predictions above the confidence threshold\n            #----------------------------------------------------------#\n            image_pred = image_pred[conf_mask]\n            class_conf = class_conf[conf_mask]\n            class_pred = class_pred[conf_mask]\n            if not image_pred.size(0):\n                continue\n            #-------------------------------------------------------------------------#\n            #   detections  [num_anchors, 8]\n            #   the 8 columns are: x, y, w, h, angle, obj_conf, class_conf, class_pred\n            #-------------------------------------------------------------------------#\n            detections = torch.cat((image_pred[:, :6], class_conf.float(), class_pred.float()), 1)\n\n            #------------------------------------------#\n            #   all classes present in the predictions\n            #------------------------------------------#\n            unique_labels = detections[:, -1].cpu().unique()\n\n            if prediction.is_cuda:\n                unique_labels = unique_labels.cuda()\n                detections = detections.cuda()\n\n            for c in unique_labels:\n                #------------------------------------------#\n                #   all score-filtered predictions for this class\n                #------------------------------------------#\n                detections_class = detections[detections[:, -1] == c]\n\n                #------------------------------------------#\n                #   rotated NMS: within a region, keep only the\n                #   highest-scoring box of each class\n                #------------------------------------------#\n                _, keep = obb_nms(\n                    detections_class[:, :5],\n                    detections_class[:, 5] * detections_class[:, 6],\n                    nms_thres\n                )\n                max_detections = detections_class[keep]\n                \n                # Add max detections to outputs\n                output[i] = max_detections if output[i] is None else torch.cat((output[i], max_detections))\n            \n            if output[i] is not None:\n                output[i] = output[i].cpu().numpy()\n                output[i][:, :5]  = self.yolo_correct_boxes(output[i], input_shape, image_shape, letterbox_image)\n        return output\n\n    def yolo_correct_boxes(self, output, input_shape, image_shape, letterbox_image):\n        #-----------------------------------------------------------------#\n        #   put the y axis first: it is convenient for multiplying\n        #   the boxes by the image height and width\n        #-----------------------------------------------------------------#\n        box_xy = output[..., 0:2]\n        box_wh = output[..., 2:4]\n        angle  = output[..., 4:5]\n        box_yx = box_xy[..., ::-1]\n        box_hw = box_wh[..., ::-1]\n        input_shape = np.array(input_shape)\n        image_shape = np.array(image_shape)\n\n        if letterbox_image:\n            #-----------------------------------------------------------------#\n            #   offset is the shift of the valid image area relative to\n            #   the top-left corner of the image\n            #   new_shape is the letterboxed width and height\n            #-----------------------------------------------------------------#\n            new_shape = np.round(image_shape * np.min(input_shape/image_shape))\n            offset  = (input_shape - new_shape)/2./input_shape\n            scale   = input_shape/new_shape\n\n            box_yx  = (box_yx - offset) * scale\n            box_hw *= scale\n\n        box_xy = box_yx[:, ::-1]\n        box_wh = box_hw[:, ::-1]\n\n        rboxes  = np.concatenate([box_xy, box_wh, angle], axis=-1)\n        rboxes[:, [0, 2]] *= image_shape[1]\n        rboxes[:, [1, 3]] *= image_shape[0]\n        return rboxes\n\nif __name__ == \"__main__\":\n    import matplotlib.pyplot as plt\n    import numpy as np\n\n    #---------------------------------------------------#\n    #   decode each feature-layer prediction into real values\n    #---------------------------------------------------#\n    def get_anchors_and_decode(input, input_shape, anchors, anchors_mask, num_classes):\n        #-----------------------------------------------#\n        #   input   batch_size, 3 * (5 + 1 + num_classes), 20, 
20\n        #-----------------------------------------------#\n        batch_size      = input.size(0)\n        input_height    = input.size(2)\n        input_width     = input.size(3)\n\n        #-----------------------------------------------#\n        #   with a 640x640 input, input_shape = [640, 640], input_height = 20, input_width = 20\n        #   640 / 20 = 32\n        #   stride_h = stride_w = 32\n        #-----------------------------------------------#\n        stride_h = input_shape[0] / input_height\n        stride_w = input_shape[1] / input_width\n        #-------------------------------------------------#\n        #   the scaled_anchors obtained here are relative to the feature map\n        #   anchor_width / stride_w, anchor_height / stride_h\n        #-------------------------------------------------#\n        scaled_anchors = [(anchor_width / stride_w, anchor_height / stride_h) for anchor_width, anchor_height in anchors[anchors_mask[2]]]\n\n        #-----------------------------------------------#\n        #   batch_size, 3 * (5 + 1 + num_classes), 20, 20 => \n        #   batch_size, 3, 6 + num_classes, 20, 20  => \n        #   batch_size, 3, 20, 20, 5 + 1 + num_classes\n        #-----------------------------------------------#\n        prediction = input.view(batch_size, len(anchors_mask[2]),\n                                num_classes + 6, input_height, input_width).permute(0, 1, 3, 4, 2).contiguous()\n\n        #-----------------------------------------------#\n        #   adjustment parameters for the anchor center positions\n        #-----------------------------------------------#\n        x = torch.sigmoid(prediction[..., 0])  \n        y = torch.sigmoid(prediction[..., 1])\n        #-----------------------------------------------#\n        #   adjustment parameters for the anchor width and height\n        #-----------------------------------------------#\n        w = torch.sigmoid(prediction[..., 2]) \n        h = torch.sigmoid(prediction[..., 3]) \n        #-----------------------------------------------#\n        #   objectness confidence, 0 - 1\n        
#-----------------------------------------------#\n        conf        = torch.sigmoid(prediction[..., 5])\n        #-----------------------------------------------#\n        #   class confidence, 0 - 1\n        #-----------------------------------------------#\n        pred_cls    = torch.sigmoid(prediction[..., 6:])\n\n        FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor\n        LongTensor  = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor\n\n        #----------------------------------------------------------#\n        #   generate the grid; the anchor centers are the top-left corners of the grid cells\n        #   batch_size,3,20,20\n        #   range(20)\n        #   [\n        #       [0, 1, 2, 3, ..., 19], \n        #       [0, 1, 2, 3, ..., 19], \n        #       ... (20 times)\n        #       [0, 1, 2, 3, ..., 19]\n        #   ] * (batch_size * 3)\n        #   [batch_size, 3, 20, 20]\n        #   \n        #   [\n        #       [0, 1, 2, 3, ..., 19], \n        #       [0, 1, 2, 3, ..., 19], \n        #       ... (20 times)\n        #       [0, 1, 2, 3, ..., 19]\n        #   ].T * (batch_size * 3)\n        #   [batch_size, 3, 20, 20]\n        #----------------------------------------------------------#\n        grid_x = torch.linspace(0, input_width - 1, input_width).repeat(input_height, 1).repeat(\n            batch_size * len(anchors_mask[2]), 1, 1).view(x.shape).type(FloatTensor)\n        grid_y = torch.linspace(0, input_height - 1, input_height).repeat(input_width, 1).t().repeat(\n            batch_size * len(anchors_mask[2]), 1, 1).view(y.shape).type(FloatTensor)\n\n        #----------------------------------------------------------#\n        #   generate the anchor widths and heights in the same grid layout\n        #   batch_size, 3, 20 * 20 => batch_size, 3, 20, 20\n        #   batch_size, 3, 20 * 20 => batch_size, 3, 20, 20\n        #----------------------------------------------------------#\n        anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))\n        anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))\n        anchor_w = anchor_w.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(w.shape)\n        anchor_h = anchor_h.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(h.shape)\n\n        #----------------------------------------------------------#\n        #   adjust the anchors with the predictions\n        #   first shift the anchor center towards the bottom-right,\n        #   then adjust the anchor width and height.\n        #   x  0 ~ 1 => 0 ~ 2 => -0.5 ~ 1.5 + grid_x\n        #   y  0 ~ 1 => 0 ~ 2 => -0.5 ~ 1.5 + grid_y\n        #   w  0 ~ 1 => 0 ~ 2 => 0 ~ 4 * anchor_w\n        #   h  0 ~ 1 => 0 ~ 2 => 0 ~ 4 * anchor_h \n        #----------------------------------------------------------#\n        pred_boxes          = FloatTensor(prediction[..., :4].shape)\n        pred_boxes[..., 0]  = x.data * 2. - 0.5 + grid_x\n        pred_boxes[..., 1]  = y.data * 2. - 0.5 + grid_y\n        pred_boxes[..., 2]  = (w.data * 2) ** 2 * anchor_w\n        pred_boxes[..., 3]  = (h.data * 2) ** 2 * anchor_h\n\n        point_h = 5\n        point_w = 5\n        \n        box_xy          = pred_boxes[..., 0:2].cpu().numpy() * 32\n        box_wh          = pred_boxes[..., 2:4].cpu().numpy() * 32\n        grid_x          = grid_x.cpu().numpy() * 32\n        grid_y          = grid_y.cpu().numpy() * 32\n        anchor_w        = anchor_w.cpu().numpy() * 32\n        anchor_h        = anchor_h.cpu().numpy() * 32\n        \n        fig = plt.figure()\n        ax  = fig.add_subplot(121)\n        from PIL import Image\n        img = Image.open(\"img/street.jpg\").resize([640, 640])\n        plt.imshow(img, alpha=0.5)\n        plt.ylim(-30, 650)\n        plt.xlim(-30, 650)\n        plt.scatter(grid_x, grid_y)\n        plt.scatter(point_h * 32, point_w * 32, c='black')\n        plt.gca().invert_yaxis()\n\n        anchor_left = grid_x - anchor_w / 2\n        anchor_top  = grid_y - anchor_h / 2\n        \n        rect1 = plt.Rectangle([anchor_left[0, 0, point_h, point_w],anchor_top[0, 0, point_h, point_w]], \\\n            anchor_w[0, 0, 
point_h, point_w],anchor_h[0, 0, point_h, point_w],color=\"r\",fill=False)\n        rect2 = plt.Rectangle([anchor_left[0, 1, point_h, point_w],anchor_top[0, 1, point_h, point_w]], \\\n            anchor_w[0, 1, point_h, point_w],anchor_h[0, 1, point_h, point_w],color=\"r\",fill=False)\n        rect3 = plt.Rectangle([anchor_left[0, 2, point_h, point_w],anchor_top[0, 2, point_h, point_w]], \\\n            anchor_w[0, 2, point_h, point_w],anchor_h[0, 2, point_h, point_w],color=\"r\",fill=False)\n\n        ax.add_patch(rect1)\n        ax.add_patch(rect2)\n        ax.add_patch(rect3)\n\n        ax  = fig.add_subplot(122)\n        plt.imshow(img, alpha=0.5)\n        plt.ylim(-30, 650)\n        plt.xlim(-30, 650)\n        plt.scatter(grid_x, grid_y)\n        plt.scatter(point_h * 32, point_w * 32, c='black')\n        plt.scatter(box_xy[0, :, point_h, point_w, 0], box_xy[0, :, point_h, point_w, 1], c='r')\n        plt.gca().invert_yaxis()\n\n        pre_left    = box_xy[...,0] - box_wh[...,0] / 2\n        pre_top     = box_xy[...,1] - box_wh[...,1] / 2\n\n        rect1 = plt.Rectangle([pre_left[0, 0, point_h, point_w], pre_top[0, 0, point_h, point_w]],\\\n            box_wh[0, 0, point_h, point_w,0], box_wh[0, 0, point_h, point_w,1],color=\"r\",fill=False)\n        rect2 = plt.Rectangle([pre_left[0, 1, point_h, point_w], pre_top[0, 1, point_h, point_w]],\\\n            box_wh[0, 1, point_h, point_w,0], box_wh[0, 1, point_h, point_w,1],color=\"r\",fill=False)\n        rect3 = plt.Rectangle([pre_left[0, 2, point_h, point_w], pre_top[0, 2, point_h, point_w]],\\\n            box_wh[0, 2, point_h, point_w,0], box_wh[0, 2, point_h, point_w,1],color=\"r\",fill=False)\n\n        ax.add_patch(rect1)\n        ax.add_patch(rect2)\n        ax.add_patch(rect3)\n\n        plt.show()\n        #\n    feat            = torch.from_numpy(np.random.normal(0.2, 0.5, [4, 258, 20, 20])).float()\n    anchors         = np.array([[116, 90], [156, 198], [373, 326], [30,61], [62,45], [59,119], 
[10,13], [16,30], [33,23]])\n    anchors_mask    = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]\n    get_anchors_and_decode(feat, [640, 640], anchors, anchors_mask, 80)\n"
  },
  {
    "path": "utils/utils_fit.py",
    "content": "import os\n\nimport torch\nfrom tqdm import tqdm\n\nfrom utils.utils import get_lr\n        \ndef fit_one_epoch(model_train, model, ema, yolo_loss, loss_history, eval_callback, optimizer, epoch, epoch_step, epoch_step_val, gen, gen_val, Epoch, cuda, fp16, scaler, save_period, save_dir, local_rank=0):\n    loss        = 0\n    val_loss    = 0\n\n    if local_rank == 0:\n        print('Start Train')\n        pbar = tqdm(total=epoch_step,desc=f'Epoch {epoch + 1}/{Epoch}',postfix=dict,mininterval=0.3)\n    model_train.train()\n    for iteration, batch in enumerate(gen):\n        if iteration >= epoch_step:\n            break\n\n        images, targets = batch[0], batch[1]\n        with torch.no_grad():\n            if cuda:\n                images  = images.cuda(local_rank)\n                targets = targets.cuda(local_rank)\n        #----------------------#\n        #   清零梯度\n        #----------------------#\n        optimizer.zero_grad()\n        if not fp16:\n            #----------------------#\n            #   前向传播\n            #----------------------#\n            outputs         = model_train(images)\n            loss_value      = yolo_loss(outputs, targets, images)\n\n            #----------------------#\n            #   反向传播\n            #----------------------#\n            loss_value.backward()\n            optimizer.step()\n        else:\n            from torch.cuda.amp import autocast\n            with autocast():\n                #----------------------#\n                #   前向传播\n                #----------------------#\n                outputs         = model_train(images)\n                loss_value      = yolo_loss(outputs, targets, images)\n\n            #----------------------#\n            #   反向传播\n            #----------------------#\n            scaler.scale(loss_value).backward()\n            scaler.step(optimizer)\n            scaler.update()\n        if ema:\n            ema.update(model_train)\n\n        loss += 
loss_value.item()\n        \n        if local_rank == 0:\n            pbar.set_postfix(**{'loss'  : loss / (iteration + 1), \n                                'lr'    : get_lr(optimizer)})\n            pbar.update(1)\n\n    if local_rank == 0:\n        pbar.close()\n        print('Finish Train')\n        print('Start Validation')\n        pbar = tqdm(total=epoch_step_val, desc=f'Epoch {epoch + 1}/{Epoch}',postfix=dict,mininterval=0.3)\n\n    if ema:\n        model_train_eval = ema.ema\n    else:\n        model_train_eval = model_train.eval()\n        \n    for iteration, batch in enumerate(gen_val):\n        if iteration >= epoch_step_val:\n            break\n        images, targets = batch[0], batch[1]\n        with torch.no_grad():\n            if cuda:\n                images  = images.cuda(local_rank)\n                targets = targets.cuda(local_rank)\n            #----------------------#\n            #   zero the gradients\n            #----------------------#\n            optimizer.zero_grad()\n            #----------------------#\n            #   forward pass\n            #----------------------#\n            outputs         = model_train_eval(images)\n            loss_value      = yolo_loss(outputs, targets, images)\n\n        val_loss += loss_value.item()\n        if local_rank == 0:\n            pbar.set_postfix(**{'val_loss': val_loss / (iteration + 1)})\n            pbar.update(1)\n            \n    if local_rank == 0:\n        pbar.close()\n        print('Finish Validation')\n        loss_history.append_loss(epoch + 1, loss / epoch_step, val_loss / epoch_step_val)\n        eval_callback.on_epoch_end(epoch + 1, model_train_eval)\n        print('Epoch:'+ str(epoch + 1) + '/' + str(Epoch))\n        print('Total Loss: %.3f || Val Loss: %.3f ' % (loss / epoch_step, val_loss / epoch_step_val))\n        \n        #-----------------------------------------------#\n        #   save the weights\n        #-----------------------------------------------#\n        if ema:\n            save_state_dict = ema.ema.state_dict()\n        else:\n            save_state_dict = model.state_dict()\n\n        if (epoch + 1) % save_period == 0 or epoch + 1 == Epoch:\n            torch.save(save_state_dict, os.path.join(save_dir, \"ep%03d-loss%.3f-val_loss%.3f.pth\" % (epoch + 1, loss / epoch_step, val_loss / epoch_step_val)))\n            \n        if len(loss_history.val_loss) <= 1 or (val_loss / epoch_step_val) <= min(loss_history.val_loss):\n            print('Save best model to best_epoch_weights.pth')\n            torch.save(save_state_dict, os.path.join(save_dir, \"best_epoch_weights.pth\"))\n            \n        torch.save(save_state_dict, os.path.join(save_dir, \"last_epoch_weights.pth\"))"
  },
  {
    "path": "utils/utils_map.py",
    "content": "import glob\nimport json\nimport math\nimport operator\nimport os\nimport shutil\nimport sys\n\ntry:\n    from pycocotools.coco import COCO\n    from pycocotools.cocoeval import COCOeval\nexcept:\n    pass\nimport cv2\nimport matplotlib\nmatplotlib.use('Agg')\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\n'''\n    0,0 ------> x (width)\n     |\n     |  (Left,Top)\n     |      *_________\n     |      |         |\n            |         |\n     y      |_________|\n  (height)            *\n                (Right,Bottom)\n'''\n\ndef iou_rotate_calculate(boxes1, boxes2):\n    \"\"\"\n    计算旋转面积\n    boxes1,boxes2格式为x,y,w,h,theta\n    \"\"\"\n    area1 = boxes1[2] * boxes1[3]\n    area2 = boxes2[2] * boxes2[3]\n    r1 = ((boxes1[0], boxes1[1]), (boxes1[2], boxes1[3]), boxes1[4])\n    r2 = ((boxes2[0], boxes2[1]), (boxes2[2], boxes2[3]), boxes2[4])\n    int_pts = cv2.rotatedRectangleIntersection(r1, r2)[1]\n    if int_pts is not None:\n        order_pts = cv2.convexHull(int_pts, returnPoints=True)\n        int_area = cv2.contourArea(order_pts)\n        ious = int_area * 1.0 / (area1 + area2 - int_area)\n    else:\n        ious = 0\n    return ious\n\ndef log_average_miss_rate(precision, fp_cumsum, num_images):\n    \"\"\"\n        log-average miss rate:\n            Calculated by averaging miss rates at 9 evenly spaced FPPI points\n            between 10e-2 and 10e0, in log-space.\n\n        output:\n                lamr | log-average miss rate\n                mr | miss rate\n                fppi | false positives per image\n\n        references:\n            [1] Dollar, Piotr, et al. 
\"Pedestrian Detection: An Evaluation of the\n               State of the Art.\" Pattern Analysis and Machine Intelligence, IEEE\n               Transactions on 34.4 (2012): 743 - 761.\n    \"\"\"\n\n    if precision.size == 0:\n        lamr = 0\n        mr = 1\n        fppi = 0\n        return lamr, mr, fppi\n\n    fppi = fp_cumsum / float(num_images)\n    mr = (1 - precision)\n\n    fppi_tmp = np.insert(fppi, 0, -1.0)\n    mr_tmp = np.insert(mr, 0, 1.0)\n\n    ref = np.logspace(-2.0, 0.0, num = 9)\n    for i, ref_i in enumerate(ref):\n        j = np.where(fppi_tmp <= ref_i)[-1][-1]\n        ref[i] = mr_tmp[j]\n\n    lamr = math.exp(np.mean(np.log(np.maximum(1e-10, ref))))\n\n    return lamr, mr, fppi\n\n\"\"\"\n throw error and exit\n\"\"\"\ndef error(msg):\n    print(msg)\n    sys.exit(0)\n\n\"\"\"\n check if the number is a float between 0.0 and 1.0\n\"\"\"\ndef is_float_between_0_and_1(value):\n    try:\n        val = float(value)\n        if val > 0.0 and val < 1.0:\n            return True\n        else:\n            return False\n    except ValueError:\n        return False\n\n\"\"\"\n Calculate the AP given the recall and precision array\n    1st) We compute a version of the measured precision/recall curve with\n         precision monotonically decreasing\n    2nd) We compute the AP as the area under this curve by numerical integration.\n\"\"\"\ndef voc_ap(rec, prec):\n    \"\"\"\n    --- Official matlab code VOC2012---\n    mrec=[0 ; rec ; 1];\n    mpre=[0 ; prec ; 0];\n    for i=numel(mpre)-1:-1:1\n            mpre(i)=max(mpre(i),mpre(i+1));\n    end\n    i=find(mrec(2:end)~=mrec(1:end-1))+1;\n    ap=sum((mrec(i)-mrec(i-1)).*mpre(i));\n    \"\"\"\n    rec.insert(0, 0.0) # insert 0.0 at begining of list\n    rec.append(1.0) # insert 1.0 at end of list\n    mrec = rec[:]\n    prec.insert(0, 0.0) # insert 0.0 at begining of list\n    prec.append(0.0) # insert 0.0 at end of list\n    mpre = prec[:]\n    \"\"\"\n     This part makes the precision 
monotonically decreasing\n        (goes from the end to the beginning)\n        matlab: for i=numel(mpre)-1:-1:1\n                    mpre(i)=max(mpre(i),mpre(i+1));\n    \"\"\"\n    for i in range(len(mpre)-2, -1, -1):\n        mpre[i] = max(mpre[i], mpre[i+1])\n    \"\"\"\n     This part creates a list of indexes where the recall changes\n        matlab: i=find(mrec(2:end)~=mrec(1:end-1))+1;\n    \"\"\"\n    i_list = []\n    for i in range(1, len(mrec)):\n        if mrec[i] != mrec[i-1]:\n            i_list.append(i) # if it was matlab would be i + 1\n    \"\"\"\n     The Average Precision (AP) is the area under the curve\n        (numerical integration)\n        matlab: ap=sum((mrec(i)-mrec(i-1)).*mpre(i));\n    \"\"\"\n    ap = 0.0\n    for i in i_list:\n        ap += ((mrec[i]-mrec[i-1])*mpre[i])\n    return ap, mrec, mpre\n\n\n\"\"\"\n Convert the lines of a file to a list\n\"\"\"\ndef file_lines_to_list(path):\n    # open txt file lines to a list\n    with open(path) as f:\n        content = f.readlines()\n    # remove whitespace characters like `\\n` at the end of each line\n    content = [x.strip() for x in content]\n    return content\n\n\"\"\"\n Draws text in image\n\"\"\"\ndef draw_text_in_image(img, text, pos, color, line_width):\n    font = cv2.FONT_HERSHEY_PLAIN\n    fontScale = 1\n    lineType = 1\n    bottomLeftCornerOfText = pos\n    cv2.putText(img, text,\n            bottomLeftCornerOfText,\n            font,\n            fontScale,\n            color,\n            lineType)\n    text_width, _ = cv2.getTextSize(text, font, fontScale, lineType)[0]\n    return img, (line_width + text_width)\n\n\"\"\"\n Plot - adjust axes\n\"\"\"\ndef adjust_axes(r, t, fig, axes):\n    # get text width for re-scaling\n    bb = t.get_window_extent(renderer=r)\n    text_width_inches = bb.width / fig.dpi\n    # get axis width in inches\n    current_fig_width = fig.get_figwidth()\n    new_fig_width = current_fig_width + text_width_inches\n    propotion = new_fig_width 
/ current_fig_width\n    # get axis limit\n    x_lim = axes.get_xlim()\n    axes.set_xlim([x_lim[0], x_lim[1]*propotion])\n\n\"\"\"\n Draw plot using Matplotlib\n\"\"\"\ndef draw_plot_func(dictionary, n_classes, window_title, plot_title, x_label, output_path, to_show, plot_color, true_p_bar):\n    # sort the dictionary by decreasing value, into a list of tuples\n    sorted_dic_by_value = sorted(dictionary.items(), key=operator.itemgetter(1))\n    # unpacking the list of tuples into two lists\n    sorted_keys, sorted_values = zip(*sorted_dic_by_value)\n    # \n    if true_p_bar != \"\":\n        \"\"\"\n         Special case to draw in:\n            - green -> TP: True Positives (object detected and matches ground-truth)\n            - red -> FP: False Positives (object detected but does not match ground-truth)\n            - orange -> FN: False Negatives (object not detected but present in the ground-truth)\n        \"\"\"\n        fp_sorted = []\n        tp_sorted = []\n        for key in sorted_keys:\n            fp_sorted.append(dictionary[key] - true_p_bar[key])\n            tp_sorted.append(true_p_bar[key])\n        plt.barh(range(n_classes), fp_sorted, align='center', color='crimson', label='False Positive')\n        plt.barh(range(n_classes), tp_sorted, align='center', color='forestgreen', label='True Positive', left=fp_sorted)\n        # add legend\n        plt.legend(loc='lower right')\n        \"\"\"\n         Write number on side of bar\n        \"\"\"\n        fig = plt.gcf() # gcf - get current figure\n        axes = plt.gca()\n        r = fig.canvas.manager.get_renderer()\n        for i, val in enumerate(sorted_values):\n            fp_val = fp_sorted[i]\n            tp_val = tp_sorted[i]\n            fp_str_val = \" \" + str(fp_val)\n            tp_str_val = fp_str_val + \" \" + str(tp_val)\n            # trick to paint multicolor with offset:\n            # first paint everything and then repaint the first number\n            t = plt.text(val, i, 
tp_str_val, color='forestgreen', va='center', fontweight='bold')\n            plt.text(val, i, fp_str_val, color='crimson', va='center', fontweight='bold')\n            if i == (len(sorted_values)-1): # largest bar\n                adjust_axes(r, t, fig, axes)\n    else:\n        plt.barh(range(n_classes), sorted_values, color=plot_color)\n        \"\"\"\n         Write number on side of bar\n        \"\"\"\n        fig = plt.gcf() # gcf - get current figure\n        axes = plt.gca()\n        r = fig.canvas.get_renderer()\n        for i, val in enumerate(sorted_values):\n            str_val = \" \" + str(val) # add a space before\n            if val < 1.0:\n                str_val = \" {0:.2f}\".format(val)\n            t = plt.text(val, i, str_val, color=plot_color, va='center', fontweight='bold')\n            # re-set axes to show number inside the figure\n            if i == (len(sorted_values)-1): # largest bar\n                adjust_axes(r, t, fig, axes)\n    # set window title\n    fig.canvas.manager.set_window_title(window_title)\n    # write classes in y axis\n    tick_font_size = 12\n    plt.yticks(range(n_classes), sorted_keys, fontsize=tick_font_size)\n    \"\"\"\n     Re-scale height accordingly\n    \"\"\"\n    init_height = fig.get_figheight()\n    # comput the matrix height in points and inches\n    dpi = fig.dpi\n    height_pt = n_classes * (tick_font_size * 1.4) # 1.4 (some spacing)\n    height_in = height_pt / dpi\n    # compute the required figure height \n    top_margin = 0.15 # in percentage of the figure height\n    bottom_margin = 0.05 # in percentage of the figure height\n    figure_height = height_in / (1 - top_margin - bottom_margin)\n    # set new height\n    if figure_height > init_height:\n        fig.set_figheight(figure_height)\n\n    # set plot title\n    plt.title(plot_title, fontsize=14)\n    # set axis titles\n    # plt.xlabel('classes')\n    plt.xlabel(x_label, fontsize='large')\n    # adjust size of window\n    
fig.tight_layout()\n    # save the plot\n    fig.savefig(output_path)\n    # show image\n    if to_show:\n        plt.show()\n    # close the plot\n    plt.close()\n\ndef get_map(MINOVERLAP, draw_plot, score_threhold=0.5, path = './map_out'):\n    GT_PATH             = os.path.join(path, 'ground-truth')\n    DR_PATH             = os.path.join(path, 'detection-results')\n    IMG_PATH            = os.path.join(path, 'images-optional')\n    TEMP_FILES_PATH     = os.path.join(path, '.temp_files')\n    RESULTS_FILES_PATH  = os.path.join(path, 'results')\n\n    show_animation = True\n    if os.path.exists(IMG_PATH): \n        for dirpath, dirnames, files in os.walk(IMG_PATH):\n            if not files:\n                show_animation = False\n    else:\n        show_animation = False\n\n    if not os.path.exists(TEMP_FILES_PATH):\n        os.makedirs(TEMP_FILES_PATH)\n        \n    if os.path.exists(RESULTS_FILES_PATH):\n        shutil.rmtree(RESULTS_FILES_PATH)\n    else:\n        os.makedirs(RESULTS_FILES_PATH)\n    if draw_plot:\n        try:\n            matplotlib.use('TkAgg')\n        except:\n            pass\n        os.makedirs(os.path.join(RESULTS_FILES_PATH, \"AP\"))\n        os.makedirs(os.path.join(RESULTS_FILES_PATH, \"F1\"))\n        os.makedirs(os.path.join(RESULTS_FILES_PATH, \"Recall\"))\n        os.makedirs(os.path.join(RESULTS_FILES_PATH, \"Precision\"))\n    if show_animation:\n        os.makedirs(os.path.join(RESULTS_FILES_PATH, \"images\", \"detections_one_by_one\"))\n\n    ground_truth_files_list = glob.glob(GT_PATH + '/*.txt')\n    if len(ground_truth_files_list) == 0:\n        error(\"Error: No ground-truth files found!\")\n    ground_truth_files_list.sort()\n    gt_counter_per_class     = {}\n    counter_images_per_class = {}\n\n    for txt_file in ground_truth_files_list:\n        file_id     = txt_file.split(\".txt\", 1)[0]\n        file_id     = os.path.basename(os.path.normpath(file_id))\n        temp_path   = os.path.join(DR_PATH, (file_id 
+ \".txt\"))\n        if not os.path.exists(temp_path):\n            error_msg = \"Error. File not found: {}\\n\".format(temp_path)\n            error(error_msg)\n        lines_list      = file_lines_to_list(txt_file)\n        bounding_boxes  = []\n        is_difficult    = False\n        already_seen_classes = []\n        for line in lines_list:\n            try:\n                if \"difficult\" in line:\n                    class_name, x, y, w, h,angle, _difficult = line.split()\n                    is_difficult = True\n                else:\n                    class_name, x, y, w, h,angle = line.split()\n            except:\n                if \"difficult\" in line:\n                    line_split  = line.split()\n                    _difficult  = line_split[-1]\n                    angle       = line_split[-2]\n                    h           = line_split[-3]\n                    w           = line_split[-4]\n                    y           = line_split[-5]\n                    x           = line_split[-6]\n                    class_name  = \"\"\n                    for name in line_split[:-6]:\n                        class_name += name + \" \"\n                    class_name   = class_name[:-1]\n                    is_difficult = True\n                else:\n                    line_split  = line.split()\n                    angle       = line_split[-1]\n                    h           = line_split[-2]\n                    w           = line_split[-3]\n                    y           = line_split[-4]\n                    x           = line_split[-5]\n                    class_name  = \"\"\n                    for name in line_split[:-5]:\n                        class_name += name + \" \"\n                    class_name = class_name[:-1]\n\n            bbox = x + \" \" + y + \" \" + w + \" \" + h + \" \" + angle\n            if is_difficult:\n                bounding_boxes.append({\"class_name\":class_name, \"bbox\":bbox, \"used\":False, 
\"difficult\":True})\n                is_difficult = False\n            else:\n                bounding_boxes.append({\"class_name\":class_name, \"bbox\":bbox, \"used\":False})\n                if class_name in gt_counter_per_class:\n                    gt_counter_per_class[class_name] += 1\n                else:\n                    gt_counter_per_class[class_name] = 1\n\n                if class_name not in already_seen_classes:\n                    if class_name in counter_images_per_class:\n                        counter_images_per_class[class_name] += 1\n                    else:\n                        counter_images_per_class[class_name] = 1\n                    already_seen_classes.append(class_name)\n\n        with open(TEMP_FILES_PATH + \"/\" + file_id + \"_ground_truth.json\", 'w') as outfile:\n            json.dump(bounding_boxes, outfile)\n\n    gt_classes  = list(gt_counter_per_class.keys())\n    gt_classes  = sorted(gt_classes)\n    n_classes   = len(gt_classes)\n\n    dr_files_list = glob.glob(DR_PATH + '/*.txt')\n    dr_files_list.sort()\n    for class_index, class_name in enumerate(gt_classes):\n        bounding_boxes = []\n        for txt_file in dr_files_list:\n            file_id = txt_file.split(\".txt\",1)[0]\n            file_id = os.path.basename(os.path.normpath(file_id))\n            temp_path = os.path.join(GT_PATH, (file_id + \".txt\"))\n            if class_index == 0:\n                if not os.path.exists(temp_path):\n                    error_msg = \"Error. 
File not found: {}\\n\".format(temp_path)\n                    error(error_msg)\n            lines = file_lines_to_list(txt_file)\n            for line in lines:\n                try:\n                    tmp_class_name, confidence, x, y, w, h,angle = line.split()\n                except:\n                    line_split      = line.split()\n                    angle           = line_split[-1]\n                    h               = line_split[-2]\n                    w               = line_split[-3]\n                    y               = line_split[-4]\n                    x               = line_split[-5]\n                    confidence      = line_split[-6]\n                    tmp_class_name  = \"\"\n                    for name in line_split[:-6]:\n                        tmp_class_name += name + \" \"\n                    tmp_class_name  = tmp_class_name[:-1]\n\n                if tmp_class_name == class_name:\n                    bbox = x + \" \" + y + \" \" + w + \" \" + h + \" \" + angle\n                    bounding_boxes.append({\"confidence\":confidence, \"file_id\":file_id, \"bbox\":bbox})\n\n        bounding_boxes.sort(key=lambda x:float(x['confidence']), reverse=True)\n        with open(TEMP_FILES_PATH + \"/\" + class_name + \"_dr.json\", 'w') as outfile:\n            json.dump(bounding_boxes, outfile)\n\n    sum_AP = 0.0\n    ap_dictionary = {}\n    lamr_dictionary = {}\n    with open(RESULTS_FILES_PATH + \"/results.txt\", 'w') as results_file:\n        results_file.write(\"# AP and precision/recall per class\\n\")\n        count_true_positives = {}\n\n        for class_index, class_name in enumerate(gt_classes):\n            count_true_positives[class_name] = 0\n            dr_file = TEMP_FILES_PATH + \"/\" + class_name + \"_dr.json\"\n            dr_data = json.load(open(dr_file))\n\n            nd          = len(dr_data)\n            tp          = [0] * nd\n            fp          = [0] * nd\n            score       = [0] * nd\n            
score_threhold_idx = 0\n            for idx, detection in enumerate(dr_data):\n                file_id     = detection[\"file_id\"]\n                score[idx]  = float(detection[\"confidence\"])\n                if score[idx] >= score_threhold:\n                    score_threhold_idx = idx\n\n                if show_animation:\n                    ground_truth_img = glob.glob1(IMG_PATH, file_id + \".*\")\n                    if len(ground_truth_img) == 0:\n                        error(\"Error. Image not found with id: \" + file_id)\n                    elif len(ground_truth_img) > 1:\n                        error(\"Error. Multiple image with id: \" + file_id)\n                    else:\n                        img = cv2.imread(IMG_PATH + \"/\" + ground_truth_img[0])\n                        img_cumulative_path = RESULTS_FILES_PATH + \"/images/\" + ground_truth_img[0]\n                        if os.path.isfile(img_cumulative_path):\n                            img_cumulative = cv2.imread(img_cumulative_path)\n                        else:\n                            img_cumulative = img.copy()\n                        bottom_border = 60\n                        BLACK = [0, 0, 0]\n                        img = cv2.copyMakeBorder(img, 0, bottom_border, 0, 0, cv2.BORDER_CONSTANT, value=BLACK)\n\n                gt_file             = TEMP_FILES_PATH + \"/\" + file_id + \"_ground_truth.json\"\n                ground_truth_data   = json.load(open(gt_file))\n                ovmax       = -1\n                gt_match    = -1\n                bb          = [float(x) for x in detection[\"bbox\"].split()]\n                for obj in ground_truth_data:\n                    if obj[\"class_name\"] == class_name:\n                        bbgt    = [float(x) for x in obj[\"bbox\"].split() ]\n                        box1    = np.array([bb[0], bb[1], bb[2], bb[3], bb[4]], np.float32)\n                        box2    = np.array([bbgt[0], bbgt[1], bbgt[2], bbgt[3], bbgt[4]], 
np.float32)\n                        ov      = iou_rotate_calculate(box1, box2)\n                        if ov > ovmax:\n                            ovmax = ov\n                            gt_match = obj\n\n                if show_animation:\n                    status = \"NO MATCH FOUND!\" \n                    \n                min_overlap = MINOVERLAP\n                if ovmax >= min_overlap:\n                    if \"difficult\" not in gt_match:\n                        if not bool(gt_match[\"used\"]):\n                            tp[idx] = 1\n                            gt_match[\"used\"] = True\n                            count_true_positives[class_name] += 1\n                            with open(gt_file, 'w') as f:\n                                f.write(json.dumps(ground_truth_data))\n                            if show_animation:\n                                status = \"MATCH!\"\n                        else:\n                            fp[idx] = 1\n                            if show_animation:\n                                status = \"REPEATED MATCH!\"\n                else:\n                    fp[idx] = 1\n                    if ovmax > 0:\n                        status = \"INSUFFICIENT OVERLAP\"\n\n                \"\"\"\n                Draw image to show animation\n                \"\"\"\n                if show_animation:\n                    height, width = img.shape[:2]\n                    white           = (255,255,255)\n                    light_blue      = (255,200,100)\n                    green           = (0,255,0)\n                    light_red       = (30,30,255)\n                    margin          = 10\n                    # 1st line\n                    v_pos           = int(height - margin - (bottom_border / 2.0))\n                    text            = \"Image: \" + ground_truth_img[0] + \" \"\n                    img, line_width = draw_text_in_image(img, text, (margin, v_pos), white, 0)\n                    text        
    = \"Class [\" + str(class_index) + \"/\" + str(n_classes) + \"]: \" + class_name + \" \"\n                    img, line_width = draw_text_in_image(img, text, (margin + line_width, v_pos), light_blue, line_width)\n                    if ovmax != -1:\n                        color       = light_red\n                        if status   == \"INSUFFICIENT OVERLAP\":\n                            text    = \"IoU: {0:.2f}% \".format(ovmax*100) + \"< {0:.2f}% \".format(min_overlap*100)\n                        else:\n                            text    = \"IoU: {0:.2f}% \".format(ovmax*100) + \">= {0:.2f}% \".format(min_overlap*100)\n                            color   = green\n                        img, _ = draw_text_in_image(img, text, (margin + line_width, v_pos), color, line_width)\n                    # 2nd line\n                    v_pos           += int(bottom_border / 2.0)\n                    rank_pos        = str(idx+1)\n                    text            = \"Detection #rank: \" + rank_pos + \" confidence: {0:.2f}% \".format(float(detection[\"confidence\"])*100)\n                    img, line_width = draw_text_in_image(img, text, (margin, v_pos), white, 0)\n                    color           = light_red\n                    if status == \"MATCH!\":\n                        color = green\n                    text            = \"Result: \" + status + \" \"\n                    img, line_width = draw_text_in_image(img, text, (margin + line_width, v_pos), color, line_width)\n\n                    font = cv2.FONT_HERSHEY_SIMPLEX\n                    if ovmax > 0: \n                        bbgt = [ int(round(float(x))) for x in gt_match[\"bbox\"].split() ]\n                        cv2.rectangle(img,(bbgt[0],bbgt[1]),(bbgt[2],bbgt[3]),light_blue,2)\n                        cv2.rectangle(img_cumulative,(bbgt[0],bbgt[1]),(bbgt[2],bbgt[3]),light_blue,2)\n                        cv2.putText(img_cumulative, class_name, (bbgt[0],bbgt[1] - 5), font, 0.6, light_blue, 1, 
cv2.LINE_AA)\n                    bb = [int(i) for i in bb]\n                    cv2.rectangle(img,(bb[0],bb[1]),(bb[2],bb[3]),color,2)\n                    cv2.rectangle(img_cumulative,(bb[0],bb[1]),(bb[2],bb[3]),color,2)\n                    cv2.putText(img_cumulative, class_name, (bb[0],bb[1] - 5), font, 0.6, color, 1, cv2.LINE_AA)\n\n                    cv2.imshow(\"Animation\", img)\n                    cv2.waitKey(20) \n                    output_img_path = RESULTS_FILES_PATH + \"/images/detections_one_by_one/\" + class_name + \"_detection\" + str(idx) + \".jpg\"\n                    cv2.imwrite(output_img_path, img)\n                    cv2.imwrite(img_cumulative_path, img_cumulative)\n\n            cumsum = 0\n            for idx, val in enumerate(fp):\n                fp[idx] += cumsum\n                cumsum += val\n                \n            cumsum = 0\n            for idx, val in enumerate(tp):\n                tp[idx] += cumsum\n                cumsum += val\n\n            rec = tp[:]\n            for idx, val in enumerate(tp):\n                rec[idx] = float(tp[idx]) / np.maximum(gt_counter_per_class[class_name], 1)\n\n            prec = tp[:]\n            for idx, val in enumerate(tp):\n                prec[idx] = float(tp[idx]) / np.maximum((fp[idx] + tp[idx]), 1)\n\n            ap, mrec, mprec = voc_ap(rec[:], prec[:])\n            F1  = np.array(rec)*np.array(prec)*2 / np.where((np.array(prec)+np.array(rec))==0, 1, (np.array(prec)+np.array(rec)))\n\n            sum_AP  += ap\n            text    = \"{0:.2f}%\".format(ap*100) + \" = \" + class_name + \" AP \" #class_name + \" AP = {0:.2f}%\".format(ap*100)\n\n            if len(prec)>0:\n                F1_text         = \"{0:.2f}\".format(F1[score_threhold_idx]) + \" = \" + class_name + \" F1 \"\n                Recall_text     = \"{0:.2f}%\".format(rec[score_threhold_idx]*100) + \" = \" + class_name + \" Recall \"\n                Precision_text  = 
\"{0:.2f}%\".format(prec[score_threhold_idx]*100) + \" = \" + class_name + \" Precision \"\n            else:\n                F1_text         = \"0.00\" + \" = \" + class_name + \" F1 \" \n                Recall_text     = \"0.00%\" + \" = \" + class_name + \" Recall \" \n                Precision_text  = \"0.00%\" + \" = \" + class_name + \" Precision \" \n\n            rounded_prec    = [ '%.2f' % elem for elem in prec ]\n            rounded_rec     = [ '%.2f' % elem for elem in rec ]\n            results_file.write(text + \"\\n Precision: \" + str(rounded_prec) + \"\\n Recall :\" + str(rounded_rec) + \"\\n\\n\")\n            \n            if len(prec)>0:\n                print(text + \"\\t||\\tscore_threhold=\" + str(score_threhold) + \" : \" + \"F1=\" + \"{0:.2f}\".format(F1[score_threhold_idx])\\\n                    + \" ; Recall=\" + \"{0:.2f}%\".format(rec[score_threhold_idx]*100) + \" ; Precision=\" + \"{0:.2f}%\".format(prec[score_threhold_idx]*100))\n            else:\n                print(text + \"\\t||\\tscore_threhold=\" + str(score_threhold) + \" : \" + \"F1=0.00% ; Recall=0.00% ; Precision=0.00%\")\n            ap_dictionary[class_name] = ap\n\n            n_images = counter_images_per_class[class_name]\n            lamr, mr, fppi = log_average_miss_rate(np.array(rec), np.array(fp), n_images)\n            lamr_dictionary[class_name] = lamr\n\n            if draw_plot:\n                plt.plot(rec, prec, '-o')\n                area_under_curve_x = mrec[:-1] + [mrec[-2]] + [mrec[-1]]\n                area_under_curve_y = mprec[:-1] + [0.0] + [mprec[-1]]\n                plt.fill_between(area_under_curve_x, 0, area_under_curve_y, alpha=0.2, edgecolor='r')\n\n                fig = plt.gcf()\n                fig.canvas.manager.set_window_title('AP ' + class_name)\n\n                plt.title('class: ' + text)\n                plt.xlabel('Recall')\n                plt.ylabel('Precision')\n                axes = plt.gca()\n                
axes.set_xlim([0.0,1.0])\n                axes.set_ylim([0.0,1.05]) \n                fig.savefig(RESULTS_FILES_PATH + \"/AP/\" + class_name + \".png\")\n                plt.cla()\n\n                plt.plot(score, F1, \"-\", color='orangered')\n                plt.title('class: ' + F1_text + \"\\nscore_threshold=\" + str(score_threhold))\n                plt.xlabel('Score_Threshold')\n                plt.ylabel('F1')\n                axes = plt.gca()\n                axes.set_xlim([0.0,1.0])\n                axes.set_ylim([0.0,1.05])\n                fig.savefig(RESULTS_FILES_PATH + \"/F1/\" + class_name + \".png\")\n                plt.cla()\n\n                plt.plot(score, rec, \"-H\", color='gold')\n                plt.title('class: ' + Recall_text + \"\\nscore_threshold=\" + str(score_threhold))\n                plt.xlabel('Score_Threshold')\n                plt.ylabel('Recall')\n                axes = plt.gca()\n                axes.set_xlim([0.0,1.0])\n                axes.set_ylim([0.0,1.05])\n                fig.savefig(RESULTS_FILES_PATH + \"/Recall/\" + class_name + \".png\")\n                plt.cla()\n\n                plt.plot(score, prec, \"-s\", color='palevioletred')\n                plt.title('class: ' + Precision_text + \"\\nscore_threshold=\" + str(score_threhold))\n                plt.xlabel('Score_Threshold')\n                plt.ylabel('Precision')\n                axes = plt.gca()\n                axes.set_xlim([0.0,1.0])\n                axes.set_ylim([0.0,1.05])\n                fig.savefig(RESULTS_FILES_PATH + \"/Precision/\" + class_name + \".png\")\n                plt.cla()\n                \n        if show_animation:\n            cv2.destroyAllWindows()\n        if n_classes == 0:\n            print(\"No classes were detected; check the labels and whether classes_path in get_map.py has been modified.\")\n            return 0\n        results_file.write(\"\\n# mAP of all classes\\n\")\n        mAP     = sum_AP / n_classes\n        text    = \"mAP = {0:.2f}%\".format(mAP*100)\n        
results_file.write(text + \"\\n\")\n        print(text)\n\n    shutil.rmtree(TEMP_FILES_PATH)\n\n    \"\"\"\n    Count total of detection-results\n    \"\"\"\n    det_counter_per_class = {}\n    for txt_file in dr_files_list:\n        lines_list = file_lines_to_list(txt_file)\n        for line in lines_list:\n            class_name = line.split()[0]\n            if class_name in det_counter_per_class:\n                det_counter_per_class[class_name] += 1\n            else:\n                det_counter_per_class[class_name] = 1\n    dr_classes = list(det_counter_per_class.keys())\n\n    \"\"\"\n    Write number of ground-truth objects per class to results.txt\n    \"\"\"\n    with open(RESULTS_FILES_PATH + \"/results.txt\", 'a') as results_file:\n        results_file.write(\"\\n# Number of ground-truth objects per class\\n\")\n        for class_name in sorted(gt_counter_per_class):\n            results_file.write(class_name + \": \" + str(gt_counter_per_class[class_name]) + \"\\n\")\n\n    \"\"\"\n    Finish counting true positives\n    \"\"\"\n    for class_name in dr_classes:\n        if class_name not in gt_classes:\n            count_true_positives[class_name] = 0\n\n    \"\"\"\n    Write number of detected objects per class to results.txt\n    \"\"\"\n    with open(RESULTS_FILES_PATH + \"/results.txt\", 'a') as results_file:\n        results_file.write(\"\\n# Number of detected objects per class\\n\")\n        for class_name in sorted(dr_classes):\n            n_det = det_counter_per_class[class_name]\n            text = class_name + \": \" + str(n_det)\n            text += \" (tp:\" + str(count_true_positives[class_name]) + \"\"\n            text += \", fp:\" + str(n_det - count_true_positives[class_name]) + \")\\n\"\n            results_file.write(text)\n\n    \"\"\"\n    Plot the total number of occurences of each class in the ground-truth\n    \"\"\"\n    if draw_plot:\n        window_title = \"ground-truth-info\"\n        plot_title = 
\"ground-truth\\n\"\n        plot_title += \"(\" + str(len(ground_truth_files_list)) + \" files and \" + str(n_classes) + \" classes)\"\n        x_label = \"Number of objects per class\"\n        output_path = RESULTS_FILES_PATH + \"/ground-truth-info.png\"\n        to_show = False\n        plot_color = 'forestgreen'\n        draw_plot_func(\n            gt_counter_per_class,\n            n_classes,\n            window_title,\n            plot_title,\n            x_label,\n            output_path,\n            to_show,\n            plot_color,\n            '',\n            )\n\n    # \"\"\"\n    # Plot the total number of occurences of each class in the \"detection-results\" folder\n    # \"\"\"\n    # if draw_plot:\n    #     window_title = \"detection-results-info\"\n    #     # Plot title\n    #     plot_title = \"detection-results\\n\"\n    #     plot_title += \"(\" + str(len(dr_files_list)) + \" files and \"\n    #     count_non_zero_values_in_dictionary = sum(int(x) > 0 for x in list(det_counter_per_class.values()))\n    #     plot_title += str(count_non_zero_values_in_dictionary) + \" detected classes)\"\n    #     # end Plot title\n    #     x_label = \"Number of objects per class\"\n    #     output_path = RESULTS_FILES_PATH + \"/detection-results-info.png\"\n    #     to_show = False\n    #     plot_color = 'forestgreen'\n    #     true_p_bar = count_true_positives\n    #     draw_plot_func(\n    #         det_counter_per_class,\n    #         len(det_counter_per_class),\n    #         window_title,\n    #         plot_title,\n    #         x_label,\n    #         output_path,\n    #         to_show,\n    #         plot_color,\n    #         true_p_bar\n    #         )\n\n    \"\"\"\n    Draw log-average miss rate plot (Show lamr of all classes in decreasing order)\n    \"\"\"\n    if draw_plot:\n        window_title = \"lamr\"\n        plot_title = \"log-average miss rate\"\n        x_label = \"log-average miss rate\"\n        output_path = 
RESULTS_FILES_PATH + \"/lamr.png\"\n        to_show = False\n        plot_color = 'royalblue'\n        draw_plot_func(\n            lamr_dictionary,\n            n_classes,\n            window_title,\n            plot_title,\n            x_label,\n            output_path,\n            to_show,\n            plot_color,\n            \"\"\n            )\n\n    \"\"\"\n    Draw mAP plot (Show AP's of all classes in decreasing order)\n    \"\"\"\n    if draw_plot:\n        window_title = \"mAP\"\n        plot_title = \"mAP = {0:.2f}%\".format(mAP*100)\n        x_label = \"Average Precision\"\n        output_path = RESULTS_FILES_PATH + \"/mAP.png\"\n        to_show = False\n        plot_color = 'royalblue'\n        draw_plot_func(\n            ap_dictionary,\n            n_classes,\n            window_title,\n            plot_title,\n            x_label,\n            output_path,\n            to_show,\n            plot_color,\n            \"\"\n            )\n    return mAP\n\ndef preprocess_gt(gt_path, class_names):\n    image_ids   = os.listdir(gt_path)\n    results = {}\n\n    images = []\n    bboxes = []\n    for i, image_id in enumerate(image_ids):\n        lines_list      = file_lines_to_list(os.path.join(gt_path, image_id))\n        boxes_per_image = []\n        image           = {}\n        image_id        = os.path.splitext(image_id)[0]\n        image['file_name'] = image_id + '.jpg'\n        image['width']     = 1\n        image['height']    = 1\n        #-----------------------------------------------------------------#\n        #   感谢 多学学英语吧 的提醒\n        #   解决了'Results do not correspond to current coco set'问题\n        #-----------------------------------------------------------------#\n        image['id']        = str(image_id)\n\n        for line in lines_list:\n            difficult = 0 \n            if \"difficult\" in line:\n                line_split  = line.split()\n                left, top, right, bottom, _difficult = line_split[-5:]\n                
class_name  = \"\"\n                for name in line_split[:-5]:\n                    class_name += name + \" \"\n                class_name  = class_name[:-1]\n                difficult = 1\n            else:\n                line_split  = line.split()\n                left, top, right, bottom = line_split[-4:]\n                class_name  = \"\"\n                for name in line_split[:-4]:\n                    class_name += name + \" \"\n                class_name = class_name[:-1]\n            \n            left, top, right, bottom = float(left), float(top), float(right), float(bottom)\n            if class_name not in class_names:\n                continue\n            cls_id  = class_names.index(class_name) + 1\n            bbox    = [left, top, right - left, bottom - top, difficult, str(image_id), cls_id, (right - left) * (bottom - top) - 10.0]\n            boxes_per_image.append(bbox)\n        images.append(image)\n        bboxes.extend(boxes_per_image)\n    results['images']        = images\n\n    categories = []\n    for i, cls in enumerate(class_names):\n        category = {}\n        category['supercategory']   = cls\n        category['name']            = cls\n        category['id']              = i + 1\n        categories.append(category)\n    results['categories']   = categories\n\n    annotations = []\n    for i, box in enumerate(bboxes):\n        annotation = {}\n        annotation['area']        = box[-1]\n        annotation['category_id'] = box[-2]\n        annotation['image_id']    = box[-3]\n        annotation['iscrowd']     = box[-4]\n        annotation['bbox']        = box[:4]\n        annotation['id']          = i\n        annotations.append(annotation)\n    results['annotations'] = annotations\n    return results\n\ndef preprocess_dr(dr_path, class_names):\n    image_ids = os.listdir(dr_path)\n    results = []\n    for image_id in image_ids:\n        lines_list      = file_lines_to_list(os.path.join(dr_path, image_id))\n        image_id      
  = os.path.splitext(image_id)[0]\n        for line in lines_list:\n            line_split  = line.split()\n            confidence, left, top, right, bottom = line_split[-5:]\n            class_name  = \"\"\n            for name in line_split[:-5]:\n                class_name += name + \" \"\n            class_name  = class_name[:-1]\n            left, top, right, bottom = float(left), float(top), float(right), float(bottom)\n            result                  = {}\n            result[\"image_id\"]      = str(image_id)\n            if class_name not in class_names:\n                continue\n            result[\"category_id\"]   = class_names.index(class_name) + 1\n            result[\"bbox\"]          = [left, top, right - left, bottom - top]\n            result[\"score\"]         = float(confidence)\n            results.append(result)\n    return results\n \ndef get_coco_map(class_names, path):\n    GT_PATH     = os.path.join(path, 'ground-truth')\n    DR_PATH     = os.path.join(path, 'detection-results')\n    COCO_PATH   = os.path.join(path, 'coco_eval')\n\n    if not os.path.exists(COCO_PATH):\n        os.makedirs(COCO_PATH)\n\n    GT_JSON_PATH = os.path.join(COCO_PATH, 'instances_gt.json')\n    DR_JSON_PATH = os.path.join(COCO_PATH, 'instances_dr.json')\n\n    with open(GT_JSON_PATH, \"w\") as f:\n        results_gt  = preprocess_gt(GT_PATH, class_names)\n        json.dump(results_gt, f, indent=4)\n\n    with open(DR_JSON_PATH, \"w\") as f:\n        results_dr  = preprocess_dr(DR_PATH, class_names)\n        json.dump(results_dr, f, indent=4)\n        if len(results_dr) == 0:\n            print(\"No objects were detected.\")\n            return [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n\n    cocoGt      = COCO(GT_JSON_PATH)\n    cocoDt      = cocoGt.loadRes(DR_JSON_PATH)\n    cocoEval    = COCOeval(cocoGt, cocoDt, 'bbox') \n    cocoEval.evaluate()\n    cocoEval.accumulate()\n    cocoEval.summarize()\n\n    return cocoEval.stats"
  },
  {
    "path": "utils/utils_rbox.py",
    "content": "'''\nAuthor: [egrt]\nDate: 2023-01-30 19:00:28\nLastEditors: Egrt\nLastEditTime: 2023-03-13 16:22:48\nDescription: Oriented Bounding Boxes utils\n'''\n\nimport numpy as np\nimport math\nimport cv2\nimport torch\npi = np.pi\n\ndef poly2rbox(polys):\n    \"\"\"\n    Convert poly format to rbox format.\n    Args:\n        polys (array): (num_gts, [x1 y1 x2 y2 x3 y3 x4 y4]) \n    Returns:\n        rboxes (array): (num_gts, [cx cy l s θ]) \n    \"\"\"\n    assert polys.shape[-1] == 8\n    rboxes = []\n    for poly in polys:\n        poly = np.float32(poly.reshape(4, 2))\n        (x, y), (w, h), angle = cv2.minAreaRect(poly) # θ ∈ [0, 90]\n        theta = angle / 180 * pi # convert degrees to radians\n        # convert OpenCV format to long-edge format, θ ∈ [-pi/2, pi/2]\n        if w < h:\n            w, h = h, w\n            theta += np.pi / 2\n        while not np.pi / 2 > theta >= -np.pi / 2:\n            if theta >= np.pi / 2:\n                theta -= np.pi\n            else:\n                theta += np.pi\n        assert np.pi / 2 > theta >= -np.pi / 2\n        rboxes.append([x, y, w, h, theta])\n    return np.array(rboxes)\n\ndef poly2obb_np_le90(poly):\n    \"\"\"Convert polygons to oriented bounding boxes.\n    Args:\n        polys (ndarray): [x0,y0,x1,y1,x2,y2,x3,y3]\n    Returns:\n        obbs (ndarray): [x_ctr,y_ctr,w,h,angle]\n    \"\"\"\n    bboxps = np.array(poly).reshape((4, 2))\n    rbbox = cv2.minAreaRect(bboxps)\n    x, y, w, h, a = rbbox[0][0], rbbox[0][1], rbbox[1][0], rbbox[1][1], rbbox[2]\n    if w < 2 or h < 2:\n        return\n    a = a / 180 * np.pi\n    if w < h:\n        w, h = h, w\n        a += np.pi / 2\n    while not np.pi / 2 > a >= -np.pi / 2:\n        if a >= np.pi / 2:\n            a -= np.pi\n        else:\n            a += np.pi\n    assert np.pi / 2 > a >= -np.pi / 2\n    return x, y, w, h, a\n    \ndef poly2hbb(polys):\n    \"\"\"\n    Convert poly format to hbb format\n    Args:\n        polys (array/tensor): (num_gts, poly) \n\n    
Returns:\n        hbboxes (array/tensor): (num_gts, [xc yc w h]) \n    \"\"\"\n    assert polys.shape[-1] == 8\n    if isinstance(polys, torch.Tensor):\n        x = polys[:, 0::2] # (num, 4) \n        y = polys[:, 1::2]\n        x_max = torch.amax(x, dim=1) # (num)\n        x_min = torch.amin(x, dim=1)\n        y_max = torch.amax(y, dim=1)\n        y_min = torch.amin(y, dim=1)\n
        x_ctr, y_ctr = (x_max + x_min) / 2.0, (y_max + y_min) / 2.0 # (num)\n        h = y_max - y_min # (num)\n        w = x_max - x_min\n        x_ctr, y_ctr, w, h = x_ctr.reshape(-1, 1), y_ctr.reshape(-1, 1), w.reshape(-1, 1), h.reshape(-1, 1) # (num, 1)\n        hbboxes = torch.cat((x_ctr, y_ctr, w, h), dim=1)\n    else:\n        x = polys[:, 0::2] # (num, 4) \n        y = polys[:, 1::2]\n        x_max = np.amax(x, axis=1) # (num)\n        x_min = np.amin(x, axis=1) \n        y_max = np.amax(y, axis=1)\n        y_min = np.amin(y, axis=1)\n
        x_ctr, y_ctr = (x_max + x_min) / 2.0, (y_max + y_min) / 2.0 # (num)\n        h = y_max - y_min # (num)\n        w = x_max - x_min\n        x_ctr, y_ctr, w, h = x_ctr.reshape(-1, 1), y_ctr.reshape(-1, 1), w.reshape(-1, 1), h.reshape(-1, 1) # (num, 1)\n        hbboxes = np.concatenate((x_ctr, y_ctr, w, h), axis=1)\n    return hbboxes\n\ndef rbox2poly(obboxes):\n    \"\"\"Convert oriented bounding boxes to polygons.\n    Args:\n        obboxes (ndarray): [x_ctr,y_ctr,w,h,angle]\n    Returns:\n        polys (ndarray): [x0,y0,x1,y1,x2,y2,x3,y3]\n    \"\"\"\n    try:\n        center, w, h, theta = np.split(obboxes, (2, 3, 4), axis=-1)\n
    except Exception:\n        # Empty or malformed input: return a single all-zero polygon.\n        results = np.stack([0., 0., 0., 0., 0., 0., 0., 0.], axis=-1)\n        return results.reshape(1, -1)\n    Cos, Sin = np.cos(theta), np.sin(theta)\n    vector1 = np.concatenate([w / 2 * Cos, w / 2 * Sin], axis=-1)\n    vector2 = np.concatenate([-h / 2 * Sin, h / 2 * Cos], axis=-1)\n    point1 = center - vector1 - vector2\n    point2 = center + vector1 - vector2\n    point3 = center + vector1 + vector2\n    point4 = center - vector1 + vector2\n    polys = np.concatenate([point1, point2, point3, point4], axis=-1)\n    polys = get_best_begin_point(polys)\n    return polys\n\n
def cal_line_length(point1, point2):\n    \"\"\"Calculate the Euclidean distance between two points.\n    Args:\n        point1 (List): [x,y]\n        point2 (List): [x,y]\n    Returns:\n        length (float)\n    \"\"\"\n    return math.sqrt(\n        math.pow(point1[0] - point2[0], 2) +\n        math.pow(point1[1] - point2[1], 2))\n\n\n
def get_best_begin_point_single(coordinate):\n    \"\"\"Get the best begin point of a single polygon.\n    Args:\n        coordinate (List): [x1, y1, x2, y2, x3, y3, x4, y4]\n    Returns:\n        reordered coordinate (List): [x1, y1, x2, y2, x3, y3, x4, y4]\n    \"\"\"\n    x1, y1, x2, y2, x3, y3, x4, y4 = coordinate\n    xmin = min(x1, x2, x3, x4)\n    ymin = min(y1, y2, y3, y4)\n    xmax = max(x1, x2, x3, x4)\n    ymax = max(y1, y2, y3, y4)\n
    combine = [[[x1, y1], [x2, y2], [x3, y3], [x4, y4]],\n               [[x2, y2], [x3, y3], [x4, y4], [x1, y1]],\n               [[x3, y3], [x4, y4], [x1, y1], [x2, y2]],\n               [[x4, y4], [x1, y1], [x2, y2], [x3, y3]]]\n    dst_coordinate = [[xmin, ymin], [xmax, ymin], [xmax, ymax], [xmin, ymax]]\n    force = 100000000.0\n    force_flag = 0\n    for i in range(4):\n        temp_force = cal_line_length(combine[i][0], dst_coordinate[0]) \\\n                     + cal_line_length(combine[i][1], dst_coordinate[1]) \\\n                     + cal_line_length(combine[i][2], dst_coordinate[2]) \\\n                     + cal_line_length(combine[i][3], dst_coordinate[3])\n        if temp_force < force:\n            force = temp_force\n            force_flag = i\n    return np.array(combine[force_flag]).reshape(8)\n\n\n
def get_best_begin_point(coordinates):\n    \"\"\"Get the best begin points of polygons.\n    Args:\n        coordinates (ndarray): shape(n, 8).\n    Returns:\n        reordered coordinates (ndarray): shape(n, 8).\n    \"\"\"\n    coordinates = list(map(get_best_begin_point_single, coordinates.tolist()))\n    coordinates = np.array(coordinates)\n    return coordinates\n\n
def correct_rboxes(rboxes, image_shape):\n    \"\"\"Convert normalized rboxes to polygons scaled to the image size.\n    Args:\n        rboxes (ndarray): shape(n, 5), [x_ctr, y_ctr, w, h, angle].\n        image_shape (tuple): (nh, nw).\n    Returns:\n        polys (ndarray): shape(n, 8), in pixel coordinates.\n    \"\"\"\n    polys = rbox2poly(rboxes)\n    nh, nw = image_shape\n    polys[:, [0, 2, 4, 6]] *= nw\n    polys[:, [1, 3, 5, 7]] *= nh\n\n    return polys\n"
  },
  {
    "path": "utils_coco/coco_annotation.py",
    "content": "#-------------------------------------------------------#\n#   用于处理COCO数据集，根据json文件生成txt文件用于训练\n#-------------------------------------------------------#\nimport json\nimport os\nfrom collections import defaultdict\n\n#-------------------------------------------------------#\n#   指向了COCO训练集与验证集图片的路径\n#-------------------------------------------------------#\ntrain_datasets_path     = \"coco_dataset/train2017\"\nval_datasets_path       = \"coco_dataset/val2017\"\n\n#-------------------------------------------------------#\n#   指向了COCO训练集与验证集标签的路径\n#-------------------------------------------------------#\ntrain_annotation_path   = \"coco_dataset/annotations/instances_train2017.json\"\nval_annotation_path     = \"coco_dataset/annotations/instances_val2017.json\"\n\n#-------------------------------------------------------#\n#   生成的txt文件路径\n#-------------------------------------------------------#\ntrain_output_path       = \"coco_train.txt\"\nval_output_path         = \"coco_val.txt\"\n\nif __name__ == \"__main__\":\n    name_box_id = defaultdict(list)\n    id_name     = dict()\n    f           = open(train_annotation_path, encoding='utf-8')\n    data        = json.load(f)\n\n    annotations = data['annotations']\n    for ant in annotations:\n        id = ant['image_id']\n        name = os.path.join(train_datasets_path, '%012d.jpg' % id)\n        cat = ant['category_id']\n        if cat >= 1 and cat <= 11:\n            cat = cat - 1\n        elif cat >= 13 and cat <= 25:\n            cat = cat - 2\n        elif cat >= 27 and cat <= 28:\n            cat = cat - 3\n        elif cat >= 31 and cat <= 44:\n            cat = cat - 5\n        elif cat >= 46 and cat <= 65:\n            cat = cat - 6\n        elif cat == 67:\n            cat = cat - 7\n        elif cat == 70:\n            cat = cat - 9\n        elif cat >= 72 and cat <= 82:\n            cat = cat - 10\n        elif cat >= 84 and cat <= 90:\n            cat = cat - 11\n        
name_box_id[name].append([ant['bbox'], cat])\n\n    f = open(train_output_path, 'w')\n    for key in name_box_id.keys():\n        f.write(key)\n        box_infos = name_box_id[key]\n        for info in box_infos:\n            x_min = int(info[0][0])\n            y_min = int(info[0][1])\n            x_max = x_min + int(info[0][2])\n            y_max = y_min + int(info[0][3])\n\n            box_info = \" %d,%d,%d,%d,%d\" % (\n                x_min, y_min, x_max, y_max, int(info[1]))\n            f.write(box_info)\n        f.write('\\n')\n    f.close()\n\n    name_box_id = defaultdict(list)\n    id_name     = dict()\n    f           = open(val_annotation_path, encoding='utf-8')\n    data        = json.load(f)\n\n    annotations = data['annotations']\n    for ant in annotations:\n        id = ant['image_id']\n        name = os.path.join(val_datasets_path, '%012d.jpg' % id)\n        cat = ant['category_id']\n        if cat >= 1 and cat <= 11:\n            cat = cat - 1\n        elif cat >= 13 and cat <= 25:\n            cat = cat - 2\n        elif cat >= 27 and cat <= 28:\n            cat = cat - 3\n        elif cat >= 31 and cat <= 44:\n            cat = cat - 5\n        elif cat >= 46 and cat <= 65:\n            cat = cat - 6\n        elif cat == 67:\n            cat = cat - 7\n        elif cat == 70:\n            cat = cat - 9\n        elif cat >= 72 and cat <= 82:\n            cat = cat - 10\n        elif cat >= 84 and cat <= 90:\n            cat = cat - 11\n        name_box_id[name].append([ant['bbox'], cat])\n\n    f = open(val_output_path, 'w')\n    for key in name_box_id.keys():\n        f.write(key)\n        box_infos = name_box_id[key]\n        for info in box_infos:\n            x_min = int(info[0][0])\n            y_min = int(info[0][1])\n            x_max = x_min + int(info[0][2])\n            y_max = y_min + int(info[0][3])\n\n            box_info = \" %d,%d,%d,%d,%d\" % (\n                x_min, y_min, x_max, y_max, int(info[1]))\n            
f.write(box_info)\n        f.write('\\n')\n    f.close()\n"
  },
  {
    "path": "utils_coco/get_map_coco.py",
    "content": "import json\nimport os\n\nimport numpy as np\nimport torch\nfrom PIL import Image\nfrom pycocotools.coco import COCO\nfrom pycocotools.cocoeval import COCOeval\nfrom tqdm import tqdm\n\nfrom utils.utils import cvtColor, preprocess_input, resize_image\nfrom yolo import YOLO\n\n#---------------------------------------------------------------------------#\n#   map_mode用于指定该文件运行时计算的内容\n#   map_mode为0代表整个map计算流程，包括获得预测结果、计算map。\n#   map_mode为1代表仅仅获得预测结果。\n#   map_mode为2代表仅仅获得计算map。\n#---------------------------------------------------------------------------#\nmap_mode            = 0\n#-------------------------------------------------------#\n#   指向了验证集标签与图片路径\n#-------------------------------------------------------#\ncocoGt_path         = 'coco_dataset/annotations/instances_val2017.json'\ndataset_img_path    = 'coco_dataset/val2017'\n#-------------------------------------------------------#\n#   结果输出的文件夹，默认为map_out\n#-------------------------------------------------------#\ntemp_save_path      = 'map_out/coco_eval'\n\nclass mAP_YOLO(YOLO):\n    #---------------------------------------------------#\n    #   检测图片\n    #---------------------------------------------------#\n    def detect_image(self, image_id, image, results, clsid2catid):\n        #---------------------------------------------------#\n        #   计算输入图片的高和宽\n        #---------------------------------------------------#\n        image_shape = np.array(np.shape(image)[0:2])\n        #---------------------------------------------------------#\n        #   在这里将图像转换成RGB图像，防止灰度图在预测时报错。\n        #   代码仅仅支持RGB图像的预测，所有其它类型的图像都会转化成RGB\n        #---------------------------------------------------------#\n        image       = cvtColor(image)\n        #---------------------------------------------------------#\n        #   给图像增加灰条，实现不失真的resize\n        #   也可以直接resize进行识别\n        #---------------------------------------------------------#\n        image_data  = resize_image(image, 
(self.input_shape[1],self.input_shape[0]), self.letterbox_image)\n        #---------------------------------------------------------#\n        #   Add the batch_size dimension\n        #---------------------------------------------------------#\n        image_data  = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0)\n\n
        with torch.no_grad():\n            images = torch.from_numpy(image_data)\n            if self.cuda:\n                images = images.cuda()\n            #---------------------------------------------------------#\n            #   Feed the image into the network for prediction!\n            #---------------------------------------------------------#\n            outputs = self.net(images)\n            outputs = self.bbox_util.decode_box(outputs)\n            #---------------------------------------------------------#\n            #   Stack the prediction boxes, then run non-maximum suppression\n            #---------------------------------------------------------#\n            outputs = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, \n                        image_shape, self.letterbox_image, conf_thres = self.confidence, nms_thres = self.nms_iou)\n                                                    \n            if outputs[0] is None: \n                return results\n\n            top_label   = np.array(outputs[0][:, 6], dtype = 'int32')\n            top_conf    = outputs[0][:, 4] * outputs[0][:, 5]\n            top_boxes   = outputs[0][:, :4]\n\n
        for i, c in enumerate(top_label):\n            result                      = {}\n            top, left, bottom, right    = top_boxes[i]\n\n            result[\"image_id\"]      = int(image_id)\n            result[\"category_id\"]   = clsid2catid[c]\n            result[\"bbox\"]          = [float(left),float(top),float(right-left),float(bottom-top)]\n            result[\"score\"]         = float(top_conf[i])\n            results.append(result)\n        return results\n\nif __name__ 
== \"__main__\":\n    if not os.path.exists(temp_save_path):\n        os.makedirs(temp_save_path)\n\n    cocoGt      = COCO(cocoGt_path)\n    ids         = list(cocoGt.imgToAnns.keys())\n    clsid2catid = cocoGt.getCatIds()\n\n    if map_mode == 0 or map_mode == 1:\n        yolo = mAP_YOLO(confidence = 0.001, nms_iou = 0.65)\n\n        with open(os.path.join(temp_save_path, 'eval_results.json'),\"w\") as f:\n            results = []\n            for image_id in tqdm(ids):\n                image_path  = os.path.join(dataset_img_path, cocoGt.loadImgs(image_id)[0]['file_name'])\n                image       = Image.open(image_path)\n                results     = yolo.detect_image(image_id, image, results, clsid2catid)\n            json.dump(results, f)\n\n    if map_mode == 0 or map_mode == 2:\n        cocoDt      = cocoGt.loadRes(os.path.join(temp_save_path, 'eval_results.json'))\n        cocoEval    = COCOeval(cocoGt, cocoDt, 'bbox') \n        cocoEval.evaluate()\n        cocoEval.accumulate()\n        cocoEval.summarize()\n        print(\"Get map done.\")\n"
  },
  {
    "path": "voc_annotation.py",
    "content": "import os\nimport random\nimport xml.etree.ElementTree as ET\n\nimport numpy as np\n\nfrom utils.utils import get_classes\n\n#--------------------------------------------------------------------------------------------------------------------------------#\n#   annotation_mode用于指定该文件运行时计算的内容\n#   annotation_mode为0代表整个标签处理过程，包括获得VOCdevkit/VOC2007/ImageSets里面的txt以及训练用的2007_train.txt、2007_val.txt\n#   annotation_mode为1代表获得VOCdevkit/VOC2007/ImageSets里面的txt\n#   annotation_mode为2代表获得训练用的2007_train.txt、2007_val.txt\n#--------------------------------------------------------------------------------------------------------------------------------#\nannotation_mode     = 0\n#-------------------------------------------------------------------#\n#   必须要修改，用于生成2007_train.txt、2007_val.txt的目标信息\n#   与训练和预测所用的classes_path一致即可\n#   如果生成的2007_train.txt里面没有目标信息\n#   那么就是因为classes没有设定正确\n#   仅在annotation_mode为0和2的时候有效\n#-------------------------------------------------------------------#\nclasses_path        = 'model_data/ssdd_classes.txt'\n#--------------------------------------------------------------------------------------------------------------------------------#\n#   trainval_percent用于指定(训练集+验证集)与测试集的比例，默认情况下 (训练集+验证集):测试集 = 9:1\n#   train_percent用于指定(训练集+验证集)中训练集与验证集的比例，默认情况下 训练集:验证集 = 9:1\n#   仅在annotation_mode为0和1的时候有效\n#--------------------------------------------------------------------------------------------------------------------------------#\ntrainval_percent    = 0.9\ntrain_percent       = 0.9\n#-------------------------------------------------------#\n#   指向VOC数据集所在的文件夹\n#   默认指向根目录下的VOC数据集\n#-------------------------------------------------------#\nVOCdevkit_path  = 'VOCdevkit'\n\nVOCdevkit_sets  = [('2007', 'train'), ('2007', 'val')]\nclasses, _      = get_classes(classes_path)\n\n#-------------------------------------------------------#\n#   统计目标数量\n#-------------------------------------------------------#\nphoto_nums  = 
np.zeros(len(VOCdevkit_sets))\nnums        = np.zeros(len(classes))\ndef convert_annotation(year, image_id, list_file):\n    in_file = open(os.path.join(VOCdevkit_path, 'VOC%s/Annotations/%s.xml'%(year, image_id)), encoding='utf-8')\n    tree = ET.parse(in_file)\n    root = tree.getroot()\n\n
    for obj in root.iter('object'):\n        difficult = 0 \n        if obj.find('difficult') is not None:\n            difficult = obj.find('difficult').text\n        cls = obj.find('name').text\n        if cls not in classes or int(difficult)==1:\n            continue\n        cls_id = classes.index(cls)\n        xmlbox = obj.find('rotated_bndbox')\n        b = (int(float(xmlbox.find('x1').text)), int(float(xmlbox.find('y1').text)), \\\n            int(float(xmlbox.find('x2').text)), int(float(xmlbox.find('y2').text)), \\\n            int(float(xmlbox.find('x3').text)), int(float(xmlbox.find('y3').text)), \\\n            int(float(xmlbox.find('x4').text)), int(float(xmlbox.find('y4').text)))\n        list_file.write(\" \" + \",\".join([str(a) for a in b]) + ',' + str(cls_id))\n        \n        nums[classes.index(cls)] = nums[classes.index(cls)] + 1\n        \n
if __name__ == \"__main__\":\n    random.seed(0)\n    if \" \" in os.path.abspath(VOCdevkit_path):\n        raise ValueError(\"The dataset folder path and the image names must not contain spaces, otherwise model training will not work properly; please fix them.\")\n\n
    if annotation_mode == 0 or annotation_mode == 1:\n        print(\"Generate txt in ImageSets.\")\n        xmlfilepath     = os.path.join(VOCdevkit_path, 'VOC2007/Annotations')\n        saveBasePath    = os.path.join(VOCdevkit_path, 'VOC2007/ImageSets/Main')\n        temp_xml        = os.listdir(xmlfilepath)\n        total_xml       = []\n        for xml in temp_xml:\n            if xml.endswith(\".xml\"):\n                total_xml.append(xml)\n\n
        num     = len(total_xml)  \n        indices = range(num)  \n        tv      = int(num*trainval_percent)  \n        tr      = int(tv*train_percent)  \n        trainval= random.sample(indices,tv)  \n        train   = random.sample(trainval,tr)  \n        \n        print(\"train and val size\",tv)\n        print(\"train size\",tr)\n        ftrainval   = open(os.path.join(saveBasePath,'trainval.txt'), 'w')  \n        ftest       = open(os.path.join(saveBasePath,'test.txt'), 'w')  \n        ftrain      = open(os.path.join(saveBasePath,'train.txt'), 'w')  \n        fval        = open(os.path.join(saveBasePath,'val.txt'), 'w')  \n        \n
        for i in indices:  \n            name=total_xml[i][:-4]+'\\n'  \n            if i in trainval:  \n                ftrainval.write(name)  \n                if i in train:  \n                    ftrain.write(name)  \n                else:  \n                    fval.write(name)  \n            else:  \n                ftest.write(name)  \n        \n        ftrainval.close()  \n        ftrain.close()  \n        fval.close()  \n        ftest.close()\n        print(\"Generate txt in ImageSets done.\")\n\n
    if annotation_mode == 0 or annotation_mode == 2:\n        print(\"Generate 2007_train.txt and 2007_val.txt for train.\")\n        type_index = 0\n        for year, image_set in VOCdevkit_sets:\n            image_ids = open(os.path.join(VOCdevkit_path, 'VOC%s/ImageSets/Main/%s.txt'%(year, image_set)), encoding='utf-8').read().strip().split()\n            list_file = open('%s_%s.txt'%(year, image_set), 'w', encoding='utf-8')\n            for image_id in image_ids:\n                list_file.write('%s/VOC%s/JPEGImages/%s.jpg'%(os.path.abspath(VOCdevkit_path), year, image_id))\n\n                convert_annotation(year, image_id, list_file)\n                list_file.write('\\n')\n            photo_nums[type_index] = len(image_ids)\n            type_index += 1\n            list_file.close()\n        print(\"Generate 2007_train.txt and 2007_val.txt for train done.\")\n        \n
        def printTable(List1, List2):\n            for i in range(len(List1[0])):\n                print(\"|\", end=' ')\n                for j in range(len(List1)):\n                    print(List1[j][i].rjust(int(List2[j])), end=' ')\n                    print(\"|\", end=' ')\n                print()\n\n
        str_nums = [str(int(x)) for x in nums]\n        tableData = [\n            classes, str_nums\n        ]\n        colWidths = [0]*len(tableData)\n        len1 = 0\n        for i in range(len(tableData)):\n            for j in range(len(tableData[i])):\n                if len(tableData[i][j]) > colWidths[i]:\n                    colWidths[i] = len(tableData[i][j])\n        printTable(tableData, colWidths)\n\n
        if photo_nums[0] <= 500:\n            print(\"The training set has fewer than 500 images, which is quite small; please use a larger number of training epochs so that enough gradient descent steps are performed.\")\n\n
        if np.sum(nums) == 0:\n            print(\"No objects were found in the dataset; please modify classes_path to match your own dataset and make sure the label names are correct, otherwise training will have no effect!\")\n            print(\"No objects were found in the dataset; please modify classes_path to match your own dataset and make sure the label names are correct, otherwise training will have no effect!\")\n            print(\"No objects were found in the dataset; please modify classes_path to match your own dataset and make sure the label names are correct, otherwise training will have no effect!\")\n            print(\"(Important things are said three times.)\")\n"
  },
  {
    "path": "yolo.py",
    "content": "import colorsys\nimport os\nimport time\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom PIL import ImageDraw, ImageFont\n\nfrom nets.yolo import YoloBody\nfrom utils.utils import (cvtColor, get_anchors, get_classes, preprocess_input,\n                         resize_image, show_config)\nfrom utils.utils_bbox import DecodeBox\nfrom utils.utils_rbox import *\n'''\n训练自己的数据集必看注释！\n'''\nclass YOLO(object):\n    _defaults = {\n        #--------------------------------------------------------------------------#\n        #   使用自己训练好的模型进行预测一定要修改model_path和classes_path！\n        #   model_path指向logs文件夹下的权值文件，classes_path指向model_data下的txt\n        #\n        #   训练好后logs文件夹下存在多个权值文件，选择验证集损失较低的即可。\n        #   验证集损失较低不代表mAP较高，仅代表该权值在验证集上泛化性能较好。\n        #   如果出现shape不匹配，同时要注意训练时的model_path和classes_path参数的修改\n        #--------------------------------------------------------------------------#\n        \"model_path\"        : 'model_data/yolov7_obb_ssdd.pth',\n        \"classes_path\"      : 'model_data/ssdd_classes.txt',\n        #---------------------------------------------------------------------#\n        #   anchors_path代表先验框对应的txt文件，一般不修改。\n        #   anchors_mask用于帮助代码找到对应的先验框，一般不修改。\n        #---------------------------------------------------------------------#\n        \"anchors_path\"      : 'model_data/yolo_anchors.txt',\n        \"anchors_mask\"      : [[6, 7, 8], [3, 4, 5], [0, 1, 2]],\n        #---------------------------------------------------------------------#\n        #   输入图片的大小，必须为32的倍数。\n        #---------------------------------------------------------------------#\n        \"input_shape\"       : [640, 640],\n        #------------------------------------------------------#\n        #   所使用到的yolov7的版本，本仓库一共提供两个：\n        #   l : 对应yolov7\n        #   x : 对应yolov7_x\n        #------------------------------------------------------#\n        \"phi\"               : 'l',\n        
#---------------------------------------------------------------------#\n        #   Only prediction boxes with a score greater than the confidence are kept\n        #---------------------------------------------------------------------#\n        \"confidence\"        : 0.5,\n        #---------------------------------------------------------------------#\n        #   The nms_iou value used for non-maximum suppression\n        #---------------------------------------------------------------------#\n        \"nms_iou\"           : 0.3,\n
        #---------------------------------------------------------------------#\n        #   Controls whether letterbox_image is used for a distortion-free resize of the input image;\n        #   repeated tests showed that turning letterbox_image off and resizing directly works better\n        #---------------------------------------------------------------------#\n        \"letterbox_image\"   : True,\n        #-------------------------------#\n        #   Whether to use CUDA\n        #   Set to False if there is no GPU\n        #-------------------------------#\n        \"cuda\"              : False,\n    }\n\n
    @classmethod\n    def get_defaults(cls, n):\n        if n in cls._defaults:\n            return cls._defaults[n]\n        else:\n            return \"Unrecognized attribute name '\" + n + \"'\"\n\n    #---------------------------------------------------#\n    #   Initialize YOLO\n    #---------------------------------------------------#\n    def __init__(self, **kwargs):\n        self.__dict__.update(self._defaults)\n        for name, value in kwargs.items():\n            setattr(self, name, value)\n            self._defaults[name] = value \n            \n        #---------------------------------------------------#\n        #   Get the numbers of classes and anchors\n        #---------------------------------------------------#\n        self.class_names, self.num_classes  = get_classes(self.classes_path)\n        self.anchors, self.num_anchors      = get_anchors(self.anchors_path)\n        self.bbox_util                      = DecodeBox(self.anchors, self.num_classes, (self.input_shape[0], self.input_shape[1]), self.anchors_mask)\n        
#---------------------------------------------------#\n        #   Set different colors for the drawn boxes\n        #---------------------------------------------------#\n        hsv_tuples = [(x / self.num_classes, 1., 1.) for x in range(self.num_classes)]\n        self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))\n        self.colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), self.colors))\n        self.generate()\n\n        show_config(**self._defaults)\n\n
    #---------------------------------------------------#\n    #   Build the model\n    #---------------------------------------------------#\n    def generate(self, onnx=False):\n        #---------------------------------------------------#\n        #   Build the yolo model and load its weights\n        #---------------------------------------------------#\n        self.net    = YoloBody(self.anchors_mask, self.num_classes, self.phi)\n        device      = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n        self.net.load_state_dict(torch.load(self.model_path, map_location=device))\n        self.net    = self.net.fuse().eval()\n        print('{} model and classes loaded.'.format(self.model_path))\n        if not onnx:\n            if self.cuda:\n                self.net = nn.DataParallel(self.net)\n                self.net = self.net.cuda()\n\n
    #---------------------------------------------------#\n    #   Detect an image\n    #---------------------------------------------------#\n    def detect_image(self, image, crop = False, count = False):\n        #---------------------------------------------------#\n        #   Get the height and width of the input image\n        #---------------------------------------------------#\n        image_shape = np.array(np.shape(image)[0:2])\n        #---------------------------------------------------------#\n        #   Convert the image to RGB here to avoid errors with grayscale images during prediction.\n        #   The code only supports prediction on RGB images; all other formats are converted to RGB\n        #---------------------------------------------------------#\n        image       = cvtColor(image)\n      
  #---------------------------------------------------------#\n        #   Add gray bars to the image for a distortion-free resize.\n        #   Alternatively, a plain resize can be used directly\n        #---------------------------------------------------------#\n        image_data  = resize_image(image, (self.input_shape[1], self.input_shape[0]), self.letterbox_image)\n        #---------------------------------------------------------#\n        #   Add the batch_size dimension\n        #   h, w, 3 => 3, h, w => 1, 3, h, w\n        #---------------------------------------------------------#\n        image_data  = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0)\n\n
        with torch.no_grad():\n            images = torch.from_numpy(image_data)\n            if self.cuda:\n                images = images.cuda()\n            #---------------------------------------------------------#\n            #   Feed the image into the network for prediction!\n            #---------------------------------------------------------#\n            outputs = self.net(images)\n            outputs = self.bbox_util.decode_box(outputs)\n            #---------------------------------------------------------#\n            #   Stack the prediction boxes, then run non-maximum suppression\n            #---------------------------------------------------------#\n            results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, \n                        image_shape, self.letterbox_image, conf_thres = self.confidence, nms_thres = self.nms_iou)\n                                                    \n            if results[0] is None: \n                return image\n\n            top_label   = np.array(results[0][:, 7], dtype = 'int32')\n            top_conf    = results[0][:, 5] * results[0][:, 6]\n            top_rboxes  = results[0][:, :5]\n            top_polys   = rbox2poly(top_rboxes)\n
        #---------------------------------------------------------#\n        #   Set the font and box thickness\n        #---------------------------------------------------------#\n        font        = ImageFont.truetype(font='model_data/simhei.ttf', size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32'))\n        thickness   = int(max((image.size[0] + image.size[1]) // np.mean(self.input_shape), 1))\n        #---------------------------------------------------------#\n        #   Counting\n        #---------------------------------------------------------#\n        if count:\n            print(\"top_label:\", top_label)\n            classes_nums    = np.zeros([self.num_classes])\n            for i in range(self.num_classes):\n                num = np.sum(top_label == i)\n                if num > 0:\n                    print(self.class_names[i], \" : \", num)\n                classes_nums[i] = num\n            print(\"classes_nums:\", classes_nums)\n
        #---------------------------------------------------------#\n        #   Draw on the image\n        #---------------------------------------------------------#\n        for i, c in list(enumerate(top_label)):\n            predicted_class = self.class_names[int(c)]\n            poly            = top_polys[i].astype(np.int32)\n            score           = top_conf[i]\n\n            polygon_list = list(poly)\n            label = '{} {:.2f}'.format(predicted_class, score)\n            draw = ImageDraw.Draw(image)\n            label_size = draw.textsize(label, font)\n            label = label.encode('utf-8')\n            print(label, polygon_list)\n            \n            text_origin = np.array([poly[0], poly[1]], np.int32)\n\n            draw.polygon(xy=polygon_list, outline=self.colors[c])\n            draw.text(text_origin, str(label,'UTF-8'), fill=self.colors[c], font=font)\n            del draw\n\n        return image\n\n
    def get_FPS(self, image, test_interval):\n        image_shape = np.array(np.shape(image)[0:2])\n        #---------------------------------------------------------#\n        #   Convert the image to RGB here to avoid errors with grayscale images during prediction.\n        #   The code only supports prediction on RGB images; all other formats are converted to RGB\n        
#---------------------------------------------------------#\n        image       = cvtColor(image)\n        #---------------------------------------------------------#\n        #   Add gray bars to the image for a distortion-free resize.\n        #   Alternatively, a plain resize can be used directly\n        #---------------------------------------------------------#\n        image_data  = resize_image(image, (self.input_shape[1], self.input_shape[0]), self.letterbox_image)\n        #---------------------------------------------------------#\n        #   Add the batch_size dimension\n        #---------------------------------------------------------#\n        image_data  = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0)\n\n
        with torch.no_grad():\n            images = torch.from_numpy(image_data)\n            if self.cuda:\n                images = images.cuda()\n            #---------------------------------------------------------#\n            #   Feed the image into the network for prediction!\n            #---------------------------------------------------------#\n            outputs = self.net(images)\n            outputs = self.bbox_util.decode_box(outputs)\n            #---------------------------------------------------------#\n            #   Stack the prediction boxes, then run non-maximum suppression\n            #---------------------------------------------------------#\n            results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, \n                        image_shape, self.letterbox_image, conf_thres=self.confidence, nms_thres=self.nms_iou)\n                                                    \n
        t1 = time.time()\n        for _ in range(test_interval):\n            with torch.no_grad():\n                #---------------------------------------------------------#\n                #   Feed the image into the network for prediction!\n                #---------------------------------------------------------#\n                outputs = self.net(images)\n                outputs = self.bbox_util.decode_box(outputs)\n          
      #---------------------------------------------------------#\n                #   Stack the prediction boxes, then run non-maximum suppression\n                #---------------------------------------------------------#\n                results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, \n                            image_shape, self.letterbox_image, conf_thres=self.confidence, nms_thres=self.nms_iou)\n                            \n        t2 = time.time()\n        tact_time = (t2 - t1) / test_interval\n        return tact_time\n\n
    def detect_heatmap(self, image, heatmap_save_path):\n        import cv2\n        import matplotlib.pyplot as plt\n        def sigmoid(x):\n            y = 1.0 / (1.0 + np.exp(-x))\n            return y\n        #---------------------------------------------------------#\n        #   Convert the image to RGB here to avoid errors with grayscale images during prediction.\n        #   The code only supports prediction on RGB images; all other formats are converted to RGB\n        #---------------------------------------------------------#\n        image       = cvtColor(image)\n        #---------------------------------------------------------#\n        #   Add gray bars to the image for a distortion-free resize.\n        #   Alternatively, a plain resize can be used directly\n        #---------------------------------------------------------#\n        image_data  = resize_image(image, (self.input_shape[1],self.input_shape[0]), self.letterbox_image)\n        #---------------------------------------------------------#\n        #   Add the batch_size dimension\n        #---------------------------------------------------------#\n        image_data  = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0)\n\n
        with torch.no_grad():\n            images = torch.from_numpy(image_data)\n            if self.cuda:\n                images = images.cuda()\n            #---------------------------------------------------------#\n            #   Feed the image into the network for prediction!\n            #---------------------------------------------------------#\n            outputs = self.net(images)\n        
\n        plt.imshow(image, alpha=1)\n        plt.axis('off')\n        mask    = np.zeros((image.size[1], image.size[0]))\n        for sub_output in outputs:\n            sub_output = sub_output.cpu().numpy()\n            b, c, h, w = np.shape(sub_output)\n            sub_output = np.transpose(np.reshape(sub_output, [b, 3, -1, h, w]), [0, 3, 4, 1, 2])[0]\n            score      = np.max(sigmoid(sub_output[..., 4]), -1)\n            score      = cv2.resize(score, (image.size[0], image.size[1]))\n            normed_score    = (score * 255).astype('uint8')\n            mask            = np.maximum(mask, normed_score)\n            \n        plt.imshow(mask, alpha=0.5, interpolation='nearest', cmap=\"jet\")\n\n        plt.axis('off')\n        plt.subplots_adjust(top=1, bottom=0, right=1,  left=0, hspace=0, wspace=0)\n        plt.margins(0, 0)\n        plt.savefig(heatmap_save_path, dpi=200, bbox_inches='tight', pad_inches = -0.1)\n        print(\"Save to the \" + heatmap_save_path)\n        plt.show()\n\n    def convert_to_onnx(self, simplify, model_path):\n        import onnx\n        self.generate(onnx=True)\n\n        im                  = torch.zeros(1, 3, *self.input_shape).to('cpu')  # image size(1, 3, 512, 512) BCHW\n        input_layer_names   = [\"images\"]\n        output_layer_names  = [\"output\"]\n        \n        # Export the model\n        print(f'Starting export with onnx {onnx.__version__}.')\n        torch.onnx.export(self.net,\n                        im,\n                        f               = model_path,\n                        verbose         = False,\n                        opset_version   = 12,\n                        training        = torch.onnx.TrainingMode.EVAL,\n                        do_constant_folding = True,\n                        input_names     = input_layer_names,\n                        output_names    = output_layer_names,\n                        dynamic_axes    = None)\n\n        # Checks\n        model_onnx = 
onnx.load(model_path)  # load onnx model\n        onnx.checker.check_model(model_onnx)  # check onnx model\n\n        # Simplify onnx\n        if simplify:\n            import onnxsim\n            print(f'Simplifying with onnx-simplifier {onnxsim.__version__}.')\n            model_onnx, check = onnxsim.simplify(\n                model_onnx,\n                dynamic_input_shape=False,\n                input_shapes=None)\n            assert check, 'assert check failed'\n            onnx.save(model_onnx, model_path)\n\n        print('Onnx model save as {}'.format(model_path))\n\n    def get_map_txt(self, image_id, image, class_names, map_out_path):\n        f = open(os.path.join(map_out_path, \"detection-results/\"+image_id+\".txt\"), \"w\", encoding='utf-8') \n        image_shape = np.array(np.shape(image)[0:2])\n        #---------------------------------------------------------#\n        #   在这里将图像转换成RGB图像，防止灰度图在预测时报错。\n        #   代码仅仅支持RGB图像的预测，所有其它类型的图像都会转化成RGB\n        #---------------------------------------------------------#\n        image       = cvtColor(image)\n        #---------------------------------------------------------#\n        #   给图像增加灰条，实现不失真的resize\n        #   也可以直接resize进行识别\n        #---------------------------------------------------------#\n        image_data  = resize_image(image, (self.input_shape[1], self.input_shape[0]), self.letterbox_image)\n        #---------------------------------------------------------#\n        #   添加上batch_size维度\n        #---------------------------------------------------------#\n        image_data  = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0)\n\n        with torch.no_grad():\n            images = torch.from_numpy(image_data)\n            if self.cuda:\n                images = images.cuda()\n            #---------------------------------------------------------#\n            #   将图像输入网络当中进行预测！\n            
#---------------------------------------------------------#\n            outputs = self.net(images)\n            outputs = self.bbox_util.decode_box(outputs)\n            #---------------------------------------------------------#\n            #   将预测框进行堆叠，然后进行非极大抑制\n            #---------------------------------------------------------#\n            results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, \n                        image_shape, self.letterbox_image, conf_thres = self.confidence, nms_thres = self.nms_iou)\n                                                    \n            if results[0] is None: \n                return \n\n            top_label   = np.array(results[0][:, 7], dtype = 'int32')\n            top_conf    = results[0][:, 5] * results[0][:, 6]\n            top_rboxes  = results[0][:, :5]\n        for i, c in list(enumerate(top_label)):\n            predicted_class = self.class_names[int(c)]\n            obb             = top_rboxes[i]\n            score           = str(top_conf[i])\n\n            xc, yc, w, h, angle = obb\n\n            if predicted_class not in class_names:\n                continue\n\n            f.write(\"%s %s %s %s %s %s %s\\n\" % (predicted_class, score[:6], str(int(xc)), str(int(yc)), str(int(w)), str(int(h)), str(math.degrees(angle))))\n\n        f.close()\n        return \n"
  },
  {
    "path": "常见问题汇总.md",
    "content": "问题汇总的博客地址为[https://blog.csdn.net/weixin_44791964/article/details/107517428](https://blog.csdn.net/weixin_44791964/article/details/107517428)。\n\n# 问题汇总\n## 1、下载问题\n### a、代码下载\n**问：up主，可以给我发一份代码吗，代码在哪里下载啊？ \n答：Github上的地址就在视频简介里。复制一下就能进去下载了。**\n\n**问：up主，为什么我下载的代码提示压缩包损坏？\n答：重新去Github下载。**\n\n**问：up主，为什么我下载的代码和你在视频以及博客上的代码不一样？\n答：我常常会对代码进行更新，最终以实际的代码为准。**\n\n### b、 权值下载\n**问：up主，为什么我下载的代码里面，model_data下面没有.pth或者.h5文件？ \n答：我一般会把权值上传到Github和百度网盘，在GITHUB的README里面就能找到。**\n\n### c、 数据集下载\n**问：up主，XXXX数据集在哪里下载啊？\n答：一般数据集的下载地址我会放在README里面，基本上都有，没有的话请及时联系我添加，直接发github的issue即可**。\n\n## 2、环境配置问题\n### a、20系列及以下显卡环境配置\n**pytorch代码对应的pytorch版本为1.2，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/106037141](https://blog.csdn.net/weixin_44791964/article/details/106037141)。\n\n**keras代码对应的tensorflow版本为1.13.2，keras版本是2.1.5，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/104702142](https://blog.csdn.net/weixin_44791964/article/details/104702142)。\n\n**tf2代码对应的tensorflow版本为2.2.0，无需安装keras，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/109161493](https://blog.csdn.net/weixin_44791964/article/details/109161493)。\n\n**问：你的代码某某某版本的tensorflow和pytorch能用嘛？\n答：最好按照我推荐的配置，配置教程也有！其它版本的我没有试过！可能出现问题但是一般问题不大。仅需要改少量代码即可。**\n\n### b、30系列显卡环境配置\n30系显卡由于框架更新不可使用上述环境配置教程。\n当前我已经测试的可以用的30显卡配置如下：\n**pytorch代码对应的pytorch版本为1.7.0，cuda为11.0，cudnn为8.0.5，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/120668551](https://blog.csdn.net/weixin_44791964/article/details/120668551)。\n\n**keras代码无法在win10下配置cuda11，在ubuntu下可以百度查询一下，配置tensorflow版本为1.15.4，keras版本是2.1.5或者2.3.1（少量函数接口不同，代码可能还需要少量调整。）**\n\n**tf2代码对应的tensorflow版本为2.4.0，cuda为11.0，cudnn为8.0.5，博客地址对应为**[https://blog.csdn.net/weixin_44791964/article/details/120657664](https://blog.csdn.net/weixin_44791964/article/details/120657664)。\n\n### 
c、CPU环境配置\n**pytorch代码对应的pytorch-cpu版本为1.2，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/120655098](https://blog.csdn.net/weixin_44791964/article/details/120655098)\n\n**keras代码对应的tensorflow-cpu版本为1.13.2，keras版本是2.1.5，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/120653717](https://blog.csdn.net/weixin_44791964/article/details/120653717)。\n\n**tf2代码对应的tensorflow-cpu版本为2.2.0，无需安装keras，博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/120656291](https://blog.csdn.net/weixin_44791964/article/details/120656291)。\n\n\n### d、GPU利用问题与环境使用问题\n**问：为什么我安装了tensorflow-gpu但是却没用利用GPU进行训练呢？\n答：确认tensorflow-gpu已经装好，利用pip list查看tensorflow版本，然后查看任务管理器或者利用nvidia命令看看是否使用了gpu进行训练，任务管理器的话要看显存使用情况。**\n\n**问：up主，我好像没有在用gpu进行训练啊，怎么看是不是用了GPU进行训练？\n答：查看是否使用GPU进行训练一般使用NVIDIA在命令行的查看命令。在windows电脑中打开cmd然后利用nvidia-smi指令查看GPU利用情况**\n![在这里插入图片描述](https://img-blog.csdnimg.cn/f88ef794c9a341918f000eb2b1c67af6.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBAQnViYmxpaWlpbmc=,size_20,color_FFFFFF,t_70,g_se,x_16)\n**如果要一定看任务管理器的话，请看性能部分GPU的显存是否利用，或者查看任务管理器的Cuda，而非Copy。**\n![在这里插入图片描述](https://img-blog.csdnimg.cn/20201013234241524.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70#pic_center)\n\n### e、DLL load failed: 找不到指定的模块\n**问：出现如下错误**\n```python\nTraceback (most recent call last):\n  File \"C:\\Users\\focus\\Anaconda3\\ana\\envs\\tensorflow-gpu\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow.py\", line 58, in <module>\n from tensorflow.python.pywrap_tensorflow_internal import *\nFile \"C:\\Users\\focus\\Anaconda3\\ana\\envs\\tensorflow-gpu\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 28, in <module>\npywrap_tensorflow_internal = swig_import_helper()\n  File 
\"C:\\Users\\focus\\Anaconda3\\ana\\envs\\tensorflow-gpu\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 24, in swig_import_helper\n    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)\nFile \"C:\\Users\\focus\\Anaconda3\\ana\\envs\\tensorflow-gpu\\lib\\imp.py\", line 243, in load_modulereturn load_dynamic(name, filename, file)\nFile \"C:\\Users\\focus\\Anaconda3\\ana\\envs\\tensorflow-gpu\\lib\\imp.py\", line 343, in load_dynamic\n    return _load(spec)\nImportError: DLL load failed: 找不到指定的模块。\n```\n**答：如果没重启过就重启一下，否则重新按照步骤安装，还无法解决则把你的GPU、CUDA、CUDNN、TF版本以及PYTORCH版本私聊告诉我。**\n\n### f、no module问题（no module name utils.utils、no module named 'matplotlib' ）\n**问：为什么提示说no module name utils.utils（no module name nets.yolo、no module name nets.ssd等一系列问题）啊？\n答：utils并不需要用pip装，它就在我上传的仓库的根目录，出现这个问题的原因是根目录不对，查查相对目录和根目录的概念。查了基本上就明白了。**\n\n**问：为什么提示说no module name matplotlib（no module name PIL，no module name cv2等等）？\n答：这个库没安装打开命令行安装就好。pip install matplotlib**\n\n**问：为什么我已经用pip装了opencv（pillow、matplotlib等），还是提示no module name cv2？\n答：没有激活环境装，要激活对应的conda环境进行安装才可以正常使用**\n\n**问：为什么提示说No module named 'torch' ？\n答：其实我也真的很想知道为什么会有这个问题……这个pytorch没装是什么情况？一般就俩情况，一个是真的没装，还有一个是装到其它环境了，当前激活的环境不是自己装的环境。**\n\n**问：为什么提示说No module named 'tensorflow' ？\n答：同上。**\n\n### g、cuda安装失败问题\n一般cuda安装前需要安装Visual Studio，装个2017版本即可。\n\n### h、Ubuntu系统问题\n**所有代码在Ubuntu下可以使用，我两个系统都试过。**\n\n### i、VSCODE提示错误的问题\n**问：为什么在VSCODE里面提示一大堆的错误啊？\n答：我也提示一大堆的错误，但是不影响，是VSCODE的问题，如果不想看错误的话就装Pycharm。\n最好将设置里面的Python:Language Server，调整为Pylance。**\n\n### j、使用cpu进行训练与预测的问题\n**对于keras和tf2的代码而言，如果想用cpu进行训练和预测，直接装cpu版本的tensorflow就可以了。**\n\n**对于pytorch的代码而言，如果想用cpu进行训练和预测，需要将cuda=True修改成cuda=False。**\n\n### k、tqdm没有pos参数问题\n**问：运行代码提示'tqdm' object has no attribute 'pos'。\n答：重装tqdm，换个版本就可以了。**\n\n### l、提示decode(“utf-8”)的问题\n**由于h5py库的更新，安装过程中会自动安装h5py=3.0.0以上的版本，会导致decode(\"utf-8\")的错误！\n各位一定要在安装完tensorflow后利用命令装h5py=2.10.0！**\n```\npip install h5py==2.10.0\n```\n\n### 
m、提示TypeError: __array__() takes 1 positional argument but 2 were given错误\n可以修改pillow版本解决。\n```\npip install pillow==8.2.0\n```\n### n、如何查看当前cuda和cudnn\n**window下cuda版本查看方式如下：\n1、打开cmd窗口。\n2、输入nvcc -V。\n3、Cuda compilation tools, release XXXXXXXX中的XXXXXXXX即cuda版本。**\n![在这里插入图片描述](https://img-blog.csdnimg.cn/0389ea35107a408a80ab5cb6590d5a74.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBAQnViYmxpaWlpbmc=,size_20,color_FFFFFF,t_70,g_se,x_16)\nwindow下cudnn版本查看方式如下：\n1、进入cuda安装目录，进入incude文件夹。\n2、找到cudnn.h文件。\n3、右键文本打开，下拉，看到#define处可获得cudnn版本。\n```python\n#define CUDNN_MAJOR 7\n#define CUDNN_MINOR 4\n#define CUDNN_PATCHLEVEL 1\n```\n代表cudnn为7.4.1。\n![在这里插入图片描述](https://img-blog.csdnimg.cn/7a86b68b17c84feaa6fa95780d4ae4b4.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBAQnViYmxpaWlpbmc=,size_20,color_FFFFFF,t_70,g_se,x_16)\n![在这里插入图片描述](https://img-blog.csdnimg.cn/81bb7c3e13cc492292530e4b69df86a9.png?x-oss-process=image/watermark,type_d3F5LXplbmhlaQ,shadow_50,text_Q1NETiBAQnViYmxpaWlpbmc=,size_20,color_FFFFFF,t_70,g_se,x_16)\n\n### o、为什么按照你的环境配置后还是不能使用\n**问：up主，为什么我按照你的环境配置后还是不能使用？\n答：请把你的GPU、CUDA、CUDNN、TF版本以及PYTORCH版本B站私聊告诉我。**\n\n### p、其它问题\n**问：为什么提示TypeError: cat() got an unexpected keyword argument 'axis'，Traceback (most recent call last)，AttributeError: 'Tensor' object has no attribute 'bool'？\n答：这是版本问题，建议使用torch1.2以上版本**\n\n**其它有很多稀奇古怪的问题，很多是版本问题，建议按照我的视频教程安装Keras和tensorflow。比如装的是tensorflow2，就不用问我说为什么我没法运行Keras-yolo啥的。那是必然不行的。**\n\n## 3、目标检测库问题汇总（人脸检测和分类库也可参考）\n### a、shape不匹配问题。\n#### 1）、训练时shape不匹配问题。\n**问：up主，为什么运行train.py会提示shape不匹配啊？\n答：在keras环境中，因为你训练的种类和原始的种类不同，网络结构会变化，所以最尾部的shape会有少量不匹配。**\n\n#### 2）、预测时shape不匹配问题。\n**问：为什么我运行predict.py会提示我说shape不匹配呀。**\n##### i、copying a param with shape torch.Size([75, 704, 1, 1]) from checkpoint\n在Pytorch里面是这样的：\n![在这里插入图片描述](https://img-blog.csdnimg.cn/20200722171631901.png)\n##### ii、Shapes are [1,1,1024,75] and [255,1024,1,1]. 
for 'Assign_360' (op: 'Assign') with input shapes: [1,1,1024,75], [255,1024,1,1].\n在Keras里面是这样的：\n![在这里插入图片描述](https://img-blog.csdnimg.cn/20200722171523380.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70)\n**答：原因主要有仨：\n1、训练的classes_path没改，就开始训练了。\n2、训练的model_path没改。\n3、训练的model_path和classes_path不对应。\n请检查清楚了！确定自己所用的model_path和classes_path是对应的！训练的时候用到的num_classes或者classes_path也需要检查！**\n\n### b、显存不足问题（OOM、RuntimeError: CUDA out of memory）。\n**问：为什么我运行train.py下面的命令行闪的贼快，还提示OOM啥的？ \n答：这是在keras中出现的，爆显存了，可以改小batch_size，SSD的显存占用率是最小的，建议用SSD；\n2G显存：SSD、YOLOV4-TINY\n4G显存：YOLOV3\n6G显存：YOLOV4、Retinanet、M2det、Efficientdet、Faster RCNN等\n8G+显存：随便选吧。**\n**需要注意的是，受到BatchNorm2d影响，batch_size不可为1，至少为2。**\n\n**问：为什么提示 RuntimeError: CUDA out of memory. Tried to allocate 52.00 MiB (GPU 0; 15.90 GiB total capacity; 14.85 GiB already allocated; 51.88 MiB free; 15.07 GiB reserved in total by PyTorch)？ \n答：这是pytorch中出现的，爆显存了，同上。**\n\n**问：为什么我显存都没利用，就直接爆显存了？ \n答：都爆显存了，自然就不利用了，模型没有开始训练。**\n\n### c、为什么要进行冻结训练与解冻训练，不进行行吗？\n**问：为什么要冻结训练和解冻训练呀？\n答：可以不进行，本质上是为了保证性能不足的同学的训练，如果电脑性能完全不够，可以将Freeze_Epoch和UnFreeze_Epoch设置成一样，只进行冻结训练。**\n\n**同时这也是迁移学习的思想，因为神经网络主干特征提取部分所提取到的特征是通用的，我们冻结起来训练可以加快训练效率，也可以防止权值被破坏。**\n在冻结阶段，模型的主干被冻结了，特征提取网络不发生改变。占用的显存较小，仅对网络进行微调。\n在解冻阶段，模型的主干不被冻结了，特征提取网络会发生改变。占用的显存较大，网络所有的参数都会发生改变。\n\n### d、我的LOSS好大啊，有问题吗？（我的LOSS好小啊，有问题吗？）\n**问：为什么我的网络不收敛啊，LOSS是XXXX。\n答：不同网络的LOSS不同，LOSS只是一个参考指标，用于查看网络是否收敛，而非评价网络好坏，我的yolo代码都没有归一化，所以LOSS值看起来比较高，LOSS的值不重要，重要的是是否在变小，预测是否有效果。**\n\n### 
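d2、What freeze/unfreeze training does in code (supplementary)
The freeze and unfreeze phases described in section c amount to toggling requires_grad on the backbone parameters. A minimal PyTorch sketch with a toy two-part model; the backbone/head names are illustrative, not the repo's actual classes:

```python
import torch.nn as nn

# Toy stand-in for a detector: a feature-extracting backbone and a head.
model = nn.Sequential()
model.add_module('backbone', nn.Conv2d(3, 8, 3))
model.add_module('head', nn.Conv2d(8, 5, 1))

def set_backbone_frozen(net, frozen):
    # Freeze phase: backbone weights receive no gradients, so less memory
    # is used and only the head is fine-tuned; unfreeze reverses this.
    for param in net.backbone.parameters():
        param.requires_grad = not frozen

set_backbone_frozen(model, True)    # during the Freeze_Epoch phase
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)                    # only the head parameters remain

set_backbone_frozen(model, False)   # during the UnFreeze_Epoch phase
```

When the optimizer is rebuilt between phases, it should only be given parameters with requires_grad=True.

### 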
e、为什么我训练出来的模型没有预测结果？\n**问：为什么我的训练效果不好？预测了没有框（框不准）。\n答：**\n考虑几个问题：\n1、目标信息问题，查看2007_train.txt文件是否有目标信息，没有的话请修改voc_annotation.py。\n2、数据集问题，小于500的自行考虑增加数据集，同时测试不同的模型，确认数据集是好的。\n3、是否解冻训练，如果数据集分布与常规画面差距过大需要进一步解冻训练，调整主干，加强特征提取能力。\n4、网络问题，比如SSD不适合小目标，因为先验框固定了。\n5、训练时长问题，有些同学只训练了几代表示没有效果，按默认参数训练完。\n6、确认自己是否按照步骤去做了，例如voc_annotation.py里面的classes是否修改了等。\n7、不同网络的LOSS不同，LOSS只是一个参考指标，用于查看网络是否收敛，而非评价网络好坏，LOSS的值不重要，重要的是是否收敛。\n8、是否修改了网络的主干，如果修改了没有预训练权重，网络不容易收敛，自然效果不好。\n\n### f、为什么我计算出来的map是0？\n**问：为什么我的训练效果不好？没有map？\n答：**\n首先尝试利用predict.py预测一下，如果有效果的话应该是get_map.py里面的classes_path设置错误。如果没有预测结果的话，解决方法同e问题，对下面几点进行检查：\n1、目标信息问题，查看2007_train.txt文件是否有目标信息，没有的话请修改voc_annotation.py。\n2、数据集问题，小于500的自行考虑增加数据集，同时测试不同的模型，确认数据集是好的。\n3、是否解冻训练，如果数据集分布与常规画面差距过大需要进一步解冻训练，调整主干，加强特征提取能力。\n4、网络问题，比如SSD不适合小目标，因为先验框固定了。\n5、训练时长问题，有些同学只训练了几代表示没有效果，按默认参数训练完。\n6、确认自己是否按照步骤去做了，例如voc_annotation.py里面的classes是否修改了等。\n7、不同网络的LOSS不同，LOSS只是一个参考指标，用于查看网络是否收敛，而非评价网络好坏，LOSS的值不重要，重要的是是否收敛。\n8、是否修改了网络的主干，如果修改了没有预训练权重，网络不容易收敛，自然效果不好。\n\n### g、gbk编码错误（'gbk' codec can't decode byte）。\n**问：我怎么出现了gbk什么的编码错误啊：**\n```python\nUnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 446: illegal multibyte sequence\n```\n**答：标签和路径不要使用中文，如果一定要使用中文，请注意处理的时候编码的问题，将打开文件的encoding方式改为utf-8。**\n\n### h、我的图片是xxx*xxx的分辨率的，可以用吗？\n**问：我的图片是xxx*xxx的分辨率的，可以用吗！**\n**答：可以用，代码里面会自动进行resize与数据增强。**\n\n### i、我想进行数据增强！怎么增强？\n**问：我想要进行数据增强！怎么做呢？**\n**答：训练时代码里面会自动进行数据增强，无需额外处理。**\n\n### j、多GPU训练。\n**问：怎么进行多GPU训练？\n答：pytorch的大多数代码可以直接使用gpu训练，keras的话直接百度就好了，实现并不复杂，我没有多卡没法详细测试，还需要各位同学自己努力了。**\n\n### k、能不能训练灰度图？\n**问：能不能训练灰度图（预测灰度图）啊？\n答：我的大多数库会将灰度图转化成RGB进行训练和预测，如果遇到代码不能训练或者预测灰度图的情况，可以尝试一下在get_random_data里面将Image.open后的结果转换成RGB，预测的时候也这样试试。（仅供参考）**\n\n### l、断点续练问题。\n**问：我已经训练过几个世代了，能不能从这个基础上继续开始训练\n答：可以，你在训练前，和载入预训练权重一样载入训练过的权重就行了。一般训练好的权重会保存在logs文件夹里面，将model_path修改成你要开始的权值的路径即可。**\n\n### m、我要训练其它的数据集，预训练权重能不能用？\n**问：如果我要训练其它的数据集，预训练权重要怎么办啊？**\n**答：数据的预训练权重对不同数据集是通用的，因为特征是通用的，预训练权重对于99%的情况都必须要用，不用的话权值太过随机，特征提取效果不明显，网络训练的结果也不会好。**\n\n### 
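m2、Converting grayscale images to RGB (supplementary)
Section k above suggests converting the result of Image.open to RGB. A minimal sketch of that conversion with PIL; the in-memory test image is fabricated purely for illustration:

```python
import numpy as np
from PIL import Image

# A single-channel (mode 'L') grayscale image built in memory.
gray = Image.fromarray(np.zeros((32, 32), dtype=np.uint8), mode='L')

# Convert to 3-channel RGB before feeding the network, as the repos expect.
rgb = gray.convert('RGB')
print(rgb.mode)             # RGB
print(np.array(rgb).shape)  # (32, 32, 3)
```

### 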
n、网络如何从0开始训练？\n**问：我要怎么不使用预训练权重啊？\n答：看一看注释，大多数代码是model_path = ''，Freeze_Train = False**，如果设置model_path无用，**那么把载入预训练权重的代码注释了就行。**\n\n### o、为什么从0开始训练效果这么差（修改了网络主干，效果不好怎么办）？\n**问：为什么我不使用预训练权重效果这么差啊？\n答：因为随机初始化的权值不好，提取的特征不好，也就导致了模型训练的效果不好，voc07+12、coco+voc07+12效果都不一样，预训练权重还是非常重要的。**\n\n**问：up，我修改了网络，预训练权重还能用吗？\n答：修改了主干的话，如果不是用的现有的网络，基本上预训练权重是不能用的，要么就自己判断权值里卷积核的shape然后自己匹配，要么只能自己预训练去了；修改了后半部分的话，前半部分的主干部分的预训练权重还是可以用的，如果是pytorch代码的话，需要自己修改一下载入权值的方式，判断shape后载入，如果是keras代码，直接by_name=True,skip_mismatch=True即可。**\n权值匹配的方式可以参考如下：\n```python\n# 加快模型训练的效率\nprint('Loading weights into state dict...')\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel_dict = model.state_dict()\npretrained_dict = torch.load(model_path, map_location=device)\na = {}\nfor k, v in pretrained_dict.items():\n    try:\n        if np.shape(model_dict[k]) == np.shape(v):\n            a[k] = v\n    except KeyError:\n        pass\nmodel_dict.update(a)\nmodel.load_state_dict(model_dict)\nprint('Finished!')\n```\n\n**问：为什么从0开始训练效果这么差（我修改了网络主干，效果不好怎么办）？\n答：一般来讲，网络从0开始的训练效果会很差，因为权值太过随机，特征提取效果不明显，因此非常、非常、非常不建议大家从0开始训练！如果一定要从0开始，可以了解imagenet数据集，首先训练分类模型，获得网络的主干部分权值，分类模型的 主干部分 和该模型通用，基于此进行训练。\n网络修改了主干之后也是同样的问题，随机的权值效果很差。**\n\n**问：怎么在模型上从0开始训练？\n答：在算力不足与调参能力不足的情况下从0开始训练毫无意义。模型特征提取能力在随机初始化参数的情况下非常差。没有好的参数调节能力和算力，无法使得网络正常收敛。**\n如果一定要从0开始，那么训练的时候请注意几点：\n - 不载入预训练权重。 \n - 不要进行冻结训练，注释冻结模型的代码。\n\n### p、你的权值都是哪里来的？\n**问：如果网络不能从0开始训练的话你的权值哪里来的？\n答：有些权值是官方转换过来的，有些权值是自己训练出来的，我用到的主干的imagenet的权值都是官方的。**\n\n### q、视频检测与摄像头检测\n**问：怎么用摄像头检测呀？\n答：predict.py修改参数可以进行摄像头检测，也有视频详细解释了摄像头检测的思路。**\n\n**问：怎么用视频检测呀？\n答：同上**\n\n### r、如何保存检测出的图片\n**问：检测完的图片怎么保存？\n答：一般目标检测用的是Image，所以查询一下PIL库的Image如何进行保存。详细看看predict.py文件的注释。**\n\n**问：怎么用视频保存呀？\n答：详细看看predict.py文件的注释。**\n\n### 
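r2、Saving a detection result with PIL (supplementary)
Section r's answer boils down to PIL's Image.save. A minimal sketch; here r_image merely stands in for the image returned by detect_image, and the output path is arbitrary:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Stand-in for the PIL image that detect_image returns.
r_image = Image.fromarray(np.zeros((64, 64, 3), dtype=np.uint8))

# The format (PNG here) is inferred from the file extension.
out_path = os.path.join(tempfile.gettempdir(), 'result.png')
r_image.save(out_path)
print(os.path.exists(out_path))  # True
```

### 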
s、遍历问题\n**问：如何对一个文件夹的图片进行遍历？\n答：一般使用os.listdir先找出文件夹里面的所有图片，然后根据predict.py文件里面的执行思路检测图片就行了，详细看看predict.py文件的注释。**\n\n**问：如何对一个文件夹的图片进行遍历？并且保存。\n答：遍历的话一般使用os.listdir先找出文件夹里面的所有图片，然后根据predict.py文件里面的执行思路检测图片就行了。保存的话一般目标检测用的是Image，所以查询一下PIL库的Image如何进行保存。如果有些库用的是cv2，那就是查一下cv2怎么保存图片。详细看看predict.py文件的注释。**\n\n### t、路径问题（No such file or directory、StopIteration: [Errno 13] Permission denied: 'XXXXXX'）\n**问：我怎么出现了这样的错误呀：**\n```python\nFileNotFoundError: 【Errno 2】 No such file or directory\nStopIteration: [Errno 13] Permission denied: 'D:\\\\Study\\\\Collection\\\\Dataset\\\\VOC07+12+test\\\\VOCdevkit/VOC2007'\n……………………………………\n……………………………………\n```\n**答：去检查一下文件夹路径，查看是否有对应文件；并且检查一下2007_train.txt，其中文件路径是否有错。**\n关于路径有几个重要的点：\n**文件夹名称中一定不要有空格。\n注意相对路径和绝对路径。\n多百度路径相关的知识。**\n\n**所有的路径问题基本上都是根目录问题，好好查一下相对目录的概念！**\n### u、和原版比较问题，你怎么和原版不一样啊？\n**问：原版的代码是XXX，为什么你的代码是XXX？\n答：是啊……这要不怎么说我不是原版呢……**\n\n**问：你这个代码和原版比怎么样，可以达到原版的效果么？\n答：基本上可以达到，我都用voc数据测过，我没有好显卡，没有能力在coco上测试与训练。**\n\n**问：你有没有实现yolov4所有的tricks，和原版差距多少？\n答：并没有实现全部的改进部分，由于YOLOV4使用的改进实在太多了，很难完全实现与列出来，这里只列出来了一些我比较感兴趣，而且非常有效的改进。论文中提到的SAM（注意力机制模块），作者自己的源码也没有使用。还有其它很多的tricks，不是所有的tricks都有提升，我也没法实现全部的tricks。至于和原版的比较，我没有能力训练coco数据集，根据使用过的同学反应差距不大。**\n\n### v、我的检测速度是xxx正常吗？我的检测速度还能增快吗？\n**问：你这个FPS可以到达多少，可以到 XX FPS么？\n答：FPS和机子的配置有关，配置高就快，配置低就慢。**\n\n**问：我的检测速度是xxx正常吗？我的检测速度还能增快吗？\n答：看配置，配置好速度就快，如果想要配置不变的情况下加快速度，就要修改网络了。**\n\n**问：为什么我用服务器去测试yolov4（or others）的FPS只有十几？\n答：检查是否正确安装了tensorflow-gpu或者pytorch的gpu版本，如果已经正确安装，可以去利用time.time()的方法查看detect_image里面，哪一段代码耗时更长（不仅只有网络耗时长，其它处理部分也会耗时，如绘图等）。**\n\n**问：为什么论文中说速度可以达到XX，但是这里却没有？\n答：检查是否正确安装了tensorflow-gpu或者pytorch的gpu版本，如果已经正确安装，可以去利用time.time()的方法查看detect_image里面，哪一段代码耗时更长（不仅只有网络耗时长，其它处理部分也会耗时，如绘图等）。有些论文还会使用多batch进行预测，我并没有去实现这个部分。**\n\n### w、预测图片不显示问题\n**问：为什么你的代码在预测完成后不显示图片？只是在命令行告诉我有什么目标。\n答：给系统安装一个图片查看器就行了。**\n\n### 
x、算法评价问题（目标检测的map、PR曲线、Recall、Precision等）\n**问：怎么计算map？\n答：看map视频，都是一个流程。**\n\n**问：计算map的时候，get_map.py里面有一个MINOVERLAP是什么用的，是iou吗？\n答：是iou，它的作用是判断预测框和真实框的重合程度，如果重合程度大于MINOVERLAP，则预测正确。**\n\n**问：为什么get_map.py里面的self.confidence（self.score）要设置的那么小？\n答：看一下map的视频的原理部分，要知道所有的结果然后再进行pr曲线的绘制。**\n\n**问：能不能说说怎么绘制PR曲线啥的呀。\n答：可以看mAP视频，结果里面有PR曲线。**\n\n**问：怎么计算Recall、Precision指标。\n答：这俩指标应该是相对于特定的置信度的，计算map的时候也会获得。**\n\n### y、coco数据集训练问题\n**问：目标检测怎么训练COCO数据集啊？\n答：coco数据训练所需要的txt文件可以参考qqwweee的yolo3的库，格式都是一样的。**\n\n### z、UP，怎么优化模型啊？我想提升效果\n**问：up，怎么修改模型啊，我想发个小论文！\n答：建议看看yolov3和yolov4的区别，然后看看yolov4的论文，作为一个大型调参现场非常有参考意义，使用了很多tricks。我能给的建议就是多看一些经典模型，然后拆解里面的亮点结构并使用。**\n\n### aa、UP，有Focal LOSS的代码吗？怎么改啊？\n**问：up，YOLO系列使用Focal LOSS的代码你有吗，有提升吗？\n答：很多人试过，提升效果也不大（甚至变的更Low），它自己有自己的正负样本的平衡方式**。改代码的事情，还是自己好好看看代码吧。\n\n### ab、部署问题（ONNX、TensorRT等）\n我没有具体部署到手机等设备上过，所以很多部署问题我并不了解……\n\n## 4、语义分割库问题汇总\n### a、shape不匹配问题\n#### 1）、训练时shape不匹配问题\n**问：up主，为什么运行train.py会提示shape不匹配啊？\n答：在keras环境中，因为你训练的种类和原始的种类不同，网络结构会变化，所以最尾部的shape会有少量不匹配。**\n\n#### 2）、预测时shape不匹配问题\n**问：为什么我运行predict.py会提示我说shape不匹配呀。**\n##### i、copying a param with shape torch.Size([75, 704, 1, 1]) from checkpoint\n在Pytorch里面是这样的：\n![在这里插入图片描述](https://img-blog.csdnimg.cn/20200722171631901.png)\n##### ii、Shapes are [1,1,1024,75] and [255,1024,1,1]. for 'Assign_360' (op: 'Assign') with input shapes: [1,1,1024,75], [255,1024,1,1].\n在Keras里面是这样的：\n![在这里插入图片描述](https://img-blog.csdnimg.cn/20200722171523380.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70)\n**答：原因主要有仨：\n1、train.py里面的num_classes没改。\n2、预测时num_classes没改。\n3、预测时model_path没改。\n请检查清楚！训练和预测的时候用到的num_classes都需要检查！**\n\n### b、显存不足问题（OOM、RuntimeError: CUDA out of memory）。\n**问：为什么我运行train.py下面的命令行闪的贼快，还提示OOM啥的？ \n答：这是在keras中出现的，爆显存了，可以改小batch_size。**\n\n**需要注意的是，受到BatchNorm2d影响，batch_size不可为1，至少为2。**\n\n**问：为什么提示 RuntimeError: CUDA out of memory. 
Tried to allocate 52.00 MiB (GPU 0; 15.90 GiB total capacity; 14.85 GiB already allocated; 51.88 MiB free; 15.07 GiB reserved in total by PyTorch)？ \n答：这是pytorch中出现的，爆显存了，同上。**\n\n**问：为什么我显存都没利用，就直接爆显存了？ \n答：都爆显存了，自然就不利用了，模型没有开始训练。**\n\n### c、为什么要进行冻结训练与解冻训练，不进行行吗？\n**问：为什么要冻结训练和解冻训练呀？\n答：可以不进行，本质上是为了保证性能不足的同学的训练，如果电脑性能完全不够，可以将Freeze_Epoch和UnFreeze_Epoch设置成一样，只进行冻结训练。**\n\n**同时这也是迁移学习的思想，因为神经网络主干特征提取部分所提取到的特征是通用的，我们冻结起来训练可以加快训练效率，也可以防止权值被破坏。**\n在冻结阶段，模型的主干被冻结了，特征提取网络不发生改变。占用的显存较小，仅对网络进行微调。\n在解冻阶段，模型的主干不被冻结了，特征提取网络会发生改变。占用的显存较大，网络所有的参数都会发生改变。\n\n### d、我的LOSS好大啊，有问题吗？（我的LOSS好小啊，有问题吗？）\n**问：为什么我的网络不收敛啊，LOSS是XXXX。\n答：不同网络的LOSS不同，LOSS只是一个参考指标，用于查看网络是否收敛，而非评价网络好坏，我的yolo代码都没有归一化，所以LOSS值看起来比较高，LOSS的值不重要，重要的是是否在变小，预测是否有效果。**\n\n### e、为什么我训练出来的模型没有预测结果？\n**问：为什么我的训练效果不好？预测了没有框（框不准）。\n答：**\n**考虑几个问题：\n1、数据集问题，这是最重要的问题。小于500的自行考虑增加数据集；一定要检查数据集的标签，视频中详细解析了VOC数据集的格式，但并不是有输入图片有输出标签即可，还需要确认标签的每一个像素值是否为它对应的种类。很多同学的标签格式不对，最常见的错误格式就是标签的背景为黑，目标为白，此时目标的像素点值为255，无法正常训练，目标需要为1才行。\n2、是否解冻训练，如果数据集分布与常规画面差距过大需要进一步解冻训练，调整主干，加强特征提取能力。\n3、网络问题，可以尝试不同的网络。\n4、训练时长问题，有些同学只训练了几代表示没有效果，按默认参数训练完。\n5、确认自己是否按照步骤去做了。\n6、不同网络的LOSS不同，LOSS只是一个参考指标，用于查看网络是否收敛，而非评价网络好坏，LOSS的值不重要，重要的是是否收敛。**\n\n**问：为什么我的训练效果不好？对小目标预测不准确。\n答：对于deeplab和pspnet而言，可以修改一下downsample_factor，当downsample_factor为16的时候下采样倍数过多，效果不太好，可以修改为8。**\n\n### f、为什么我计算出来的miou是0？\n**问：为什么我的训练效果不好？计算出来的miou是0？。**\n答：\n与e类似，**考虑几个问题：\n1、数据集问题，这是最重要的问题。小于500的自行考虑增加数据集；一定要检查数据集的标签，视频中详细解析了VOC数据集的格式，但并不是有输入图片有输出标签即可，还需要确认标签的每一个像素值是否为它对应的种类。很多同学的标签格式不对，最常见的错误格式就是标签的背景为黑，目标为白，此时目标的像素点值为255，无法正常训练，目标需要为1才行。\n2、是否解冻训练，如果数据集分布与常规画面差距过大需要进一步解冻训练，调整主干，加强特征提取能力。\n3、网络问题，可以尝试不同的网络。\n4、训练时长问题，有些同学只训练了几代表示没有效果，按默认参数训练完。\n5、确认自己是否按照步骤去做了。\n6、不同网络的LOSS不同，LOSS只是一个参考指标，用于查看网络是否收敛，而非评价网络好坏，LOSS的值不重要，重要的是是否收敛。**\n\n### g、gbk编码错误（'gbk' codec can't decode byte）。\n**问：我怎么出现了gbk什么的编码错误啊：**\n```python\nUnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 446: illegal multibyte 
sequence\n```\n**答：标签和路径不要使用中文，如果一定要使用中文，请注意处理的时候编码的问题，将打开文件的encoding方式改为utf-8。**\n\n### h、我的图片是xxx*xxx的分辨率的，可以用吗？\n**问：我的图片是xxx*xxx的分辨率的，可以用吗！**\n**答：可以用，代码里面会自动进行resize与数据增强。**\n\n### i、我想进行数据增强！怎么增强？\n**问：我想要进行数据增强！怎么做呢？**\n**答：训练时代码里面会自动进行数据增强，无需额外处理。**\n\n### j、多GPU训练。\n**问：怎么进行多GPU训练？\n答：pytorch的大多数代码可以直接使用gpu训练，keras的话直接百度就好了，实现并不复杂，我没有多卡没法详细测试，还需要各位同学自己努力了。**\n\n### k、能不能训练灰度图？\n**问：能不能训练灰度图（预测灰度图）啊？\n答：我的大多数库会将灰度图转化成RGB进行训练和预测，如果遇到代码不能训练或者预测灰度图的情况，可以尝试一下在get_random_data里面将Image.open后的结果转换成RGB，预测的时候也这样试试。（仅供参考）**\n\n### l、断点续练问题。\n**问：我已经训练过几个世代了，能不能从这个基础上继续开始训练\n答：可以，你在训练前，和载入预训练权重一样载入训练过的权重就行了。一般训练好的权重会保存在logs文件夹里面，将model_path修改成你要开始的权值的路径即可。**\n\n### m、我要训练其它的数据集，预训练权重能不能用？\n**问：如果我要训练其它的数据集，预训练权重要怎么办啊？**\n**答：数据的预训练权重对不同数据集是通用的，因为特征是通用的，预训练权重对于99%的情况都必须要用，不用的话权值太过随机，特征提取效果不明显，网络训练的结果也不会好。**\n\n### n、网络如何从0开始训练？\n**问：我要怎么不使用预训练权重啊？\n答：看一看注释，大多数代码是model_path = ''，Freeze_Train = False**，如果设置model_path无用，**那么把载入预训练权重的代码注释了就行。**\n\n### o、为什么从0开始训练效果这么差（修改了网络主干，效果不好怎么办）？\n**问：为什么我不使用预训练权重效果这么差啊？\n答：因为随机初始化的权值不好，提取的特征不好，也就导致了模型训练的效果不好，预训练权重还是非常重要的。**\n\n**问：up，我修改了网络，预训练权重还能用吗？\n答：修改了主干的话，如果不是用的现有的网络，基本上预训练权重是不能用的，要么就自己判断权值里卷积核的shape然后自己匹配，要么只能自己预训练去了；修改了后半部分的话，前半部分的主干部分的预训练权重还是可以用的，如果是pytorch代码的话，需要自己修改一下载入权值的方式，判断shape后载入，如果是keras代码，直接by_name=True,skip_mismatch=True即可。**\n权值匹配的方式可以参考如下：\n```python\n# 加快模型训练的效率\nprint('Loading weights into state dict...')\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel_dict = model.state_dict()\npretrained_dict = torch.load(model_path, map_location=device)\na = {}\nfor k, v in pretrained_dict.items():\n    try:\n        if np.shape(model_dict[k]) == np.shape(v):\n            a[k] = v\n    except KeyError:\n        pass\nmodel_dict.update(a)\nmodel.load_state_dict(model_dict)\nprint('Finished!')\n```\n\n**问：为什么从0开始训练效果这么差（我修改了网络主干，效果不好怎么办）？\n答：一般来讲，网络从0开始的训练效果会很差，因为权值太过随机，特征提取效果不明显，因此非常、非常、非常不建议大家从0开始训练！如果一定要从0开始，可以了解imagenet数据集，首先训练分类模型，获得网络的主干部分权值，分类模型的 主干部分 
和该模型通用，基于此进行训练。\n网络修改了主干之后也是同样的问题，随机的权值效果很差。**\n\n**问：怎么在模型上从0开始训练？\n答：在算力不足与调参能力不足的情况下从0开始训练毫无意义。模型特征提取能力在随机初始化参数的情况下非常差。没有好的参数调节能力和算力，无法使得网络正常收敛。**\n如果一定要从0开始，那么训练的时候请注意几点：\n - 不载入预训练权重。 \n - 不要进行冻结训练，注释冻结模型的代码。\n\n**问：为什么我不使用预训练权重效果这么差啊？\n答：因为随机初始化的权值不好，提取的特征不好，也就导致了模型训练的效果不好，voc07+12、coco+voc07+12效果都不一样，预训练权重还是非常重要的。**\n\n### p、你的权值都是哪里来的？\n**问：如果网络不能从0开始训练的话你的权值哪里来的？\n答：有些权值是官方转换过来的，有些权值是自己训练出来的，我用到的主干的imagenet的权值都是官方的。**\n\n\n### q、视频检测与摄像头检测\n**问：怎么用摄像头检测呀？\n答：predict.py修改参数可以进行摄像头检测，也有视频详细解释了摄像头检测的思路。**\n\n**问：怎么用视频检测呀？\n答：同上**\n\n### r、如何保存检测出的图片\n**问：检测完的图片怎么保存？\n答：一般目标检测用的是Image，所以查询一下PIL库的Image如何进行保存。详细看看predict.py文件的注释。**\n\n**问：怎么用视频保存呀？\n答：详细看看predict.py文件的注释。**\n\n### s、遍历问题\n**问：如何对一个文件夹的图片进行遍历？\n答：一般使用os.listdir先找出文件夹里面的所有图片，然后根据predict.py文件里面的执行思路检测图片就行了，详细看看predict.py文件的注释。**\n\n**问：如何对一个文件夹的图片进行遍历？并且保存。\n答：遍历的话一般使用os.listdir先找出文件夹里面的所有图片，然后根据predict.py文件里面的执行思路检测图片就行了。保存的话一般目标检测用的是Image，所以查询一下PIL库的Image如何进行保存。如果有些库用的是cv2，那就是查一下cv2怎么保存图片。详细看看predict.py文件的注释。**\n\n### t、路径问题（No such file or directory、StopIteration: [Errno 13] Permission denied: 'XXXXXX'）\n**问：我怎么出现了这样的错误呀：**\n```python\nFileNotFoundError: 【Errno 2】 No such file or directory\nStopIteration: [Errno 13] Permission denied: 'D:\\\\Study\\\\Collection\\\\Dataset\\\\VOC07+12+test\\\\VOCdevkit/VOC2007'\n……………………………………\n……………………………………\n```\n**答：去检查一下文件夹路径，查看是否有对应文件；并且检查一下2007_train.txt，其中文件路径是否有错。**\n关于路径有几个重要的点：\n**文件夹名称中一定不要有空格。\n注意相对路径和绝对路径。\n多百度路径相关的知识。**\n\n**所有的路径问题基本上都是根目录问题，好好查一下相对目录的概念！**\n### u、和原版比较问题，你怎么和原版不一样啊？\n**问：原版的代码是XXX，为什么你的代码是XXX？\n答：是啊……这要不怎么说我不是原版呢……**\n\n**问：你这个代码和原版比怎么样，可以达到原版的效果么？\n答：基本上可以达到，我都用voc数据测过，我没有好显卡，没有能力在coco上测试与训练。**\n\n### v、我的检测速度是xxx正常吗？我的检测速度还能增快吗？\n**问：你这个FPS可以到达多少，可以到 XX 
FPS么？\n答：FPS和机子的配置有关，配置高就快，配置低就慢。**\n\n**问：我的检测速度是xxx正常吗？我的检测速度还能增快吗？\n答：看配置，配置好速度就快，如果想要配置不变的情况下加快速度，就要修改网络了。**\n\n**问：为什么论文中说速度可以达到XX，但是这里却没有？\n答：检查是否正确安装了tensorflow-gpu或者pytorch的gpu版本，如果已经正确安装，可以去利用time.time()的方法查看detect_image里面，哪一段代码耗时更长（不仅只有网络耗时长，其它处理部分也会耗时，如绘图等）。有些论文还会使用多batch进行预测，我并没有去实现这个部分。**\n\n### w、预测图片不显示问题\n**问：为什么你的代码在预测完成后不显示图片？只是在命令行告诉我有什么目标。\n答：给系统安装一个图片查看器就行了。**\n\n### x、算法评价问题（miou）\n**问：怎么计算miou？\n答：参考视频里的miou测量部分。**\n\n**问：怎么计算Recall、Precision指标。\n答：现有的代码还无法获得，需要各位同学理解一下混淆矩阵的概念，然后自行计算一下。**\n\n### y、UP，怎么优化模型啊？我想提升效果\n**问：up，怎么修改模型啊，我想发个小论文！\n答：建议目标检测中的yolov4论文，作为一个大型调参现场非常有参考意义，使用了很多tricks。我能给的建议就是多看一些经典模型，然后拆解里面的亮点结构并使用。**\n\n### z、部署问题（ONNX、TensorRT等）\n我没有具体部署到手机等设备上过，所以很多部署问题我并不了解……\n\n## 5、交流群问题\n**问：up，有没有QQ群啥的呢？\n答：没有没有，我没有时间管理QQ群……**\n\n## 6、怎么学习的问题\n**问：up，你的学习路线怎么样的？我是个小白我要怎么学？\n答：这里有几点需要注意哈\n1、我不是高手，很多东西我也不会，我的学习路线也不一定适用所有人。\n2、我实验室不做深度学习，所以我很多东西都是自学，自己摸索，正确与否我也不知道。\n3、我个人觉得学习更靠自学**\n学习路线的话，我是先学习了莫烦的python教程，从tensorflow、keras、pytorch入门，入门完之后学的SSD，YOLO，然后了解了很多经典的卷积网，后面就开始学很多不同的代码了，我的学习方法就是一行一行的看，了解整个代码的执行流程，特征层的shape变化等，花了很多时间也没有什么捷径，就是要花时间吧。\n"
  }
]