[
  {
    "path": ".gitignore",
    "content": "*.pt\n*.pth\n*.txt\n*.pkl\n__pycache__\n.vscode\ndet_results"
  },
  {
    "path": "LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "README.md",
    "content": "# Update\nRecently, I have released a new YOLO project:\n\nhttps://github.com/yjh0410/PyTorch_YOLO_Tutorial\n\nIn my new YOLO project, you can enjoy: \n- a new and stronger YOLOv1\n- a new and stronger YOLOv2\n- YOLOv3\n- YOLOv4\n- YOLOv5\n- YOLOv7\n- YOLOX\n- RTCDet\n\n\n# This project\nIn this project, you can enjoy: \n- YOLOv2 with DarkNet-19\n- YOLOv2 with ResNet-50\n- YOLOv2Slim\n- YOLOv3\n- YOLOv3-Spp\n- YOLOv3-Tiny\n\n\nI just want to provide a good YOLO project for everyone who is interested in Object Detection.\n\n# Weights\nGoogle Drive: https://drive.google.com/drive/folders/1T5hHyGICbFSdu6u2_vqvxn_puotvPsbd?usp=sharing \n\nBaiDuYunDisk: https://pan.baidu.com/s/1tSylvzOVFReUAvaAxKRSwg \nPassword d266\n\nYou can download all my models from the above links.\n\n# YOLOv2\n\n## YOLOv2 with DarkNet-19\n### Tricks\nTricks in official paper:\n- [x] batch norm\n- [x] hi-res classifier\n- [x] convolutional\n- [x] anchor boxes\n- [x] new network\n- [x] dimension priors\n- [x] location prediction\n- [x] passthrough\n- [x] multi-scale\n- [x] hi-red detector\n\n## VOC2007\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> size </td><td bgcolor=white> Original (darknet) </td><td bgcolor=white> Ours (pytorch) 160peochs </td><td bgcolor=white> Ours (pytorch) 250epochs </td></tr>\n<tr><th align=\"left\" bgcolor=#f8f8f8> VOC07 test</th><td bgcolor=white> 416 </td><td bgcolor=white> 76.8 </td><td bgcolor=white> 76.0 </td><td bgcolor=white> 77.1 </td></tr>\n<tr><th align=\"left\" bgcolor=#f8f8f8> VOC07 test</th><td bgcolor=white> 544 </td><td bgcolor=white> 78.6 </td><td bgcolor=white> 77.0 </td><td bgcolor=white> 78.1 </td></tr>\n</table></tbody>\n\n## COCO\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> data </td><td bgcolor=white> AP </td><td bgcolor=white> AP50 </td><td bgcolor=white> AP75 </td><td bgcolor=white> AP_S </td><td bgcolor=white> AP_M </td><td bgcolor=white> AP_L </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Original (darknet)</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 21.6 </td><td bgcolor=white> 44.0 </td><td bgcolor=white> 19.2 </td><td bgcolor=white> 5.0 </td><td bgcolor=white> 22.4 </td><td bgcolor=white> 35.5 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Ours (pytorch)</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 26.8 </td><td bgcolor=white> 46.6 </td><td bgcolor=white> 26.8 </td><td bgcolor=white> 5.8 </td><td bgcolor=white> 27.4 </td><td bgcolor=white> 45.2 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Ours (pytorch)</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 26.6 </td><td bgcolor=white> 46.0 </td><td bgcolor=white> 26.7 </td><td bgcolor=white> 5.9 </td><td bgcolor=white> 27.8 </td><td bgcolor=white> 47.1 </td></tr>\n</table></tbody>\n\n\n## YOLOv2 with ResNet-50\n\nI replace darknet-19 with resnet-50 and get a better result on COCO-val\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> data </td><td bgcolor=white> AP </td><td bgcolor=white> AP50 </td><td bgcolor=white> AP75 </td><td bgcolor=white> AP_S </td><td bgcolor=white> AP_M </td><td bgcolor=white> AP_L </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Our YOLOv2-320</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 25.8 </td><td bgcolor=white> 44.6 </td><td bgcolor=white> 25.9 </td><td bgcolor=white> 4.6 </td><td bgcolor=white> 26.8 </td><td bgcolor=white> 47.9 </td></tr>\n\n<tr><th align=\"left\" 
bgcolor=#f8f8f8> Our YOLOv2-416</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 29.0 </td><td bgcolor=white> 48.8 </td><td bgcolor=white> 29.7 </td><td bgcolor=white> 7.4 </td><td bgcolor=white> 31.9 </td><td bgcolor=white> 48.3 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Our YOLOv2-512</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 30.4 </td><td bgcolor=white> 51.6 </td><td bgcolor=white> 30.9 </td><td bgcolor=white> 10.1 </td><td bgcolor=white> 34.9 </td><td bgcolor=white> 46.6 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Our YOLOv2-544</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 30.4 </td><td bgcolor=white> 51.9 </td><td bgcolor=white> 30.9 </td><td bgcolor=white> 11.1 </td><td bgcolor=white> 35.8 </td><td bgcolor=white> 45.5 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> Our YOLOv2-608</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 29.2 </td><td bgcolor=white> 51.6 </td><td bgcolor=white> 29.1 </td><td bgcolor=white> 13.6 </td><td bgcolor=white> 36.8 </td><td bgcolor=white> 40.5 </td></tr>\n</table></tbody>\n\n# YOLOv3\n\n## VOC2007\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> size </td><td bgcolor=white> Original (darknet) </td><td bgcolor=white> Ours (pytorch) 250epochs </td></tr>\n<tr><th align=\"left\" bgcolor=#f8f8f8> VOC07 test</th><td bgcolor=white> 416 </td><td bgcolor=white> 80.25 </td><td bgcolor=white> 81.4 </td></tr>\n</table></tbody>\n\n# COCO\n\nOfficial YOLOv3:\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> data </td><td bgcolor=white> AP </td><td bgcolor=white> AP50 </td><td bgcolor=white> AP75 </td><td bgcolor=white> AP_S </td><td bgcolor=white> AP_M </td><td bgcolor=white> AP_L </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3-320</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 28.2 </td><td bgcolor=white> 51.5 </td><td bgcolor=white> - </td><td bgcolor=white> - </td><td bgcolor=white> - </td><td bgcolor=white> - </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3-416</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 31.0 </td><td bgcolor=white> 55.3 </td><td bgcolor=white> - </td><td bgcolor=white> - </td><td bgcolor=white> - </td><td bgcolor=white> - </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3-608</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 33.0 </td><td bgcolor=white> 57.0 </td><td bgcolor=white> 34.4 </td><td bgcolor=white> 18.3 </td><td bgcolor=white> 35.4 </td><td bgcolor=white> 41.9 </td></tr>\n</table></tbody>\n\nOur YOLOv3:\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> data </td><td bgcolor=white> AP </td><td bgcolor=white> AP50 </td><td bgcolor=white> AP75 </td><td bgcolor=white> AP_S </td><td bgcolor=white> AP_M </td><td bgcolor=white> AP_L </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3-320</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 33.1 </td><td bgcolor=white> 54.1 </td><td bgcolor=white> 34.5 </td><td bgcolor=white> 12.1 </td><td bgcolor=white> 34.5 </td><td bgcolor=white> 49.6 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3-416</th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> 36.0 </td><td bgcolor=white> 57.4 </td><td bgcolor=white> 37.0 </td><td bgcolor=white> 16.3 </td><td bgcolor=white> 37.5 </td><td bgcolor=white> 51.1 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3-608</th><td bgcolor=white> COCO test-dev </td><td 
bgcolor=white> 37.6 </td><td bgcolor=white> 59.4 </td><td bgcolor=white> 39.9 </td><td bgcolor=white> 20.4 </td><td bgcolor=white> 39.9 </td><td bgcolor=white> 48.2 </td></tr>\n</table></tbody>\n\n# YOLOv3SPP\n## COCO:\n\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> data </td><td bgcolor=white> AP </td><td bgcolor=white> AP50 </td><td bgcolor=white> AP75 </td><td bgcolor=white> AP_S </td><td bgcolor=white> AP_M </td><td bgcolor=white> AP_L </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3Spp-320</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 32.78 </td><td bgcolor=white> 53.79 </td><td bgcolor=white> 33.9 </td><td bgcolor=white> 12.4 </td><td bgcolor=white> 35.5 </td><td bgcolor=white> 50.6 </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3Spp-416</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 35.66 </td><td bgcolor=white> 57.09 </td><td bgcolor=white> 37.4 </td><td bgcolor=white> 16.8 </td><td bgcolor=white> 38.1 </td><td bgcolor=white> 50.7 </td></tr>\n\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> YOLOv3Spp-608</th><td bgcolor=white> COCO eval </td><td bgcolor=white> 37.52 </td><td bgcolor=white> 59.44 </td><td bgcolor=white> 39.3 </td><td bgcolor=white> 21.5 </td><td bgcolor=white> 40.6 </td><td bgcolor=white> 49.6 </td></tr>\n\n</table></tbody>\n\n# YOLOv3Tiny\n<table><tbody>\n<tr><th align=\"left\" bgcolor=#f8f8f8> </th>     <td bgcolor=white> data </td><td bgcolor=white> AP </td><td bgcolor=white> AP50 </td><td bgcolor=white> AP75 </td><td bgcolor=white> AP_S </td><td bgcolor=white> AP_M </td><td bgcolor=white> AP_L </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> (official) YOLOv3Tiny </th><td bgcolor=white> COCO test-dev </td><td bgcolor=white> - </td><td bgcolor=white> 33.1 </td><td bgcolor=white> - </td><td bgcolor=white>- </td><td bgcolor=white> - </td><td bgcolor=white> - </td></tr>\n\n<tr><th align=\"left\" bgcolor=#f8f8f8> (Our) YOLOv3Tiny </th><td bgcolor=white> COCO val </td><td bgcolor=white> 15.9 </td><td bgcolor=white> 33.8 </td><td bgcolor=white> 12.8 </td><td bgcolor=white> 7.6 </td><td bgcolor=white> 17.7 </td><td bgcolor=white> 22.4 </td></tr>\n\n</table></tbody>\n\n\n# Installation\n- Pytorch-gpu 1.1.0/1.2.0/1.3.0\n- Tensorboard 1.14.\n- opencv-python, python3.6/3.7\n\n# Dataset\n\n## VOC Dataset\nI copy the download files from the following excellent project:\nhttps://github.com/amdegroot/ssd.pytorch\n\nI have uploaded the VOC2007 and VOC2012 to BaiDuYunDisk, so for researchers in China, you can download them from BaiDuYunDisk:\n\nLink：https://pan.baidu.com/s/1tYPGCYGyC0wjpC97H-zzMQ \n\nPassword：4la9\n\nYou will get a ```VOCdevkit.zip```, then what you need to do is just to unzip it and put it into ```data/```. After that, the whole path to VOC dataset is ```data/VOCdevkit/VOC2007``` and ```data/VOCdevkit/VOC2012```.\n\n### Download VOC2007 trainval & test\n\n```Shell\n# specify a directory for dataset to be downloaded into, else default is ~/data/\nsh data/scripts/VOC2007.sh # <directory>\n```\n\n### Download VOC2012 trainval\n```Shell\n# specify a directory for dataset to be downloaded into, else default is ~/data/\nsh data/scripts/VOC2012.sh # <directory>\n```\n\n## MSCOCO Dataset\nI copy the download files from the following excellent project:\nhttps://github.com/DeNA/PyTorch_YOLOv3\n\n### Download MSCOCO 2017 dataset\nJust run ```sh data/scripts/COCO2017.sh```. 
You will get COCO train2017, val2017, test2017.\n\n\n# Train\n## VOC\n```Shell\npython train.py -d voc --cuda -v [select a model] -hr -ms --ema\n```\n\nYou can run ```python train.py -h``` to check all optional argument.\n\n## COCO\nIf you have only one gpu:\n```Shell\npython train.py -d coco --cuda -v [select a model] -hr -ms --ema\n```\n\nIf you have multi gpus like 8, and you put 4 images on each gpu:\n```Shell\npython -m torch.distributed.launch --nproc_per_node=8 train.py -d coco --cuda -v [select a model] -hr -ms --ema \\\n                                                                        -dist \\\n                                                                        --sybn \\\n                                                                        --num_gpu 8\\\n                                                                        --batch_size 4\n```\n\n# Test\n## VOC\n```Shell\npython test.py -d voc --cuda -v [select a model] --trained_model [ Please input the path to model dir. ]\n```\n\n## COCO\n```Shell\npython test.py -d coco-val --cuda -v [select a model] --trained_model [ Please input the path to model dir. ]\n```\n\n\n# Evaluation\n## VOC\n```Shell\npython eval.py -d voc --cuda -v [select a model] --train_model [ Please input the path to model dir. ]\n```\n\n## COCO\nTo run on COCO_val:\n```Shell\npython eval.py -d coco-val --cuda -v [select a model] --train_model [ Please input the path to model dir. ]\n```\n\nTo run on COCO_test-dev(You must be sure that you have downloaded test2017):\n```Shell\npython eval.py -d coco-test --cuda -v [select a model] --train_model [ Please input the path to model dir. ]\n```\nYou will get a .json file which can be evaluated on COCO test server.\n"
  },
  {
    "path": "backbone/__init__.py",
    "content": "from .resnet import build_resnet\nfrom .darknet19 import build_darknet19\nfrom .darknet53 import build_darknet53\nfrom .darknet_tiny import build_darknet_tiny\n\n\ndef build_backbone(model_name='resnet18', pretrained=False):\n    if 'resnet' in model_name:\n        backbone = build_resnet(model_name, pretrained)\n\n    elif model_name == 'darknet19':\n        backbone = build_darknet19(pretrained)\n\n    elif model_name == 'darknet53':\n        backbone = build_darknet53(pretrained)\n\n    elif model_name == 'darknet19':\n        backbone = build_darknet_tiny(pretrained)\n                        \n    return backbone\n"
  },
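  {
    "path": "backbone/example_build_backbone.py",
    "content": "# Usage sketch, not part of the original project: this file name and code are\n# illustrative assumptions. It shows how the build_backbone() factory in\n# backbone/__init__.py is called, and that every backbone in this repo returns\n# a dict with keys 'layer1', 'layer2', 'layer3' at strides 8, 16 and 32.\nimport torch\n\nfrom backbone import build_backbone\n\n\nif __name__ == '__main__':\n    # pretrained=False keeps this sketch offline (no weight download)\n    model = build_backbone(model_name='resnet50', pretrained=False)\n    x = torch.randn(1, 3, 416, 416)\n    output = model(x)\n    for k in output.keys():\n        # for a 416 input: layer1 -> 52x52, layer2 -> 26x26, layer3 -> 13x13\n        print('{} : {}'.format(k, output[k].shape))\n"
  },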
  {
    "path": "backbone/darknet19.py",
    "content": "import torch\nimport torch.nn as nn\nimport os\n\n\nmodel_urls = {\n    \"darknet19\": \"https://github.com/yjh0410/image_classification_pytorch/releases/download/weight/darknet19.pth\",\n}\n\n\n__all__ = ['darknet19']\n\n\nclass Conv_BN_LeakyReLU(nn.Module):\n    def __init__(self, in_channels, out_channels, ksize, padding=0, stride=1, dilation=1):\n        super(Conv_BN_LeakyReLU, self).__init__()\n        self.convs = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels, ksize, padding=padding, stride=stride, dilation=dilation),\n            nn.BatchNorm2d(out_channels),\n            nn.LeakyReLU(0.1, inplace=True)\n        )\n\n    def forward(self, x):\n        return self.convs(x)\n\n\nclass DarkNet_19(nn.Module):\n    def __init__(self):        \n        super(DarkNet_19, self).__init__()\n        # backbone network : DarkNet-19\n        # output : stride = 2, c = 32\n        self.conv_1 = nn.Sequential(\n            Conv_BN_LeakyReLU(3, 32, 3, 1),\n            nn.MaxPool2d((2,2), 2),\n        )\n\n        # output : stride = 4, c = 64\n        self.conv_2 = nn.Sequential(\n            Conv_BN_LeakyReLU(32, 64, 3, 1),\n            nn.MaxPool2d((2,2), 2)\n        )\n\n        # output : stride = 8, c = 128\n        self.conv_3 = nn.Sequential(\n            Conv_BN_LeakyReLU(64, 128, 3, 1),\n            Conv_BN_LeakyReLU(128, 64, 1),\n            Conv_BN_LeakyReLU(64, 128, 3, 1),\n            nn.MaxPool2d((2,2), 2)\n        )\n\n        # output : stride = 8, c = 256\n        self.conv_4 = nn.Sequential(\n            Conv_BN_LeakyReLU(128, 256, 3, 1),\n            Conv_BN_LeakyReLU(256, 128, 1),\n            Conv_BN_LeakyReLU(128, 256, 3, 1),\n        )\n\n        # output : stride = 16, c = 512\n        self.maxpool_4 = nn.MaxPool2d((2, 2), 2)\n        self.conv_5 = nn.Sequential(\n            Conv_BN_LeakyReLU(256, 512, 3, 1),\n            Conv_BN_LeakyReLU(512, 256, 1),\n            Conv_BN_LeakyReLU(256, 512, 3, 1),\n            Conv_BN_LeakyReLU(512, 256, 1),\n            Conv_BN_LeakyReLU(256, 512, 3, 1),\n        )\n        \n        # output : stride = 32, c = 1024\n        self.maxpool_5 = nn.MaxPool2d((2, 2), 2)\n        self.conv_6 = nn.Sequential(\n            Conv_BN_LeakyReLU(512, 1024, 3, 1),\n            Conv_BN_LeakyReLU(1024, 512, 1),\n            Conv_BN_LeakyReLU(512, 1024, 3, 1),\n            Conv_BN_LeakyReLU(1024, 512, 1),\n            Conv_BN_LeakyReLU(512, 1024, 3, 1)\n        )\n\n    def forward(self, x):\n        c1 = self.conv_1(x)\n        c2 = self.conv_2(c1)\n        c3 = self.conv_3(c2)\n        c3 = self.conv_4(c3)\n        c4 = self.conv_5(self.maxpool_4(c3))\n        c5 = self.conv_6(self.maxpool_5(c4))\n\n        output = {\n            'layer1': c3,\n            'layer2': c4,\n            'layer3': c5\n        }\n\n        return output\n\n\ndef build_darknet19(pretrained=False):\n    # model\n    model = DarkNet_19()\n\n    # load weight\n    if pretrained:\n        print('Loading pretrained weight ...')\n        url = model_urls['darknet19']\n        # checkpoint state dict\n        checkpoint_state_dict = torch.hub.load_state_dict_from_url(\n            url=url, map_location=\"cpu\", check_hash=True)\n        # model state dict\n        model_state_dict = model.state_dict()\n        # check\n        for k in list(checkpoint_state_dict.keys()):\n            if k in model_state_dict:\n                shape_model = tuple(model_state_dict[k].shape)\n                shape_checkpoint = 
tuple(checkpoint_state_dict[k].shape)\n                if shape_model != shape_checkpoint:\n                    checkpoint_state_dict.pop(k)\n            else:\n                checkpoint_state_dict.pop(k)\n                print(k)\n\n        model.load_state_dict(checkpoint_state_dict)\n\n    return model\n\n\nif __name__ == '__main__':\n    import time\n    net = build_darknet19(pretrained=True)\n    x = torch.randn(1, 3, 224, 224)\n    t0 = time.time()\n    output = net(x)\n    t1 = time.time()\n    print('Time: ', t1 - t0)\n\n    for k in output.keys():\n        print('{} : {}'.format(k, output[k].shape))\n"
  },
  {
    "path": "backbone/darknet53.py",
    "content": "import torch\nimport torch.nn as nn\n\n\nmodel_urls = {\n    \"darknet53\": \"https://github.com/yjh0410/image_classification_pytorch/releases/download/weight/darknet53.pth\",\n}\n\n\n__all__ = ['darknet53']\n\n\nclass Conv_BN_LeakyReLU(nn.Module):\n    def __init__(self, in_channels, out_channels, ksize, padding=0, stride=1, dilation=1):\n        super(Conv_BN_LeakyReLU, self).__init__()\n        self.convs = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels, ksize, padding=padding, stride=stride, dilation=dilation),\n            nn.BatchNorm2d(out_channels),\n            nn.LeakyReLU(0.1, inplace=True)\n        )\n\n    def forward(self, x):\n        return self.convs(x)\n\n\nclass ResBlock(nn.Module):\n    def __init__(self, ch, nblocks=1):\n        super().__init__()\n        self.module_list = nn.ModuleList()\n        for _ in range(nblocks):\n            resblock_one = nn.Sequential(\n                Conv_BN_LeakyReLU(ch, ch//2, 1),\n                Conv_BN_LeakyReLU(ch//2, ch, 3, padding=1)\n            )\n            self.module_list.append(resblock_one)\n\n    def forward(self, x):\n        for module in self.module_list:\n            x = module(x) + x\n        return x\n\n\nclass DarkNet_53(nn.Module):\n    \"\"\"\n    DarkNet-53.\n    \"\"\"\n    def __init__(self):\n        super(DarkNet_53, self).__init__()\n        # stride = 2\n        self.layer_1 = nn.Sequential(\n            Conv_BN_LeakyReLU(3, 32, 3, padding=1),\n            Conv_BN_LeakyReLU(32, 64, 3, padding=1, stride=2),\n            ResBlock(64, nblocks=1)\n        )\n        # stride = 4\n        self.layer_2 = nn.Sequential(\n            Conv_BN_LeakyReLU(64, 128, 3, padding=1, stride=2),\n            ResBlock(128, nblocks=2)\n        )\n        # stride = 8\n        self.layer_3 = nn.Sequential(\n            Conv_BN_LeakyReLU(128, 256, 3, padding=1, stride=2),\n            ResBlock(256, nblocks=8)\n        )\n        # stride = 16\n        self.layer_4 = nn.Sequential(\n            Conv_BN_LeakyReLU(256, 512, 3, padding=1, stride=2),\n            ResBlock(512, nblocks=8)\n        )\n        # stride = 32\n        self.layer_5 = nn.Sequential(\n            Conv_BN_LeakyReLU(512, 1024, 3, padding=1, stride=2),\n            ResBlock(1024, nblocks=4)\n        )\n\n\n    def forward(self, x, targets=None):\n        c1 = self.layer_1(x)\n        c2 = self.layer_2(c1)\n        c3 = self.layer_3(c2)\n        c4 = self.layer_4(c3)\n        c5 = self.layer_5(c4)\n\n        output = {\n            'layer1': c3,\n            'layer2': c4,\n            'layer3': c5\n        }\n\n        return output\n\n\ndef build_darknet53(pretrained=False):\n    # model\n    model = DarkNet_53()\n\n    # load weight\n    if pretrained:\n        print('Loading pretrained weight ...')\n        url = model_urls['darknet53']\n        # checkpoint state dict\n        checkpoint_state_dict = torch.hub.load_state_dict_from_url(\n            url=url, map_location=\"cpu\", check_hash=True)\n        # model state dict\n        model_state_dict = model.state_dict()\n        # check\n        for k in list(checkpoint_state_dict.keys()):\n            if k in model_state_dict:\n                shape_model = tuple(model_state_dict[k].shape)\n                shape_checkpoint = tuple(checkpoint_state_dict[k].shape)\n                if shape_model != shape_checkpoint:\n                    checkpoint_state_dict.pop(k)\n            else:\n                checkpoint_state_dict.pop(k)\n                print(k)\n\n        
model.load_state_dict(checkpoint_state_dict)\n\n    return model\n\n\nif __name__ == '__main__':\n    import time\n    net = build_darknet53(pretrained=True)\n    x = torch.randn(1, 3, 224, 224)\n    t0 = time.time()\n    output = net(x)\n    t1 = time.time()\n    print('Time: ', t1 - t0)\n\n    for k in output.keys():\n        print('{} : {}'.format(k, output[k].shape))\n"
  },
  {
    "path": "backbone/darknet_tiny.py",
    "content": "import torch\nimport torch.nn as nn\n\n\nmodel_urls = {\n    \"darknet_tiny\": \"https://github.com/yjh0410/image_classification_pytorch/releases/download/weight/darknet_tiny.pth\",\n}\n\n\n__all__ = ['darknet_tiny']\n\n\nclass Conv_BN_LeakyReLU(nn.Module):\n    def __init__(self, in_channels, out_channels, ksize, padding=0, stride=1, dilation=1):\n        super(Conv_BN_LeakyReLU, self).__init__()\n        self.convs = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels, ksize, padding=padding, stride=stride, dilation=dilation),\n            nn.BatchNorm2d(out_channels),\n            nn.LeakyReLU(0.1, inplace=True)\n        )\n\n    def forward(self, x):\n        return self.convs(x)\n\n\nclass DarkNet_Tiny(nn.Module):\n    def __init__(self):\n        \n        super(DarkNet_Tiny, self).__init__()\n        # backbone network : DarkNet_Tiny\n        self.conv_1 = Conv_BN_LeakyReLU(3, 16, 3, 1)\n        self.maxpool_1 = nn.MaxPool2d((2, 2), 2)              # stride = 2\n\n        self.conv_2 = Conv_BN_LeakyReLU(16, 32, 3, 1)\n        self.maxpool_2 = nn.MaxPool2d((2, 2), 2)              # stride = 4\n\n        self.conv_3 = Conv_BN_LeakyReLU(32, 64, 3, 1)\n        self.maxpool_3 = nn.MaxPool2d((2, 2), 2)              # stride = 8\n\n        self.conv_4 = Conv_BN_LeakyReLU(64, 128, 3, 1)\n        self.maxpool_4 = nn.MaxPool2d((2, 2), 2)              # stride = 16\n\n        self.conv_5 = Conv_BN_LeakyReLU(128, 256, 3, 1)\n        self.maxpool_5 = nn.MaxPool2d((2, 2), 2)              # stride = 32\n\n        self.conv_6 = Conv_BN_LeakyReLU(256, 512, 3, 1)\n        self.maxpool_6 = nn.Sequential(\n            nn.ZeroPad2d((0, 1, 0, 1)),\n            nn.MaxPool2d((2, 2), 1)                           # stride = 32\n        )\n\n        self.conv_7 = Conv_BN_LeakyReLU(512, 1024, 3, 1)\n\n\n    def forward(self, x):\n        x = self.conv_1(x)\n        c1 = self.maxpool_1(x)\n        c1 = self.conv_2(c1)\n        c2 = self.maxpool_2(c1)\n        c2 = self.conv_3(c2)\n        c3 = self.maxpool_3(c2)\n        c3 = self.conv_4(c3)\n        c4 = self.maxpool_4(c3)\n        c4 = self.conv_5(c4)       # stride = 16\n        c5 = self.maxpool_5(c4)  \n        c5 = self.conv_6(c5)\n        c5 = self.maxpool_6(c5)\n        c5 = self.conv_7(c5)       # stride = 32\n\n        output = {\n            'layer1': c3,\n            'layer2': c4,\n            'layer3': c5\n        }\n\n        return output\n\n\ndef build_darknet_tiny(pretrained=False):\n    # model\n    model = DarkNet_Tiny()\n\n    # load weight\n    if pretrained:\n        print('Loading pretrained weight ...')\n        url = model_urls['darknet_tiny']\n        # checkpoint state dict\n        checkpoint_state_dict = torch.hub.load_state_dict_from_url(\n            url=url, map_location=\"cpu\", check_hash=True)\n        # model state dict\n        model_state_dict = model.state_dict()\n        # check\n        for k in list(checkpoint_state_dict.keys()):\n            if k in model_state_dict:\n                shape_model = tuple(model_state_dict[k].shape)\n                shape_checkpoint = tuple(checkpoint_state_dict[k].shape)\n                if shape_model != shape_checkpoint:\n                    checkpoint_state_dict.pop(k)\n            else:\n                checkpoint_state_dict.pop(k)\n                print(k)\n\n        model.load_state_dict(checkpoint_state_dict)\n\n    return model\n\n\nif __name__ == '__main__':\n    import time\n    net = build_darknet_tiny(pretrained=True)\n    x = torch.randn(1, 3, 
224, 224)\n    t0 = time.time()\n    output = net(x)\n    t1 = time.time()\n    print('Time: ', t1 - t0)\n\n    for k in output.keys():\n        print('{} : {}'.format(k, output[k].shape))\n"
  },
  {
    "path": "backbone/resnet.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.utils.model_zoo as model_zoo\n\n\n__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',\n           'resnet152']\n\n\nmodel_urls = {\n    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',\n    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',\n    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',\n    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',\n    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',\n}\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=1, bias=False)\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)\n\nclass BasicBlock(nn.Module):\n    expansion = 1\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(BasicBlock, self).__init__()\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\nclass Bottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(Bottleneck, self).__init__()\n        self.conv1 = conv1x1(inplanes, planes)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.conv2 = conv3x3(planes, planes, stride)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = conv1x1(planes, planes * self.expansion)\n        self.bn3 = nn.BatchNorm2d(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\nclass ResNet(nn.Module):\n\n    def __init__(self, block, layers, zero_init_residual=False):\n        super(ResNet, self).__init__()\n        self.inplanes = 64\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = 
self._make_layer(block, 512, layers[3], stride=2)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n            elif isinstance(m, nn.BatchNorm2d):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n        # Zero-initialize the last BN in each residual branch,\n        # so that the residual branch starts with zeros, and each residual block behaves like an identity.\n        # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677\n        if zero_init_residual:\n            for m in self.modules():\n                if isinstance(m, Bottleneck):\n                    nn.init.constant_(m.bn3.weight, 0)\n                elif isinstance(m, BasicBlock):\n                    nn.init.constant_(m.bn2.weight, 0)\n\n    def _make_layer(self, block, planes, blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(block(self.inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        c1 = self.conv1(x)\n        c1 = self.bn1(c1)\n        c1 = self.relu(c1)\n        c1 = self.maxpool(c1)\n\n        c2 = self.layer1(c1)\n        c3 = self.layer2(c2)\n        c4 = self.layer3(c3)\n        c5 = self.layer4(c4)\n\n        output = {\n            'layer1': c3,\n            'layer2': c4,\n            'layer3': c5\n        }\n\n        return output\n\n\ndef resnet18(pretrained=False, **kwargs):\n    \"\"\"Constructs a ResNet-18 model.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n    \"\"\"\n    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)\n    if pretrained:\n        # strict = False as we don't need fc layer params.\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']), strict=False)\n    return model\n\ndef resnet34(pretrained=False, **kwargs):\n    \"\"\"Constructs a ResNet-34 model.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n    \"\"\"\n    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)\n    if pretrained:\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']), strict=False)\n    return model\n\ndef resnet50(pretrained=False, **kwargs):\n    \"\"\"Constructs a ResNet-50 model.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n    \"\"\"\n    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)\n    if pretrained:\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']), strict=False)\n    return model\n\ndef resnet101(pretrained=False, **kwargs):\n    \"\"\"Constructs a ResNet-101 model.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n    \"\"\"\n    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)\n    if pretrained:\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']), strict=False)\n    return model\n\ndef resnet152(pretrained=False, **kwargs):\n    \"\"\"Constructs 
a ResNet-152 model.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n    \"\"\"\n    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)\n    if pretrained:\n        # strict = False as we don't need fc layer params.\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']), strict=False)\n    return model\n\n\ndef build_resnet(model_name='resnet18', pretrained=False):\n    if model_name == 'resnet18':\n        model = resnet18(pretrained=pretrained)\n\n    elif model_name == 'resnet34':\n        model = resnet34(pretrained=pretrained)\n\n    elif model_name == 'resnet50':\n        model = resnet50(pretrained=pretrained)\n\n    elif model_name == 'resnet101':\n        model = resnet101(pretrained=pretrained)\n\n    elif model_name == 'resnet152':\n        model = resnet152(pretrained=pretrained)\n\n    else:\n        raise ValueError('Unknown ResNet: {}'.format(model_name))\n\n    return model\n\n\nif __name__ == \"__main__\":\n    import time\n\n    model = build_resnet(model_name='resnet18', pretrained=True)\n    x = torch.randn(1, 3, 224, 224)\n    t0 = time.time()\n    output = model(x)\n    t1 = time.time()\n    print('Time: ', t1 - t0)\n\n    for k in output.keys():\n        print('{} : {}'.format(k, output[k].shape))\n"
  },
  {
    "path": "backbone/weights/README.md",
    "content": "# darknet19, darknet53, darknet-tiny, darknet-light\ndarknet-tiny is designed by myself. It is a very simple and lightweight backbone.\n\ndarknet-light is same to the backbone used in official TinyYOLOv3.\n\nFor researchers in China, you can download them from BaiduYunDisk:\n\nlink：https://pan.baidu.com/s/1Rm87Fcj1RXZFmeTUrDWANA \n\npassword：qgzn\n\n\nAlso, you can download them from Google Drive:\n\nlink: https://drive.google.com/drive/folders/15saMtvYiz3yfFNu5EnC7GSltEAvTImMB?usp=sharing\n"
  },
  {
    "path": "data/__init__.py",
    "content": "from .voc0712 import VOCDetection, VOCAnnotationTransform, VOC_CLASSES\nfrom .coco2017 import COCODataset, coco_class_labels, coco_class_index\nfrom .config import *\nimport torch\nimport cv2\nimport numpy as np\n\n\ndef detection_collate(batch):\n    \"\"\"Custom collate fn for dealing with batches of images that have a different\n    number of associated object annotations (bounding boxes).\n\n    Arguments:\n        batch: (tuple) A tuple of tensor images and lists of annotations\n\n    Return:\n        A tuple containing:\n            1) (tensor) batch of images stacked on their 0 dim\n            2) (list of tensors) annotations for a given image are stacked on\n                                 0 dim\n    \"\"\"\n    targets = []\n    imgs = []\n    for sample in batch:\n        imgs.append(sample[0])\n        targets.append(torch.FloatTensor(sample[1]))\n    return torch.stack(imgs, 0), targets\n\n\ndef base_transform(image, size, mean, std):\n    x = cv2.resize(image, (size, size)).astype(np.float32)\n    x /= 255.\n    x -= mean\n    x /= std\n    return x\n\n\nclass BaseTransform:\n    def __init__(self, size, mean=(0.406, 0.456, 0.485), std=(0.225, 0.224, 0.229)):\n        self.size = size\n        self.mean = np.array(mean, dtype=np.float32)\n        self.std = np.array(std, dtype=np.float32)\n\n    def __call__(self, image, boxes=None, labels=None):\n        return base_transform(image, self.size, self.mean, self.std), boxes, labels\n"
  },
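  {
    "path": "data/example_collate.py",
    "content": "# Usage sketch, not part of the original project: the file name and the toy\n# dataset below are illustrative assumptions. It shows how BaseTransform and\n# detection_collate from data/__init__.py cooperate with a DataLoader,\n# mirroring what VOCDetection/COCODataset return per sample.\nimport numpy as np\nimport torch\nfrom torch.utils.data import Dataset, DataLoader\n\nfrom data import BaseTransform, detection_collate\n\n\nclass ToyDetection(Dataset):\n    \"\"\"Yields (CHW float tensor, [[xmin, ymin, xmax, ymax, cls_id], ...]).\"\"\"\n    def __init__(self, size=416):\n        self.transform = BaseTransform(size)\n\n    def __len__(self):\n        return 4\n\n    def __getitem__(self, index):\n        img = np.random.randint(0, 255, (480, 640, 3)).astype(np.uint8)\n        boxes = np.array([[0.1, 0.2, 0.5, 0.6]])   # normalized, like the real datasets\n        labels = np.array([index % 20])\n        img, boxes, labels = self.transform(img, boxes, labels)\n        # BGR -> RGB and HWC -> CHW, as the real datasets do\n        img = torch.from_numpy(img[:, :, (2, 1, 0)]).permute(2, 0, 1).float()\n        target = np.hstack((boxes, labels[:, None]))\n        return img, target\n\n\nif __name__ == '__main__':\n    loader = DataLoader(ToyDetection(), batch_size=2, collate_fn=detection_collate)\n    for images, targets in loader:\n        # images: [2, 3, 416, 416]; targets: list of per-image [N, 5] tensors\n        print(images.shape, [t.shape for t in targets])\n"
  },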
  {
    "path": "data/coco2017.py",
    "content": "import os\nimport numpy as np\nimport random\n\nimport torch\nfrom torch.utils.data import Dataset\nimport cv2\nfrom pycocotools.coco import COCO\n\n\ncoco_class_labels = ('background',\n                        'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck',\n                        'boat', 'traffic light', 'fire hydrant', 'street sign', 'stop sign',\n                        'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',\n                        'elephant', 'bear', 'zebra', 'giraffe', 'hat', 'backpack', 'umbrella',\n                        'shoe', 'eye glasses', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis',\n                        'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove',\n                        'skateboard', 'surfboard', 'tennis racket', 'bottle', 'plate', 'wine glass',\n                        'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich',\n                        'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair',\n                        'couch', 'potted plant', 'bed', 'mirror', 'dining table', 'window', 'desk',\n                        'toilet', 'door', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',\n                        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'blender', 'book',\n                        'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush')\n\ncoco_class_index = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20,\n                    21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,\n                    46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67,\n                    70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]\n\n\nclass COCODataset(Dataset):\n    \"\"\"\n    COCO dataset class.\n    \"\"\"\n    def __init__(self, \n                 data_dir=None, \n                 transform=None, \n                 json_file='instances_train2017.json',\n                 name='train2017'):\n        \"\"\"\n        COCO dataset initialization. Annotation data are read into memory by COCO API.\n        Args:\n            data_dir (str): dataset root directory\n            json_file (str): COCO json file name\n            name (str): COCO data name (e.g. 
'train2017' or 'val2017')\n        \"\"\"\n        self.data_dir = data_dir\n        self.json_file = json_file\n        self.coco = COCO(os.path.join(self.data_dir, 'annotations', self.json_file))\n        self.ids = self.coco.getImgIds()\n        self.class_ids = sorted(self.coco.getCatIds())\n        self.name = name\n        self.transform = transform\n\n\n    def __len__(self):\n        return len(self.ids)\n\n\n    def pull_image(self, index):\n        id_ = self.ids[index]\n        img_file = os.path.join(self.data_dir, self.name,\n                                '{:012}'.format(id_) + '.jpg')\n        img = cv2.imread(img_file)\n\n        if self.json_file == 'instances_val5k.json' and img is None:\n            img_file = os.path.join(self.data_dir, 'train2017',\n                                    '{:012}'.format(id_) + '.jpg')\n            img = cv2.imread(img_file)\n\n        return img, id_\n\n\n    def pull_anno(self, index):\n        id_ = self.ids[index]\n\n        anno_ids = self.coco.getAnnIds(imgIds=[int(id_)], iscrowd=None)\n        annotations = self.coco.loadAnns(anno_ids)\n\n        target = []\n        for anno in annotations:\n            if 'bbox' in anno:\n                xmin = np.max((0, anno['bbox'][0]))\n                ymin = np.max((0, anno['bbox'][1]))\n                xmax = xmin + anno['bbox'][2]\n                ymax = ymin + anno['bbox'][3]\n\n                if anno['area'] > 0 and xmax >= xmin and ymax >= ymin:\n                    label_ind = anno['category_id']\n                    cls_id = self.class_ids.index(label_ind)\n\n                    target.append([xmin, ymin, xmax, ymax, cls_id])  # [xmin, ymin, xmax, ymax, label_ind]\n            else:\n                print('No bbox !!')\n        return target\n\n\n    def __getitem__(self, index):\n        img, gt, h, w = self.pull_item(index)\n\n        return img, gt\n\n\n    def pull_item(self, index):\n        id_ = self.ids[index]\n\n        anno_ids = self.coco.getAnnIds(imgIds=[int(id_)], iscrowd=None)\n        annotations = self.coco.loadAnns(anno_ids)\n\n        # load an image\n        img_file = os.path.join(self.data_dir, self.name,\n                                '{:012}'.format(id_) + '.jpg')\n        img = cv2.imread(img_file)\n\n        if self.json_file == 'instances_val5k.json' and img is None:\n            img_file = os.path.join(self.data_dir, 'train2017',\n                                    '{:012}'.format(id_) + '.jpg')\n            img = cv2.imread(img_file)\n\n        assert img is not None\n\n        height, width, channels = img.shape\n\n        # load a target\n        target = []\n        for anno in annotations:\n            if 'bbox' in anno and anno['area'] > 0:\n                xmin = np.max((0, anno['bbox'][0]))\n                ymin = np.max((0, anno['bbox'][1]))\n                xmax = np.min((width - 1, xmin + np.max((0, anno['bbox'][2] - 1))))\n                ymax = np.min((height - 1, ymin + np.max((0, anno['bbox'][3] - 1))))\n                if xmax > xmin and ymax > ymin:\n                    label_ind = anno['category_id']\n                    cls_id = self.class_ids.index(label_ind)\n                    xmin /= width\n                    ymin /= height\n                    xmax /= width\n                    ymax /= height\n\n                    target.append([xmin, ymin, xmax, ymax, cls_id])  # [xmin, ymin, xmax, ymax, label_ind]\n            else:\n                print('No bbox !!!')\n\n        # check target\n        if len(target) == 0:\n            target = np.zeros([1, 5])\n        else:\n            target = np.array(target)\n        # transform\n        if self.transform is not None:\n            img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])\n            # to rgb\n            img = img[:, :, (2, 1, 0)]\n            # to tensor\n            img = torch.from_numpy(img).permute(2, 0, 1).float()\n            target = np.hstack((boxes, np.expand_dims(labels, axis=1)))\n\n        return img, target, height, width\n\n\nif __name__ == \"__main__\":\n    def base_transform(image, size, mean):\n        x = cv2.resize(image, (size, size)).astype(np.float32)\n        x -= mean\n        x = x.astype(np.float32)\n        return x\n\n    class BaseTransform:\n        def __init__(self, size, mean):\n            self.size = size\n            self.mean = np.array(mean, dtype=np.float32)\n\n        def __call__(self, image, boxes=None, labels=None):\n            return base_transform(image, self.size, self.mean), boxes, labels\n\n    img_size = 640\n    dataset = COCODataset(\n                data_dir='/mnt/share/ssd2/dataset/COCO/',\n                transform=BaseTransform(img_size, (0, 0, 0)))\n\n    for i in range(1000):\n        im, gt, h, w = dataset.pull_item(i)\n        img = im.permute(1,2,0).numpy()[:, :, (2, 1, 0)].astype(np.uint8)\n        img = img.copy()\n\n        for box in gt:\n            xmin, ymin, xmax, ymax, _ = box\n            xmin *= img_size\n            ymin *= img_size\n            xmax *= img_size\n            ymax *= img_size\n            img = cv2.rectangle(img, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0,0,255), 2)\n        cv2.imshow('gt', img)\n        # cv2.imwrite(str(i)+'.jpg', img)\n        cv2.waitKey(0)\n"
  },
  {
    "path": "data/config.py",
    "content": "# config.py\n\n# YOLOv2 with darknet-19\nyolov2_d19_cfg = {\n    # network\n    'backbone': 'd19',\n    # for multi-scale trick\n    'train_size': 640,\n    'val_size': 416,\n    'random_size_range': [10, 19],\n    # anchor size\n    'anchor_size_voc': [[1.19, 1.98], [2.79, 4.59], [4.53, 8.92], [8.06, 5.29], [10.32, 10.65]],\n    'anchor_size_coco': [[0.53, 0.79], [1.71, 2.36], [2.89, 6.44], [6.33, 3.79], [9.03, 9.74]],\n    # train\n    'lr_epoch': (150, 200),\n    'max_epoch': 250,\n    'ignore_thresh': 0.5\n}\n\n# YOLOv2 with resnet-50\nyolov2_r50_cfg = {\n    # network\n    'backbone': 'r50',\n    # for multi-scale trick\n    'train_size': 640,\n    'val_size': 416,\n    'random_size_range': [10, 19],\n    # anchor size\n    'anchor_size_voc': [[1.19, 1.98], [2.79, 4.59], [4.53, 8.92], [8.06, 5.29], [10.32, 10.65]],\n    'anchor_size_coco': [[0.53, 0.79], [1.71, 2.36], [2.89, 6.44], [6.33, 3.79], [9.03, 9.74]],\n    # train\n    'lr_epoch': (150, 200),\n    'max_epoch': 250,\n    'ignore_thresh': 0.5\n}\n\n# YOLOv3 / YOLOv3Spp\nyolov3_d53_cfg = {\n    # network\n    'backbone': 'd53',\n    # for multi-scale trick\n    'train_size': 640,\n    'val_size': 416,\n    'random_size_range': [10, 19],\n    # anchor size\n    'anchor_size_voc': [[32.64, 47.68], [50.24, 108.16], [126.72, 96.32],     \n                        [78.4, 201.92], [178.24, 178.56], [129.6, 294.72],     \n                        [331.84, 194.56], [227.84, 325.76], [365.44, 358.72]],\n    'anchor_size_coco': [[12.48, 19.2], [31.36, 46.4],[46.4, 113.92],\n                         [97.28, 55.04], [133.12, 127.36], [79.04, 224.],\n                         [301.12, 150.4 ], [172.16, 285.76], [348.16, 341.12]],\n    # train\n    'lr_epoch': (150, 200),\n    'max_epoch': 250,\n    'ignore_thresh': 0.5\n}\n\n# YOLOv3Tiny\nyolov3_tiny_cfg = {\n    # network\n    'backbone': 'd-light',\n    # for multi-scale trick\n    'train_size': 640,\n    'val_size': 416,\n    'random_size_range':[10, 19],\n    # anchor size\n    'anchor_size_voc': [[34.01, 61.79],   [86.94, 109.68],  [93.49, 227.46],     \n                        [246.38, 163.33], [178.68, 306.55], [344.89, 337.14]],\n    'anchor_size_coco': [[15.09, 23.25],  [46.36, 61.47],   [68.41, 161.84],\n                         [168.88, 93.59], [154.96, 257.45], [334.74, 302.47]],\n    # train\n    'lr_epoch': (150, 200),\n    'max_epoch': 250,\n    'ignore_thresh': 0.5\n}\n"
  },
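  {
    "path": "examples/show_multi_scale_sizes.py",
    "content": "# Illustrative sketch only (not referenced by any training or eval code in\n# this repo): the 'random_size_range' values in data/config.py are, per the\n# YOLOv2 multi-scale trick, conventionally multiplied by the stride of 32 to\n# get concrete square input sizes. With [10, 19] that yields 320..608, and\n# val_size 416 = 13 * 32 falls inside the range.\nfrom data.config import yolov2_d19_cfg\n\nlo, hi = yolov2_d19_cfg['random_size_range']\nsizes = [s * 32 for s in range(lo, hi + 1)]\nprint(sizes)  # [320, 352, 384, 416, 448, 480, 512, 544, 576, 608]\n"
  },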
  {
    "path": "data/scripts/COCO2017.sh",
    "content": "#!/bin/bash\nmkdir COCO\ncd COCO\n\nwget http://images.cocodataset.org/zips/train2017.zip\nwget http://images.cocodataset.org/zips/val2017.zip\nwget http://images.cocodataset.org/annotations/annotations_trainval2017.zip\nwget http://images.cocodataset.org/zips/test2017.zip\nwget http://images.cocodataset.org/annotations/image_info_test2017.zip\n\nunzip train2017.zip\nunzip val2017.zip\nunzip annotations_trainval2017.zip\nunzip test2017.zip\nunzip image_info_test2017.zip\n\n# rm -f train2017.zip\n# rm -f val2017.zip\n# rm -f annotations_trainval2017.zip\n# rm -f test2017.zip\n# rm -f image_info_test2017.zip\n"
  },
  },
  {
    "path": "data/scripts/VOC2007.sh",
    "content": "#!/bin/bash\n# Ellis Brown\n\nstart=`date +%s`\n\n# handle optional download dir\nif [ -z \"$1\" ]\n  then\n    # navigate to ~/data\n    echo \"navigating to ~/data/ ...\" \n    mkdir -p ~/data\n    cd ~/data/\n  else\n    # check if is valid directory\n    if [ ! -d $1 ]; then\n        echo $1 \"is not a valid directory\"\n        exit 0\n    fi\n    echo \"navigating to\" $1 \"...\"\n    cd $1\nfi\n\necho \"Downloading VOC2007 trainval ...\"\n# Download the data.\ncurl -LO http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar\necho \"Downloading VOC2007 test data ...\"\ncurl -LO http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar\necho \"Done downloading.\"\n\n# Extract data\necho \"Extracting trainval ...\"\ntar -xvf VOCtrainval_06-Nov-2007.tar\necho \"Extracting test ...\"\ntar -xvf VOCtest_06-Nov-2007.tar\necho \"removing tars ...\"\nrm VOCtrainval_06-Nov-2007.tar\nrm VOCtest_06-Nov-2007.tar\n\nend=`date +%s`\nruntime=$((end-start))\n\necho \"Completed in\" $runtime \"seconds\""
  },
  {
    "path": "data/scripts/VOC2012.sh",
    "content": "#!/bin/bash\n# Ellis Brown\n\nstart=`date +%s`\n\n# handle optional download dir\nif [ -z \"$1\" ]\n  then\n    # navigate to ~/data\n    echo \"navigating to ~/data/ ...\" \n    mkdir -p ~/data\n    cd ~/data/\n  else\n    # check if is valid directory\n    if [ ! -d $1 ]; then\n        echo $1 \"is not a valid directory\"\n        exit 0\n    fi\n    echo \"navigating to\" $1 \"...\"\n    cd $1\nfi\n\necho \"Downloading VOC2012 trainval ...\"\n# Download the data.\ncurl -LO http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar\necho \"Done downloading.\"\n\n\n# Extract data\necho \"Extracting trainval ...\"\ntar -xvf VOCtrainval_11-May-2012.tar\necho \"removing tar ...\"\nrm VOCtrainval_11-May-2012.tar\n\nend=`date +%s`\nruntime=$((end-start))\n\necho \"Completed in\" $runtime \"seconds\""
  },
  {
    "path": "data/voc0712.py",
    "content": "\"\"\"VOC Dataset Classes\n\nOriginal author: Francisco Massa\nhttps://github.com/fmassa/vision/blob/voc_dataset/torchvision/datasets/voc.py\n\nUpdated by: Ellis Brown, Max deGroot\n\"\"\"\nimport os.path as osp\nimport sys\nimport torch\nimport torch.utils.data as data\nimport cv2\nimport numpy as np\nimport random\nimport xml.etree.ElementTree as ET\n\n\nVOC_CLASSES = (  # always index 0\n    'aeroplane', 'bicycle', 'bird', 'boat',\n    'bottle', 'bus', 'car', 'cat', 'chair',\n    'cow', 'diningtable', 'dog', 'horse',\n    'motorbike', 'person', 'pottedplant',\n    'sheep', 'sofa', 'train', 'tvmonitor')\n\n\nclass VOCAnnotationTransform(object):\n    \"\"\"Transforms a VOC annotation into a Tensor of bbox coords and label index\n    Initilized with a dictionary lookup of classnames to indexes\n\n    Arguments:\n        class_to_ind (dict, optional): dictionary lookup of classnames -> indexes\n            (default: alphabetic indexing of VOC's 20 classes)\n        keep_difficult (bool, optional): keep difficult instances or not\n            (default: False)\n        height (int): height\n        width (int): width\n    \"\"\"\n\n    def __init__(self, class_to_ind=None, keep_difficult=False):\n        self.class_to_ind = class_to_ind or dict(\n            zip(VOC_CLASSES, range(len(VOC_CLASSES))))\n        self.keep_difficult = keep_difficult\n\n    def __call__(self, target, width, height):\n        \"\"\"\n        Arguments:\n            target (annotation) : the target annotation to be made usable\n                will be an ET.Element\n        Returns:\n            a list containing lists of bounding boxes  [bbox coords, class name]\n        \"\"\"\n        res = []\n        for obj in target.iter('object'):\n            difficult = int(obj.find('difficult').text) == 1\n            if not self.keep_difficult and difficult:\n                continue\n            name = obj.find('name').text.lower().strip()\n            bbox = obj.find('bndbox')\n\n            pts = ['xmin', 'ymin', 'xmax', 'ymax']\n            bndbox = []\n            for i, pt in enumerate(pts):\n                cur_pt = int(bbox.find(pt).text) - 1\n                # scale height or width\n                cur_pt = cur_pt / width if i % 2 == 0 else cur_pt / height\n                bndbox.append(cur_pt)\n            label_idx = self.class_to_ind[name]\n            bndbox.append(label_idx)\n            res += [bndbox]  # [xmin, ymin, xmax, ymax, label_ind]\n            # img_id = target.find('filename').text[:-4]\n\n        return res  # [[xmin, ymin, xmax, ymax, label_ind], ... ]\n\n\nclass VOCDetection(data.Dataset):\n    \"\"\"VOC Detection Dataset Object\n\n    input is image, target is annotation\n\n    Arguments:\n        root (string): filepath to VOCdevkit folder.\n        image_set (string): imageset to use (eg. 
'train', 'val', 'test')\n        transform (callable, optional): transformation to perform on the\n            input image\n        target_transform (callable, optional): transformation to perform on the\n            target `annotation`\n            (eg: take in caption string, return tensor of word indices)\n        dataset_name (string, optional): which dataset to load\n            (default: 'VOC2007')\n    \"\"\"\n\n    def __init__(self, \n                 data_dir=None,\n                 image_sets=[('2007', 'trainval'), ('2012', 'trainval')],\n                 transform=None, \n                 target_transform=VOCAnnotationTransform(),\n                 dataset_name='VOC0712'):\n        self.root = data_dir\n        self.image_set = image_sets\n        self.transform = transform\n        self.target_transform = target_transform\n        self.name = dataset_name\n        self._annopath = osp.join('%s', 'Annotations', '%s.xml')\n        self._imgpath = osp.join('%s', 'JPEGImages', '%s.jpg')\n        self.ids = list()\n        for (year, name) in image_sets:\n            rootpath = osp.join(self.root, 'VOC' + year)\n            for line in open(osp.join(rootpath, 'ImageSets', 'Main', name + '.txt')):\n                self.ids.append((rootpath, line.strip()))\n\n\n    def __getitem__(self, index):\n        im, gt, h, w = self.pull_item(index)\n\n        return im, gt\n\n\n    def __len__(self):\n        return len(self.ids)\n\n\n    def pull_item(self, index):\n        # load an image\n        img_id = self.ids[index]\n        img = cv2.imread(self._imgpath % img_id)\n        height, width, channels = img.shape\n\n        # load a target\n        target = ET.parse(self._annopath % img_id).getroot()\n        if self.target_transform is not None:\n            target = self.target_transform(target, width, height)\n\n        # check target\n        if len(target) == 0:\n            target = np.zeros([1, 5])\n        else:\n            target = np.array(target)\n        # transform\n        if self.transform is not None:\n            img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])\n            # to rgb\n            img = img[:, :, (2, 1, 0)]\n            # to tensor\n            img = torch.from_numpy(img).permute(2, 0, 1).float()\n            # target\n            target = np.hstack((boxes, np.expand_dims(labels, axis=1)))\n\n        return img, target, height, width\n\n\n    def pull_image(self, index):\n        '''Returns the original image object at index in PIL form\n\n        Note: not using self.__getitem__(), as any transformations passed in\n        could mess up this functionality.\n\n        Argument:\n            index (int): index of img to show\n        Return:\n            PIL img\n        '''\n        img_id = self.ids[index]\n        return cv2.imread(self._imgpath % img_id, cv2.IMREAD_COLOR), img_id\n\n\n    def pull_anno(self, index):\n        '''Returns the original annotation of image at index\n\n        Note: not using self.__getitem__(), as any transformations passed in\n        could mess up this functionality.\n\n        Argument:\n            index (int): index of img to get annotation of\n        Return:\n            list:  [img_id, [(label, bbox coords),...]]\n                eg: ('001718', [('dog', (96, 13, 438, 332))])\n        '''\n        img_id = self.ids[index]\n        anno = ET.parse(self._annopath % img_id).getroot()\n        gt = self.target_transform(anno, 1, 1)\n        return img_id[1], gt\n\n\n    def pull_tensor(self, index):\n 
       '''Returns the original image at an index in tensor form\n\n        Note: not using self.__getitem__(), as any transformations passed in\n        could mess up this functionality.\n\n        Argument:\n            index (int): index of img to show\n        Return:\n            tensorized version of img, squeezed\n        '''\n        return torch.Tensor(self.pull_image(index)).unsqueeze_(0)\n\n\nif __name__ == \"__main__\":\n    def base_transform(image, size, mean):\n        x = cv2.resize(image, (size, size)).astype(np.float32)\n        x -= mean\n        x = x.astype(np.float32)\n        return x\n\n    class BaseTransform:\n        def __init__(self, size, mean):\n            self.size = size\n            self.mean = np.array(mean, dtype=np.float32)\n\n        def __call__(self, image, boxes=None, labels=None):\n            return base_transform(image, self.size, self.mean), boxes, labels\n\n    img_size = 640\n    # dataset\n    dataset = VOCDetection(data_dir='/mnt/share/ssd2/dataset/VOCdevkit/', \n                           image_sets=[('2007', 'trainval')],\n                           transform=BaseTransform(img_size, (0, 0, 0)))\n    for i in range(1000):\n        im, gt, h, w = dataset.pull_item(i)\n        img = im.permute(1,2,0).numpy()[:, :, (2, 1, 0)].astype(np.uint8)\n        img = img.copy()\n        for box in gt:\n            xmin, ymin, xmax, ymax, _ = box\n            xmin *= img_size\n            ymin *= img_size\n            xmax *= img_size\n            ymax *= img_size\n            img = cv2.rectangle(img, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0,0,255), 2)\n        cv2.imshow('gt', img)\n        cv2.waitKey(0)\n"
  },
  {
    "path": "demo.py",
    "content": "import argparse\nimport os\nimport numpy as np\nimport cv2\nimport time\nimport torch\nfrom data.coco2017 import coco_class_index, coco_class_labels\nfrom data import config, BaseTransform\n\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='YOLO Demo Detection')\n    # basic\n    parser.add_argument('--mode', default='image',\n                        type=str, help='Use the data from image, video or camera')\n    parser.add_argument('-size', '--input_size', default=416, type=int,\n                        help='input_size')\n    parser.add_argument('--cuda', action='store_true', default=False,\n                        help='Use cuda')\n    parser.add_argument('--path_to_img', default='data/demo/images/',\n                        type=str, help='The path to image files')\n    parser.add_argument('--path_to_vid', default='data/demo/videos/',\n                        type=str, help='The path to video files')\n    parser.add_argument('--path_to_save', default='det_results/',\n                        type=str, help='The path to save the detection results')\n    parser.add_argument('-vs', '--visual_threshold', default=0.3,\n                        type=float, help='visual threshold')\n    # model\n    parser.add_argument('-v', '--version', default='yolov2_d19',\n                        help='yolov2_d19, yolov2_r50, yolov2_slim, yolov3, yolov3_spp, yolov3_tiny')\n    parser.add_argument('--conf_thresh', default=0.1, type=float,\n                        help='confidence threshold')\n    parser.add_argument('--nms_thresh', default=0.45, type=float,\n                        help='NMS threshold')\n    parser.add_argument('--trained_model', default='weights/',\n                        type=str, help='Trained state_dict file path to open')\n\n    return parser.parse_args()\n\n\ndef plot_bbox_labels(img, bbox, label, cls_color, text_scale=0.4):\n    x1, y1, x2, y2 = bbox\n    x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)\n    t_size = cv2.getTextSize(label, 0, fontScale=1, thickness=2)[0]\n    # plot bbox\n    cv2.rectangle(img, (x1, y1), (x2, y2), cls_color, 2)\n    # plot title bbox\n    cv2.rectangle(img, (x1, y1-t_size[1]), (int(x1 + t_size[0] * text_scale), y1), cls_color, -1)\n    # put the text on the title bbox\n    cv2.putText(img, label, (int(x1), int(y1 - 5)), 0, text_scale, (0, 0, 0), 1, lineType=cv2.LINE_AA)\n\n    return img\n\n\ndef visualize(img, bboxes, scores, cls_inds, class_colors, vis_thresh=0.3):\n    ts = 0.4\n    for i, bbox in enumerate(bboxes):\n        if scores[i] > vis_thresh:\n            cls_color = class_colors[int(cls_inds[i])]\n            cls_id = coco_class_index[int(cls_inds[i])]\n            mess = '%s: %.2f' % (coco_class_labels[cls_id], scores[i])\n            img = plot_bbox_labels(img, bbox, mess, cls_color, text_scale=ts)\n\n    return img\n\n\ndef detect(net, \n           device, \n           transform, \n           vis_thresh, \n           mode='image', \n           path_to_img=None, \n           path_to_vid=None, \n           path_to_save=None):\n    # class color\n    class_colors = [(np.random.randint(255),\n                     np.random.randint(255),\n                     np.random.randint(255)) for _ in range(80)]\n    save_path = os.path.join(path_to_save, mode)\n    os.makedirs(save_path, exist_ok=True)\n\n    # ------------------------- Camera ----------------------------\n    if mode == 'camera':\n        print('use camera !!!')\n        # NOTE: cv2.CAP_DSHOW selects the Windows-only DirectShow backend; on\n        # Linux/macOS, plain cv2.VideoCapture(0) is the portable call.\n        cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n        
while True:\n            ret, frame = cap.read()\n            if ret:\n                if cv2.waitKey(1) == ord('q'):\n                    break\n                img_h, img_w = frame.shape[:2]\n                scale = np.array([[img_w, img_h, img_w, img_h]])\n\n                # prepare\n                x = torch.from_numpy(transform(frame)[0][:, :, ::-1]).permute(2, 0, 1)\n                x = x.unsqueeze(0).to(device)\n                # inference\n                t0 = time.time()\n                bboxes, scores, cls_inds = net(x)\n                t1 = time.time()\n                print(\"detection time used \", t1-t0, \"s\")\n\n                # rescale\n                bboxes *= scale\n\n                frame_processed = visualize(img=frame, \n                                            bboxes=bboxes,\n                                            scores=scores, \n                                            cls_inds=cls_inds,\n                                            class_colors=class_colors,\n                                            vis_thresh=vis_thresh)\n                cv2.imshow('detection result', frame_processed)\n                cv2.waitKey(1)\n            else:\n                break\n        cap.release()\n        cv2.destroyAllWindows()\n\n    # ------------------------- Image ----------------------------\n    elif mode == 'image':\n        for i, img_id in enumerate(os.listdir(path_to_img)):\n            img = cv2.imread(path_to_img + '/' + img_id, cv2.IMREAD_COLOR)\n            img_h, img_w = img.shape[:2]\n            scale = np.array([[img_w, img_h, img_w, img_h]])\n            \n            # prepare\n            x = torch.from_numpy(transform(img)[0][:, :, ::-1]).permute(2, 0, 1)\n            x = x.unsqueeze(0).to(device)\n            # inference\n            t0 = time.time()\n            bboxes, scores, cls_inds = net(x)\n            t1 = time.time()\n            print(\"detection time used \", t1-t0, \"s\")\n\n            # rescale\n            bboxes *= scale\n\n            img_processed = visualize(img=img, \n                                    bboxes=bboxes,\n                                    scores=scores, \n                                    cls_inds=cls_inds,\n                                    class_colors=class_colors,\n                                    vis_thresh=vis_thresh)\n\n            cv2.imshow('detection', img_processed)\n            cv2.imwrite(os.path.join(save_path, str(i).zfill(6)+'.jpg'), img_processed)\n            cv2.waitKey(0)\n\n    # ------------------------- Video ---------------------------\n    elif mode == 'video':\n        video = cv2.VideoCapture(path_to_vid)\n        fourcc = cv2.VideoWriter_fourcc(*'XVID')\n        save_size = (640, 480)\n        save_path = os.path.join(save_path, 'det.avi')\n        fps = 15.0\n        out = cv2.VideoWriter(save_path, fourcc, fps, save_size)\n\n        while(True):\n            ret, frame = video.read()\n            \n            if ret:\n                # ------------------------- Detection ---------------------------\n                img_h, img_w = frame.shape[:2]\n                scale = np.array([[img_w, img_h, img_w, img_h]])\n                # prepare\n                x = torch.from_numpy(transform(frame)[0][:, :, ::-1]).permute(2, 0, 1)\n                x = x.unsqueeze(0).to(device)\n                # inference\n                t0 = time.time()\n                bboxes, scores, cls_inds = net(x)\n                t1 = time.time()\n                print(\"detection time used \", 
t1-t0, \"s\")\n\n                # rescale\n                bboxes *= scale\n\n                frame_processed = visualize(img=frame, \n                                            bboxes=bboxes,\n                                            scores=scores, \n                                            cls_inds=cls_inds,\n                                            class_colors=class_colors,\n                                            vis_thresh=vis_thresh)\n\n                frame_processed_resize = cv2.resize(frame_processed, save_size)\n                out.write(frame_processed_resize)\n                cv2.imshow('detection', frame_processed)\n                cv2.waitKey(1)\n            else:\n                break\n        video.release()\n        out.release()\n        cv2.destroyAllWindows()\n\n\ndef run():\n    args = parse_args()\n\n    # use cuda\n    if args.cuda:\n        device = torch.device(\"cuda\")\n    else:\n        device = torch.device(\"cpu\")\n\n    # model\n    model_name = args.version\n    print('Model: ', model_name)\n\n    # load model and config file\n    if model_name == 'yolov2_d19':\n        from models.yolov2_d19 import YOLOv2D19 as yolo_net\n        cfg = config.yolov2_d19_cfg\n\n    elif model_name == 'yolov2_r50':\n        from models.yolov2_r50 import YOLOv2R50 as yolo_net\n        cfg = config.yolov2_r50_cfg\n\n    elif model_name == 'yolov2_slim':\n        from models.yolov2_slim import YOLOv2Slim as yolo_net\n        cfg = config.yolov2_slim_cfg\n\n    elif model_name == 'yolov3':\n        from models.yolov3 import YOLOv3 as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_spp':\n        from models.yolov3_spp import YOLOv3Spp as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_tiny':\n        from models.yolov3_tiny import YOLOv3tiny as yolo_net\n        cfg = config.yolov3_tiny_cfg\n    else:\n        print('Unknown model name...')\n        exit(0)\n\n    # the models' create_grid() and BaseTransform expect a scalar size, as in eval.py\n    input_size = args.input_size\n\n    # build model\n    anchor_size = cfg['anchor_size_coco']\n    net = yolo_net(device=device, \n                   input_size=input_size, \n                   num_classes=80, \n                   trainable=False, \n                   conf_thresh=args.conf_thresh,\n                   nms_thresh=args.nms_thresh,\n                   anchor_size=anchor_size)\n\n    # load weight\n    net.load_state_dict(torch.load(args.trained_model, map_location=device))\n    net.to(device).eval()\n    print('Finished loading model!')\n\n    # run\n    detect(net=net, \n            device=device,\n            transform=BaseTransform(input_size),\n            mode=args.mode,\n            path_to_img=args.path_to_img,\n            path_to_vid=args.path_to_vid,\n            path_to_save=args.path_to_save,\n            vis_thresh=args.visual_threshold\n            )\n\n\nif __name__ == '__main__':\n    run()\n"
  },
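  {
    "path": "examples/run_demo.sh",
    "content": "#!/bin/bash\n# Illustrative invocation of demo.py; the flags mirror the argparse options it\n# defines, and the weight file below is a placeholder - point --trained_model\n# at whatever checkpoint you actually have.\npython demo.py --mode image \\\n               -v yolov2_d19 \\\n               -size 416 \\\n               --trained_model weights/yolov2_d19.pth \\\n               --path_to_img data/demo/images/ \\\n               --path_to_save det_results/\n"
  },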
  {
    "path": "eval.py",
    "content": "import argparse\nimport os\nimport torch\n\nfrom utils.vocapi_evaluator import VOCAPIEvaluator\nfrom utils.cocoapi_evaluator import COCOAPIEvaluator\nfrom data import BaseTransform, config\n\n\n\nparser = argparse.ArgumentParser(description='YOLO Detector Evaluation')\nparser.add_argument('-v', '--version', default='yolov2_d19',\n                    help='yolov2_d19, yolov2_r50, yolov2_slim, yolov3, yolov3_spp, yolov3_tiny')\nparser.add_argument('--trained_model', type=str, default='weights/',\n                    help='Trained state_dict file path to open')\nparser.add_argument('-size', '--input_size', default=416, type=int,\n                    help='input_size')\nparser.add_argument('--cuda', action='store_true', default=False,\n                    help='Use cuda')\n# dataset\nparser.add_argument('--root', default='/mnt/share/ssd2/dataset',\n                    help='data root')\nparser.add_argument('-d', '--dataset', default='coco-val',\n                    help='voc, coco-val, coco-test.')\n\nargs = parser.parse_args()\n\n\n\ndef voc_test(model, data_dir, device, input_size):\n    evaluator = VOCAPIEvaluator(data_root=data_dir,\n                                img_size=input_size,\n                                device=device,\n                                transform=BaseTransform(input_size),\n                                display=True)\n\n    # VOC evaluation\n    evaluator.evaluate(model)\n\n\ndef coco_test(model, data_dir, device, input_size, test=False):\n    if test:\n        # test-dev\n        print('test on test-dev 2017')\n        evaluator = COCOAPIEvaluator(\n                        data_dir=data_dir,\n                        img_size=input_size,\n                        device=device,\n                        testset=True,\n                        transform=BaseTransform(input_size)\n                        )\n\n    else:\n        # eval\n        evaluator = COCOAPIEvaluator(\n                        data_dir=data_dir,\n                        img_size=input_size,\n                        device=device,\n                        testset=False,\n                        transform=BaseTransform(input_size)\n                        )\n\n    # COCO evaluation\n    evaluator.evaluate(model)\n\n\nif __name__ == '__main__':\n    # dataset\n    if args.dataset == 'voc':\n        print('eval on voc ...')\n        num_classes = 20\n        data_dir = os.path.join(args.root, 'VOCdevkit')\n    elif args.dataset == 'coco-val':\n        print('eval on coco-val ...')\n        num_classes = 80\n        data_dir = os.path.join(args.root, 'COCO')\n    elif args.dataset == 'coco-test':\n        print('eval on coco-test-dev ...')\n        num_classes = 80\n        data_dir = os.path.join(args.root, 'COCO')\n    else:\n        print('unknown dataset! we only support voc, coco-val, coco-test.')\n        exit(0)\n\n    # cuda\n    if args.cuda:\n        print('use cuda')\n        torch.backends.cudnn.benchmark = True\n        device = torch.device(\"cuda\")\n    else:\n        device = torch.device(\"cpu\")\n\n\n    # model\n    model_name = args.version\n    print('Model: ', model_name)\n\n    # load model and config file\n    if model_name == 'yolov2_d19':\n        from models.yolov2_d19 import YOLOv2D19 as yolo_net\n        cfg = config.yolov2_d19_cfg\n\n    elif model_name == 'yolov2_r50':\n        from models.yolov2_r50 import YOLOv2R50 as yolo_net\n        cfg = config.yolov2_r50_cfg\n\n    elif model_name == 'yolov2_slim':\n        from models.yolov2_slim import YOLOv2Slim as yolo_net\n        cfg = config.yolov2_slim_cfg\n\n    elif model_name == 'yolov3':\n        from models.yolov3 import YOLOv3 as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_spp':\n        from models.yolov3_spp import YOLOv3Spp as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_tiny':\n        from models.yolov3_tiny import YOLOv3tiny as yolo_net\n        cfg = config.yolov3_tiny_cfg\n    else:\n        print('Unknown model name...')\n        exit(0)\n\n    # input size\n    input_size = args.input_size\n\n    # build model\n    anchor_size = cfg['anchor_size_voc'] if args.dataset == 'voc' else cfg['anchor_size_coco']\n    net = yolo_net(device=device, \n                   input_size=input_size, \n                   num_classes=num_classes, \n                   trainable=False, \n                   anchor_size=anchor_size)\n\n    # load net (map_location=device so CPU-only evaluation also works)\n    net.load_state_dict(torch.load(args.trained_model, map_location=device))\n    net.eval()\n    print('Finished loading model!')\n    net = net.to(device)\n\n    # evaluation\n    with torch.no_grad():\n        if args.dataset == 'voc':\n            voc_test(net, data_dir, device, input_size)\n        elif args.dataset == 'coco-val':\n            coco_test(net, data_dir, device, input_size, test=False)\n        elif args.dataset == 'coco-test':\n            coco_test(net, data_dir, device, input_size, test=True)\n"
  },
  {
    "path": "models/yolov2_d19.py",
    "content": "import numpy as np\nimport torch\nimport torch.nn as nn\nfrom utils.modules import Conv, reorg_layer\n\nfrom backbone import build_backbone\nimport tools\n\n\nclass YOLOv2D19(nn.Module):\n    def __init__(self, device, input_size=None, num_classes=20, trainable=False, conf_thresh=0.001, nms_thresh=0.5, anchor_size=None):\n        super(YOLOv2D19, self).__init__()\n        self.device = device\n        self.input_size = input_size\n        self.num_classes = num_classes\n        self.trainable = trainable\n        self.conf_thresh = conf_thresh\n        self.nms_thresh = nms_thresh\n        self.anchor_size = torch.tensor(anchor_size)\n        self.num_anchors = len(anchor_size)\n        self.stride = 32\n        self.grid_cell, self.all_anchor_wh = self.create_grid(input_size)\n\n        # backbone darknet-19\n        self.backbone = build_backbone(model_name='darknet19', pretrained=trainable)\n        \n        # detection head\n        self.convsets_1 = nn.Sequential(\n            Conv(1024, 1024, k=3, p=1),\n            Conv(1024, 1024, k=3, p=1)\n        )\n\n        self.route_layer = Conv(512, 64, k=1)\n        self.reorg = reorg_layer(stride=2)\n\n        self.convsets_2 = Conv(1280, 1024, k=3, p=1)\n        \n        # prediction layer\n        self.pred = nn.Conv2d(1024, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n\n    def create_grid(self, input_size):\n        w, h = input_size, input_size\n        # generate grid cells\n        ws, hs = w // self.stride, h // self.stride\n        grid_y, grid_x = torch.meshgrid([torch.arange(hs), torch.arange(ws)])\n        grid_xy = torch.stack([grid_x, grid_y], dim=-1).float()\n        grid_xy = grid_xy.view(1, hs*ws, 1, 2).to(self.device)\n\n        # generate anchor_wh tensor\n        anchor_wh = self.anchor_size.repeat(hs*ws, 1, 1).unsqueeze(0).to(self.device)\n\n        return grid_xy, anchor_wh\n\n\n    def set_grid(self, input_size):\n        self.input_size = input_size\n        self.grid_cell, self.all_anchor_wh = self.create_grid(input_size)\n\n\n    def decode_xywh(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                xywh_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        B, HW, ab_n, _ = txtytwth_pred.size()\n        # b_x = sigmoid(tx) + gride_x\n        # b_y = sigmoid(ty) + gride_y\n        xy_pred = torch.sigmoid(txtytwth_pred[..., :2]) + self.grid_cell\n        # b_w = anchor_w * exp(tw)\n        # b_h = anchor_h * exp(th)\n        wh_pred = torch.exp(txtytwth_pred[..., 2:]) * self.all_anchor_wh\n        # [B, H*W, anchor_n, 4] -> [B, H*W*anchor_n, 4]\n        xywh_pred = torch.cat([xy_pred, wh_pred], -1).view(B, -1, 4) * self.stride\n\n        return xywh_pred\n    \n\n    def decode_boxes(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                x1y1x2y2_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        # txtytwth -> cxcywh\n        xywh_pred = self.decode_xywh(txtytwth_pred)\n\n        # cxcywh -> x1y1x2y2\n        x1y1x2y2_pred = torch.zeros_like(xywh_pred)\n        x1y1_pred = xywh_pred[..., :2] - xywh_pred[..., 2:] * 0.5\n        x2y2_pred = xywh_pred[..., :2] + xywh_pred[..., 2:] * 0.5\n        x1y1x2y2_pred = torch.cat([x1y1_pred, x2y2_pred], dim=-1)\n        \n        return x1y1x2y2_pred\n\n\n    def nms(self, dets, scores):\n        \"\"\"\"Pure Python NMS 
baseline.\"\"\"\n        x1 = dets[:, 0]  # xmin\n        y1 = dets[:, 1]  # ymin\n        x2 = dets[:, 2]  # xmax\n        y2 = dets[:, 3]  # ymax\n\n        areas = (x2 - x1) * (y2 - y1)\n        order = scores.argsort()[::-1]\n\n        keep = []\n        while order.size > 0:\n            i = order[0]\n            keep.append(i)\n            xx1 = np.maximum(x1[i], x1[order[1:]])\n            yy1 = np.maximum(y1[i], y1[order[1:]])\n            xx2 = np.minimum(x2[i], x2[order[1:]])\n            yy2 = np.minimum(y2[i], y2[order[1:]])\n\n            w = np.maximum(1e-10, xx2 - xx1)\n            h = np.maximum(1e-10, yy2 - yy1)\n            inter = w * h\n\n            # IoU = intersection / (area_i + area_j - intersection)\n            ovr = inter / (areas[i] + areas[order[1:]] - inter)\n            # keep only the boxes whose overlap with box i is below the threshold\n            inds = np.where(ovr <= self.nms_thresh)[0]\n            order = order[inds + 1]\n\n        return keep\n\n\n    def postprocess(self, bboxes, scores):\n        \"\"\"\n        bboxes: (H*W*num_anchors, 4), bsize = 1\n        scores: (H*W*num_anchors, num_classes), bsize = 1\n        \"\"\"\n\n        cls_inds = np.argmax(scores, axis=1)\n        scores = scores[(np.arange(scores.shape[0]), cls_inds)]\n        \n        # threshold\n        keep = np.where(scores >= self.conf_thresh)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        # NMS (np.int was removed in NumPy 1.24, so use the builtin int)\n        keep = np.zeros(len(bboxes), dtype=int)\n        for i in range(self.num_classes):\n            inds = np.where(cls_inds == i)[0]\n            if len(inds) == 0:\n                continue\n            c_bboxes = bboxes[inds]\n            c_scores = scores[inds]\n            c_keep = self.nms(c_bboxes, c_scores)\n            keep[inds[c_keep]] = 1\n\n        keep = np.where(keep > 0)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        return bboxes, scores, cls_inds\n\n\n    @ torch.no_grad()\n    def inference(self, x):\n        # backbone\n        feats = self.backbone(x)\n\n        # reorg layer\n        p5 = self.convsets_1(feats['layer3'])\n        p4 = self.reorg(self.route_layer(feats['layer2']))\n        p5 = torch.cat([p4, p5], dim=1)\n\n        # head\n        p5 = self.convsets_2(p5)\n\n        # pred\n        pred = self.pred(p5)\n\n        B, abC, H, W = pred.size()\n\n        # [B, num_anchor * C, H, W] -> [B, H, W, num_anchor * C] -> [B, H*W, num_anchor*C]\n        pred = pred.permute(0, 2, 3, 1).contiguous().view(B, H*W, abC)\n\n        # [B, H*W*num_anchor, 1]\n        conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, 1)\n        # [B, H*W, num_anchor, num_cls]\n        cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, self.num_classes)\n        # [B, H*W, num_anchor, 4]\n        reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n        # decode box: (tx, ty, tw, th) offsets -> absolute (x1, y1, x2, y2) corners in pixels\n        reg_pred = reg_pred.view(B, H*W, self.num_anchors, 4)\n        box_pred = self.decode_boxes(reg_pred)\n\n        # batch size = 1\n        conf_pred = conf_pred[0]\n        cls_pred = cls_pred[0]\n        box_pred = box_pred[0]\n\n        # score\n        scores = torch.sigmoid(conf_pred) * torch.softmax(cls_pred, dim=-1)\n\n        # normalize bbox\n        bboxes = torch.clamp(box_pred / self.input_size, 0., 1.)\n\n        # to cpu\n        scores = scores.to('cpu').numpy()\n     
   bboxes = bboxes.to('cpu').numpy()\n\n        # post-process\n        bboxes, scores, cls_inds = self.postprocess(bboxes, scores)\n\n        return bboxes, scores, cls_inds\n\n\n    def forward(self, x, target=None):\n        if not self.trainable:\n            return self.inference(x)\n        else:\n            # backbone\n            feats = self.backbone(x)\n\n            # reorg layer\n            p5 = self.convsets_1(feats['layer3'])\n            p4 = self.reorg(self.route_layer(feats['layer2']))\n            p5 = torch.cat([p4, p5], dim=1)\n\n            # head\n            p5 = self.convsets_2(p5)\n\n            # pred\n            pred = self.pred(p5)\n\n            B, abC, H, W = pred.size()\n\n            # [B, num_anchor * C, H, W] -> [B, H, W, num_anchor * C] -> [B, H*W, num_anchor*C]\n            pred = pred.permute(0, 2, 3, 1).contiguous().view(B, H*W, abC)\n\n            # [B, H*W*num_anchor, 1]\n            conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, 1)\n            # [B, H*W, num_anchor, num_cls]\n            cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, self.num_classes)\n            # [B, H*W, num_anchor, 4]\n            reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n            reg_pred = reg_pred.view(B, H*W, self.num_anchors, 4)\n\n            # decode bbox\n            x1y1x2y2_pred = (self.decode_boxes(reg_pred) / self.input_size).view(-1, 4)\n            x1y1x2y2_gt = target[:, :, 7:].view(-1, 4)\n            reg_pred = reg_pred.view(B, H*W*self.num_anchors, 4)\n\n            # set conf target\n            iou_pred = tools.iou_score(x1y1x2y2_pred, x1y1x2y2_gt).view(B, -1, 1)\n            gt_conf = iou_pred.clone().detach()\n\n            # [obj, cls, txtytwth, x1y1x2y2] -> [conf, obj, cls, txtytwth]\n            target = torch.cat([gt_conf, target[:, :, :7]], dim=2)\n\n            # loss\n            (\n                conf_loss,\n                cls_loss,\n                bbox_loss,\n                iou_loss\n            ) = tools.loss(pred_conf=conf_pred,\n                           pred_cls=cls_pred,\n                           pred_txtytwth=reg_pred,\n                           pred_iou=iou_pred,\n                           label=target\n                           )\n\n            return conf_loss, cls_loss, bbox_loss, iou_loss   \n"
  },
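  {
    "path": "examples/decode_xywh_sketch.py",
    "content": "# Minimal numeric sketch (illustrative only, not imported by the models): the\n# YOLOv2 box decoding performed by decode_xywh()/decode_boxes() in\n# models/yolov2_d19.py, written out for a single anchor. Anchors in the yolov2\n# configs are given in grid units, so centers and sizes are both scaled by the\n# stride of 32.\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\nstride = 32\ngrid_x, grid_y = 7, 5                  # which grid cell fired\nanchor_w, anchor_h = 2.79, 4.59        # one VOC anchor from data/config.py\ntx, ty, tw, th = 0.2, -0.1, 0.3, 0.1   # made-up raw network outputs\n\n# b_x = (sigmoid(tx) + grid_x) * stride,  b_w = anchor_w * exp(tw) * stride\nbx = (sigmoid(tx) + grid_x) * stride\nby = (sigmoid(ty) + grid_y) * stride\nbw = anchor_w * np.exp(tw) * stride\nbh = anchor_h * np.exp(th) * stride\n\n# cxcywh -> x1y1x2y2, as in decode_boxes()\nprint([bx - 0.5 * bw, by - 0.5 * bh, bx + 0.5 * bw, by + 0.5 * bh])\n"
  },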
  {
    "path": "models/yolov2_r50.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom utils.modules import Conv, reorg_layer\nfrom backbone import build_backbone\nimport numpy as np\nimport tools\n\n\nclass YOLOv2R50(nn.Module):\n    def __init__(self, device, input_size=None, num_classes=20, trainable=False, conf_thresh=0.001, nms_thresh=0.6, anchor_size=None, hr=False):\n        super(YOLOv2R50, self).__init__()\n        self.device = device\n        self.input_size = input_size\n        self.num_classes = num_classes\n        self.trainable = trainable\n        self.conf_thresh = conf_thresh\n        self.nms_thresh = nms_thresh\n        self.anchor_size = torch.tensor(anchor_size)\n        self.num_anchors = len(anchor_size)\n        self.stride = 32\n        self.grid_cell, self.all_anchor_wh = self.create_grid(input_size)\n\n        # backbone\n        self.backbone = build_backbone(model_name='resnet50', pretrained=trainable)\n        \n        # head\n        self.convsets_1 = nn.Sequential(\n            Conv(2048, 1024, k=1),\n            Conv(1024, 1024, k=3, p=1),\n            Conv(1024, 1024, k=3, p=1)\n        )\n\n        # reorg\n        self.route_layer = Conv(1024, 128, k=1)\n        self.reorg = reorg_layer(stride=2)\n\n        # head\n        self.convsets_2 = Conv(1024+128*4, 1024, k=3, p=1)\n        \n        # pred\n        self.pred = nn.Conv2d(1024, self.num_anchors*(1 + 4 + self.num_classes), 1)\n\n\n        if self.trainable:\n            # init bias\n            self.init_bias()\n\n\n    def init_bias(self):               \n        # init bias\n        init_prob = 0.01\n        bias_value = -torch.log(torch.tensor((1. - init_prob) / init_prob))\n        nn.init.constant_(self.pred.bias[..., :self.num_anchors], bias_value)\n\n\n    def create_grid(self, input_size):\n        w, h = input_size, input_size\n        # generate grid cells\n        ws, hs = w // self.stride, h // self.stride\n        grid_y, grid_x = torch.meshgrid([torch.arange(hs), torch.arange(ws)])\n        grid_xy = torch.stack([grid_x, grid_y], dim=-1).float()\n        grid_xy = grid_xy.view(1, hs*ws, 1, 2).to(self.device)\n\n        # generate anchor_wh tensor\n        anchor_wh = self.anchor_size.repeat(hs*ws, 1, 1).unsqueeze(0).to(self.device)\n\n\n        return grid_xy, anchor_wh\n\n\n    def set_grid(self, input_size):\n        self.input_size = input_size\n        self.grid_cell, self.all_anchor_wh = self.create_grid(input_size)\n\n\n    def decode_xywh(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                xywh_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        B, HW, ab_n, _ = txtytwth_pred.size()\n        # b_x = sigmoid(tx) + gride_x\n        # b_y = sigmoid(ty) + gride_y\n        xy_pred = torch.sigmoid(txtytwth_pred[:, :, :, :2]) + self.grid_cell\n        # b_w = anchor_w * exp(tw)\n        # b_h = anchor_h * exp(th)\n        wh_pred = torch.exp(txtytwth_pred[:, :, :, 2:]) * self.all_anchor_wh\n        # [H*W, anchor_n, 4] -> [H*W*anchor_n, 4]\n        xywh_pred = torch.cat([xy_pred, wh_pred], -1).view(B, -1, 4) * self.stride\n\n        return xywh_pred\n    \n\n    def decode_boxes(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                x1y1x2y2_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        # txtytwth -> cxcywh\n        xywh_pred = 
self.decode_xywh(txtytwth_pred)\n\n        # cxcywh -> x1y1x2y2\n        x1y1x2y2_pred = torch.zeros_like(xywh_pred)\n        x1y1_pred = xywh_pred[..., :2] - xywh_pred[..., 2:] * 0.5\n        x2y2_pred = xywh_pred[..., :2] + xywh_pred[..., 2:] * 0.5\n        x1y1x2y2_pred = torch.cat([x1y1_pred, x2y2_pred], dim=-1)\n        \n        return x1y1x2y2_pred\n\n\n    def nms(self, dets, scores):\n        \"\"\"Pure Python NMS baseline.\"\"\"\n        x1 = dets[:, 0]  # xmin\n        y1 = dets[:, 1]  # ymin\n        x2 = dets[:, 2]  # xmax\n        y2 = dets[:, 3]  # ymax\n\n        areas = (x2 - x1) * (y2 - y1)\n        order = scores.argsort()[::-1]\n\n        keep = []\n        while order.size > 0:\n            i = order[0]\n            keep.append(i)\n            xx1 = np.maximum(x1[i], x1[order[1:]])\n            yy1 = np.maximum(y1[i], y1[order[1:]])\n            xx2 = np.minimum(x2[i], x2[order[1:]])\n            yy2 = np.minimum(y2[i], y2[order[1:]])\n\n            w = np.maximum(1e-10, xx2 - xx1)\n            h = np.maximum(1e-10, yy2 - yy1)\n            inter = w * h\n\n            # IoU = intersection / (area_i + area_j - intersection)\n            ovr = inter / (areas[i] + areas[order[1:]] - inter)\n            # keep only the boxes whose overlap with box i is below the threshold\n            inds = np.where(ovr <= self.nms_thresh)[0]\n            order = order[inds + 1]\n\n        return keep\n\n\n    def postprocess(self, bboxes, scores):\n        \"\"\"\n        bboxes: (H*W*num_anchors, 4), bsize = 1\n        scores: (H*W*num_anchors, num_classes), bsize = 1\n        \"\"\"\n\n        cls_inds = np.argmax(scores, axis=1)\n        scores = scores[(np.arange(scores.shape[0]), cls_inds)]\n        \n        # threshold\n        keep = np.where(scores >= self.conf_thresh)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        # NMS (np.int was removed in NumPy 1.24, so use the builtin int)\n        keep = np.zeros(len(bboxes), dtype=int)\n        for i in range(self.num_classes):\n            inds = np.where(cls_inds == i)[0]\n            if len(inds) == 0:\n                continue\n            c_bboxes = bboxes[inds]\n            c_scores = scores[inds]\n            c_keep = self.nms(c_bboxes, c_scores)\n            keep[inds[c_keep]] = 1\n\n        keep = np.where(keep > 0)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        return bboxes, scores, cls_inds\n\n\n    @ torch.no_grad()\n    def inference(self, x):\n        # backbone\n        feats = self.backbone(x)\n\n        # reorg layer\n        p5 = self.convsets_1(feats['layer3'])\n        p4 = self.reorg(self.route_layer(feats['layer2']))\n        p5 = torch.cat([p4, p5], dim=1)\n\n        # head\n        p5 = self.convsets_2(p5)\n\n        # pred\n        pred = self.pred(p5)\n\n        B, abC, H, W = pred.size()\n\n        # [B, num_anchor * C, H, W] -> [B, H, W, num_anchor * C] -> [B, H*W, num_anchor*C]\n        pred = pred.permute(0, 2, 3, 1).contiguous().view(B, H*W, abC)\n\n        # [B, H*W*num_anchor, 1]\n        conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, 1)\n        # [B, H*W, num_anchor, num_cls]\n        cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, self.num_classes)\n        # [B, H*W, num_anchor, 4]\n        reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n        # decode box: (tx, ty, tw, th) offsets -> absolute (x1, y1, x2, y2) corners in pixels\n        reg_pred = reg_pred.view(B, H*W, self.num_anchors, 4)\n   
     box_pred = self.decode_boxes(reg_pred)\n\n        # batch size = 1\n        conf_pred = conf_pred[0]\n        cls_pred = cls_pred[0]\n        box_pred = box_pred[0]\n\n        # score\n        scores = torch.sigmoid(conf_pred) * torch.softmax(cls_pred, dim=-1)\n\n        # normalize bbox\n        bboxes = torch.clamp(box_pred / self.input_size, 0., 1.)\n\n        # to cpu\n        scores = scores.to('cpu').numpy()\n        bboxes = bboxes.to('cpu').numpy()\n\n        # post-process\n        bboxes, scores, cls_inds = self.postprocess(bboxes, scores)\n\n        return bboxes, scores, cls_inds\n\n\n    def forward(self, x, target=None):\n        if not self.trainable:\n            return self.inference(x)\n        else:\n            # backbone\n            feats = self.backbone(x)\n\n            # reorg layer\n            p5 = self.convsets_1(feats['layer3'])\n            p4 = self.reorg(self.route_layer(feats['layer2']))\n            p5 = torch.cat([p4, p5], dim=1)\n\n            # head\n            p5 = self.convsets_2(p5)\n\n            # pred\n            pred = self.pred(p5)\n\n            B, abC, H, W = pred.size()\n\n            # [B, num_anchor * C, H, W] -> [B, H, W, num_anchor * C] -> [B, H*W, num_anchor*C]\n            pred = pred.permute(0, 2, 3, 1).contiguous().view(B, H*W, abC)\n\n            # [B, H*W*num_anchor, 1]\n            conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, 1)\n            # [B, H*W, num_anchor, num_cls]\n            cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, H*W*self.num_anchors, self.num_classes)\n            # [B, H*W, num_anchor, 4]\n            reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n            reg_pred = reg_pred.view(B, H*W, self.num_anchors, 4)\n\n            # decode bbox\n            x1y1x2y2_pred = (self.decode_boxes(reg_pred) / self.input_size).view(-1, 4)\n            x1y1x2y2_gt = target[:, :, 7:].view(-1, 4)\n            reg_pred = reg_pred.view(B, H*W*self.num_anchors, 4)\n\n            # set conf target\n            iou_pred = tools.iou_score(x1y1x2y2_pred, x1y1x2y2_gt).view(B, -1, 1)\n            gt_conf = iou_pred.clone().detach()\n\n            # [obj, cls, txtytwth, x1y1x2y2] -> [conf, obj, cls, txtytwth]\n            target = torch.cat([gt_conf, target[:, :, :7]], dim=2)\n\n            # loss\n            (\n                conf_loss,\n                cls_loss,\n                bbox_loss,\n                iou_loss\n            ) = tools.loss(pred_conf=conf_pred,\n                           pred_cls=cls_pred,\n                           pred_txtytwth=reg_pred,\n                           pred_iou=iou_pred,\n                           label=target\n                           )\n\n            return conf_loss, cls_loss, bbox_loss, iou_loss   \n"
  },
  {
    "path": "models/yolov3.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom utils.modules import Conv\nfrom backbone import build_backbone\nimport numpy as np\nimport tools\n\n\nclass YOLOv3(nn.Module):\n    def __init__(self, \n                 device, \n                 input_size=None, \n                 num_classes=20, \n                 trainable=False, \n                 conf_thresh=0.001, \n                 nms_thresh=0.50, \n                 anchor_size=None):\n        super(YOLOv3, self).__init__()\n        self.device = device\n        self.input_size = input_size\n        self.num_classes = num_classes\n        self.trainable = trainable\n        self.conf_thresh = conf_thresh\n        self.nms_thresh = nms_thresh\n        self.topk = 3000\n        self.stride = [8, 16, 32]\n        self.anchor_size = torch.tensor(anchor_size).view(3, len(anchor_size) // 3, 2)\n        self.num_anchors = self.anchor_size.size(1)\n\n        self.grid_cell, self.stride_tensor, self.all_anchors_wh = self.create_grid(input_size)\n\n        # backbone\n        self.backbone = build_backbone(model_name='darknet53', pretrained=trainable)\n        \n        # s = 32\n        self.conv_set_3 = nn.Sequential(\n            Conv(1024, 512, k=1),\n            Conv(512, 1024, k=3, p=1),\n            Conv(1024, 512, k=1),\n            Conv(512, 1024, k=3, p=1),\n            Conv(1024, 512, k=1)\n        )\n        self.conv_1x1_3 = Conv(512, 256, k=1)\n        self.extra_conv_3 = Conv(512, 1024, k=3, p=1)\n        self.pred_3 = nn.Conv2d(1024, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n        # s = 16\n        self.conv_set_2 = nn.Sequential(\n            Conv(768, 256, k=1),\n            Conv(256, 512, k=3, p=1),\n            Conv(512, 256, k=1),\n            Conv(256, 512, k=3, p=1),\n            Conv(512, 256, k=1)\n        )\n        self.conv_1x1_2 = Conv(256, 128, k=1)\n        self.extra_conv_2 = Conv(256, 512, k=3, p=1)\n        self.pred_2 = nn.Conv2d(512, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n        # s = 8\n        self.conv_set_1 = nn.Sequential(\n            Conv(384, 128, k=1),\n            Conv(128, 256, k=3, p=1),\n            Conv(256, 128, k=1),\n            Conv(128, 256, k=3, p=1),\n            Conv(256, 128, k=1)\n        )\n        self.extra_conv_1 = Conv(128, 256, k=3, p=1)\n        self.pred_1 = nn.Conv2d(256, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n    \n        self.init_yolo()\n\n\n    def init_yolo(self):  \n        # Init head\n        init_prob = 0.01\n        bias_value = -torch.log(torch.tensor((1. 
- init_prob) / init_prob))\n        # init obj&cls pred\n        for pred in [self.pred_1, self.pred_2, self.pred_3]:\n            nn.init.constant_(pred.bias[..., :self.num_anchors], bias_value)\n            nn.init.constant_(pred.bias[..., self.num_anchors : (1 + self.num_classes) * self.num_anchors], bias_value)\n\n\n    def create_grid(self, input_size):\n        total_grid_xy = []\n        total_stride = []\n        total_anchor_wh = []\n        w, h = input_size, input_size\n        for ind, s in enumerate(self.stride):\n            # generate grid cells\n            ws, hs = w // s, h // s\n            grid_y, grid_x = torch.meshgrid([torch.arange(hs), torch.arange(ws)])\n            grid_xy = torch.stack([grid_x, grid_y], dim=-1).float()\n            grid_xy = grid_xy.view(1, hs*ws, 1, 2)\n\n            # generate stride tensor\n            stride_tensor = torch.ones([1, hs*ws, self.num_anchors, 2]) * s\n\n            # generate anchor_wh tensor\n            anchor_wh = self.anchor_size[ind].repeat(hs*ws, 1, 1)\n\n            total_grid_xy.append(grid_xy)\n            total_stride.append(stride_tensor)\n            total_anchor_wh.append(anchor_wh)\n\n        total_grid_xy = torch.cat(total_grid_xy, dim=1).to(self.device)\n        total_stride = torch.cat(total_stride, dim=1).to(self.device)\n        total_anchor_wh = torch.cat(total_anchor_wh, dim=0).to(self.device).unsqueeze(0)\n\n        return total_grid_xy, total_stride, total_anchor_wh\n\n\n    def set_grid(self, input_size):\n        self.input_size = input_size\n        self.grid_cell, self.stride_tensor, self.all_anchors_wh = self.create_grid(input_size)\n\n\n    def decode_xywh(self, txtytwth_pred):\n        \"\"\"\n            Input:\n                txtytwth_pred : [B, H*W, anchor_n, 4] containing [tx, ty, tw, th]\n            Output:\n                xywh_pred : [B, H*W*anchor_n, 4] containing [x, y, w, h]\n        \"\"\"\n        # b_x = sigmoid(tx) + gride_x,  b_y = sigmoid(ty) + gride_y\n        B, HW, ab_n, _ = txtytwth_pred.size()\n        c_xy_pred = (torch.sigmoid(txtytwth_pred[..., :2]) + self.grid_cell) * self.stride_tensor\n        # b_w = anchor_w * exp(tw),     b_h = anchor_h * exp(th)\n        b_wh_pred = torch.exp(txtytwth_pred[..., 2:]) * self.all_anchors_wh\n        # [B, H*W, anchor_n, 4] -> [B, H*W*anchor_n, 4]\n        xywh_pred = torch.cat([c_xy_pred, b_wh_pred], -1).view(B, HW*ab_n, 4)\n\n        return xywh_pred\n\n\n    def decode_boxes(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                x1y1x2y2_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        # txtytwth -> cxcywh\n        xywh_pred = self.decode_xywh(txtytwth_pred)\n\n        # cxcywh -> x1y1x2y2\n        x1y1x2y2_pred = torch.zeros_like(xywh_pred)\n        x1y1_pred = xywh_pred[..., :2] - xywh_pred[..., 2:] * 0.5\n        x2y2_pred = xywh_pred[..., :2] + xywh_pred[..., 2:] * 0.5\n        x1y1x2y2_pred = torch.cat([x1y1_pred, x2y2_pred], dim=-1)\n        \n        return x1y1x2y2_pred\n\n\n    def nms(self, dets, scores):\n        \"\"\"\"Pure Python NMS baseline.\"\"\"\n        x1 = dets[:, 0]  #xmin\n        y1 = dets[:, 1]  #ymin\n        x2 = dets[:, 2]  #xmax\n        y2 = dets[:, 3]  #ymax\n\n        areas = (x2 - x1) * (y2 - y1)\n        order = scores.argsort()[::-1]\n\n        keep = []\n        while order.size > 0:\n            i = order[0]\n            keep.append(i)\n            xx1 = np.maximum(x1[i], 
x1[order[1:]])\n            yy1 = np.maximum(y1[i], y1[order[1:]])\n            xx2 = np.minimum(x2[i], x2[order[1:]])\n            yy2 = np.minimum(y2[i], y2[order[1:]])\n\n            w = np.maximum(1e-10, xx2 - xx1)\n            h = np.maximum(1e-10, yy2 - yy1)\n            inter = w * h\n\n            # IoU = intersection / (area_i + area_j - intersection)\n            ovr = inter / (areas[i] + areas[order[1:]] - inter)\n            # keep only the boxes whose overlap with box i is below the threshold\n            inds = np.where(ovr <= self.nms_thresh)[0]\n            order = order[inds + 1]\n\n        return keep\n\n\n    def postprocess(self, bboxes, scores):\n        \"\"\"\n        bboxes: (H*W*num_anchors, 4), bsize = 1\n        scores: (H*W*num_anchors, num_classes), bsize = 1\n        \"\"\"\n\n        cls_inds = np.argmax(scores, axis=1)\n        scores = scores[(np.arange(scores.shape[0]), cls_inds)]\n        \n        # threshold\n        keep = np.where(scores >= self.conf_thresh)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        # NMS (np.int was removed in NumPy 1.24, so use the builtin int)\n        keep = np.zeros(len(bboxes), dtype=int)\n        for i in range(self.num_classes):\n            inds = np.where(cls_inds == i)[0]\n            if len(inds) == 0:\n                continue\n            c_bboxes = bboxes[inds]\n            c_scores = scores[inds]\n            c_keep = self.nms(c_bboxes, c_scores)\n            keep[inds[c_keep]] = 1\n\n        keep = np.where(keep > 0)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        # topk: np.argsort is ascending, so reverse it to keep the highest-scoring boxes\n        topk_scores_inds = np.argsort(scores)[::-1][:self.topk]\n        topk_scores = scores[topk_scores_inds]\n        topk_bboxes = bboxes[topk_scores_inds]\n        topk_cls_inds = cls_inds[topk_scores_inds]\n\n        return topk_bboxes, topk_scores, topk_cls_inds\n\n\n    @torch.no_grad()\n    def inference(self, x):\n        B = x.size(0)\n        # backbone\n        feats = self.backbone(x)\n        c3, c4, c5 = feats['layer1'], feats['layer2'], feats['layer3']\n\n        # FPN\n        p5 = self.conv_set_3(c5)\n        p5_up = F.interpolate(self.conv_1x1_3(p5), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n        p4 = torch.cat([c4, p5_up], 1)\n        p4 = self.conv_set_2(p4)\n        p4_up = F.interpolate(self.conv_1x1_2(p4), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n        p3 = torch.cat([c3, p4_up], 1)\n        p3 = self.conv_set_1(p3)\n\n        # head\n        # s = 32\n        p5 = self.extra_conv_3(p5)\n        pred_3 = self.pred_3(p5)\n\n        # s = 16\n        p4 = self.extra_conv_2(p4)\n        pred_2 = self.pred_2(p4)\n\n        # s = 8\n        p3 = self.extra_conv_1(p3)\n        pred_1 = self.pred_1(p3)\n\n        preds = [pred_1, pred_2, pred_3]\n        total_conf_pred = []\n        total_cls_pred = []\n        total_reg_pred = []\n        for pred in preds:\n            C = pred.size(1)\n\n            # [B, anchor_n * C, H, W] -> [B, H, W, anchor_n * C] -> [B, H*W, anchor_n*C]\n            pred = pred.permute(0, 2, 3, 1).contiguous().view(B, -1, C)\n\n            # [B, H*W*anchor_n, 1]\n            conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, -1, 1)\n            # [B, H*W*anchor_n, num_cls]\n            cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, -1, self.num_classes)\n            # [B, H*W*anchor_n, 4]\n          
  reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n\n            total_conf_pred.append(conf_pred)\n            total_cls_pred.append(cls_pred)\n            total_reg_pred.append(reg_pred)\n        \n        conf_pred = torch.cat(total_conf_pred, dim=1)\n        cls_pred = torch.cat(total_cls_pred, dim=1)\n        reg_pred = torch.cat(total_reg_pred, dim=1)\n        # decode bbox\n        reg_pred = reg_pred.view(B, -1, self.num_anchors, 4)\n        box_pred = self.decode_boxes(reg_pred)\n\n        # batch size = 1\n        conf_pred = conf_pred[0]\n        cls_pred = cls_pred[0]\n        box_pred = box_pred[0]\n\n        # score\n        scores = torch.sigmoid(conf_pred) * torch.softmax(cls_pred, dim=-1)\n\n        # normalize bbox\n        bboxes = torch.clamp(box_pred / self.input_size, 0., 1.)\n\n        # to cpu\n        scores = scores.to('cpu').numpy()\n        bboxes = bboxes.to('cpu').numpy()\n\n        # post-process\n        bboxes, scores, cls_inds = self.postprocess(bboxes, scores)\n\n        return bboxes, scores, cls_inds\n        \n\n    def forward(self, x, target=None):\n        if not self.trainable:\n            return self.inference(x)\n        else:\n            # backbone\n            B = x.size(0)\n            # backbone\n            feats = self.backbone(x)\n            c3, c4, c5 = feats['layer1'], feats['layer2'], feats['layer3']\n\n            # FPN\n            p5 = self.conv_set_3(c5)\n            p5_up = F.interpolate(self.conv_1x1_3(p5), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n            p4 = torch.cat([c4, p5_up], 1)\n            p4 = self.conv_set_2(p4)\n            p4_up = F.interpolate(self.conv_1x1_2(p4), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n            p3 = torch.cat([c3, p4_up], 1)\n            p3 = self.conv_set_1(p3)\n\n            # head\n            # s = 32\n            p5 = self.extra_conv_3(p5)\n            pred_3 = self.pred_3(p5)\n\n            # s = 16\n            p4 = self.extra_conv_2(p4)\n            pred_2 = self.pred_2(p4)\n\n            # s = 8\n            p3 = self.extra_conv_1(p3)\n            pred_1 = self.pred_1(p3)\n\n            preds = [pred_1, pred_2, pred_3]\n            total_conf_pred = []\n            total_cls_pred = []\n            total_reg_pred = []\n            for pred in preds:\n                C = pred.size(1)\n\n                # [B, anchor_n * C, H, W] -> [B, H, W, anchor_n * C] -> [B, H*W, anchor_n*C]\n                pred = pred.permute(0, 2, 3, 1).contiguous().view(B, -1, C)\n\n                # [B, H*W*anchor_n, 1]\n                conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, -1, 1)\n                # [B, H*W*anchor_n, num_cls]\n                cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, -1, self.num_classes)\n                # [B, H*W*anchor_n, 4]\n                reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n\n                total_conf_pred.append(conf_pred)\n                total_cls_pred.append(cls_pred)\n                total_reg_pred.append(reg_pred)\n            \n            conf_pred = torch.cat(total_conf_pred, dim=1)\n            cls_pred = torch.cat(total_cls_pred, dim=1)\n            reg_pred = torch.cat(total_reg_pred, dim=1)\n\n            # decode bbox\n            reg_pred = reg_pred.view(B, -1, self.num_anchors, 4)\n            x1y1x2y2_pred = (self.decode_boxes(reg_pred) / self.input_size).view(-1, 
4)\n            reg_pred = reg_pred.view(B, -1, 4)\n            x1y1x2y2_gt = target[:, :, 7:].view(-1, 4)\n                \n            # set conf target\n            iou_pred = tools.iou_score(x1y1x2y2_pred, x1y1x2y2_gt).view(B, -1, 1)\n            gt_conf = iou_pred.clone().detach()\n\n            # [obj, cls, txtytwth, scale_weight, x1y1x2y2] -> [conf, obj, cls, txtytwth, scale_weight]\n            target = torch.cat([gt_conf, target[:, :, :7]], dim=2)\n\n            # loss\n            (\n                conf_loss,\n                cls_loss,\n                bbox_loss,\n                iou_loss\n            ) = tools.loss(pred_conf=conf_pred,\n                            pred_cls=cls_pred,\n                            pred_txtytwth=reg_pred,\n                            pred_iou=iou_pred,\n                            label=target\n                            )\n\n            return conf_loss, cls_loss, bbox_loss, iou_loss   \n"
  },
  {
    "path": "models/yolov3_spp.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\nfrom utils.modules import Conv, SPP\nfrom backbone import build_backbone\nimport tools\n\n\n# YOLOv3 SPP\nclass YOLOv3Spp(nn.Module):\n    def __init__(self,\n                 device,\n                 input_size=None,\n                 num_classes=20,\n                 trainable=False,\n                 conf_thresh=0.001,\n                 nms_thresh=0.50,\n                 anchor_size=None):\n        super(YOLOv3Spp, self).__init__()\n        self.device = device\n        self.input_size = input_size\n        self.num_classes = num_classes\n        self.trainable = trainable\n        self.conf_thresh = conf_thresh\n        self.nms_thresh = nms_thresh\n        self.stride = [8, 16, 32]\n        self.anchor_size = torch.tensor(anchor_size).view(3, len(anchor_size) // 3, 2)\n        self.num_anchors = self.anchor_size.size(1)\n\n        self.grid_cell, self.stride_tensor, self.all_anchors_wh = self.create_grid(input_size)\n\n        # backbone\n        self.backbone = build_backbone(model_name='darknet53', pretrained=trainable)\n        \n        # s = 32\n        self.conv_set_3 = nn.Sequential(\n            SPP(),\n            Conv(1024*4, 512, k=1),\n            Conv(512, 1024, k=3, p=1),\n            Conv(1024, 512, k=1),\n            Conv(512, 1024, k=3, p=1),\n            Conv(1024, 512, k=1)\n        )\n        self.conv_1x1_3 = Conv(512, 256, k=1)\n        self.extra_conv_3 = Conv(512, 1024, k=3, p=1)\n        self.pred_3 = nn.Conv2d(1024, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n        # s = 16\n        self.conv_set_2 = nn.Sequential(\n            Conv(768, 256, k=1),\n            Conv(256, 512, k=3, p=1),\n            Conv(512, 256, k=1),\n            Conv(256, 512, k=3, p=1),\n            Conv(512, 256, k=1)\n        )\n        self.conv_1x1_2 = Conv(256, 128, k=1)\n        self.extra_conv_2 = Conv(256, 512, k=3, p=1)\n        self.pred_2 = nn.Conv2d(512, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n        # s = 8\n        self.conv_set_1 = nn.Sequential(\n            Conv(384, 128, k=1),\n            Conv(128, 256, k=3, p=1),\n            Conv(256, 128, k=1),\n            Conv(128, 256, k=3, p=1),\n            Conv(256, 128, k=1)\n        )\n        self.extra_conv_1 = Conv(128, 256, k=3, p=1)\n        self.pred_1 = nn.Conv2d(256, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n    \n        self.init_yolo()\n\n\n    def init_yolo(self):  \n        # Init head\n        init_prob = 0.01\n        bias_value = -torch.log(torch.tensor((1. 
- init_prob) / init_prob))\n        # init obj&cls pred\n        for pred in [self.pred_1, self.pred_2, self.pred_3]:\n            nn.init.constant_(pred.bias[..., :self.num_anchors], bias_value)\n            nn.init.constant_(pred.bias[..., self.num_anchors : (1 + self.num_classes) * self.num_anchors], bias_value)\n\n\n    def create_grid(self, input_size):\n        total_grid_xy = []\n        total_stride = []\n        total_anchor_wh = []\n        w, h = input_size, input_size\n        for ind, s in enumerate(self.stride):\n            # generate grid cells\n            ws, hs = w // s, h // s\n            grid_y, grid_x = torch.meshgrid([torch.arange(hs), torch.arange(ws)])\n            grid_xy = torch.stack([grid_x, grid_y], dim=-1).float()\n            grid_xy = grid_xy.view(1, hs*ws, 1, 2)\n\n            # generate stride tensor\n            stride_tensor = torch.ones([1, hs*ws, self.num_anchors, 2]) * s\n\n            # generate anchor_wh tensor\n            anchor_wh = self.anchor_size[ind].repeat(hs*ws, 1, 1)\n\n            total_grid_xy.append(grid_xy)\n            total_stride.append(stride_tensor)\n            total_anchor_wh.append(anchor_wh)\n\n        total_grid_xy = torch.cat(total_grid_xy, dim=1).to(self.device)\n        total_stride = torch.cat(total_stride, dim=1).to(self.device)\n        total_anchor_wh = torch.cat(total_anchor_wh, dim=0).to(self.device).unsqueeze(0)\n\n        return total_grid_xy, total_stride, total_anchor_wh\n\n\n    def set_grid(self, input_size):\n        self.input_size = input_size\n        self.grid_cell, self.stride_tensor, self.all_anchors_wh = self.create_grid(input_size)\n\n\n    def decode_xywh(self, txtytwth_pred):\n        \"\"\"\n            Input:\n                txtytwth_pred : [B, H*W, anchor_n, 4] containing [tx, ty, tw, th]\n            Output:\n                xywh_pred : [B, H*W*anchor_n, 4] containing [x, y, w, h]\n        \"\"\"\n        # b_x = sigmoid(tx) + grid_x,  b_y = sigmoid(ty) + grid_y\n        B, HW, ab_n, _ = txtytwth_pred.size()\n        c_xy_pred = (torch.sigmoid(txtytwth_pred[:, :, :, :2]) + self.grid_cell) * self.stride_tensor\n        # b_w = anchor_w * exp(tw),     b_h = anchor_h * exp(th)\n        b_wh_pred = torch.exp(txtytwth_pred[:, :, :, 2:]) * self.all_anchors_wh\n        # [B, H*W, anchor_n, 4] -> [B, H*W*anchor_n, 4]\n        xywh_pred = torch.cat([c_xy_pred, b_wh_pred], -1).view(B, HW*ab_n, 4)\n\n        return xywh_pred\n\n\n    def decode_boxes(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                x1y1x2y2_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        # txtytwth -> cxcywh\n        xywh_pred = self.decode_xywh(txtytwth_pred)\n\n        # cxcywh -> x1y1x2y2\n        x1y1x2y2_pred = torch.zeros_like(xywh_pred)\n        x1y1_pred = xywh_pred[..., :2] - xywh_pred[..., 2:] * 0.5\n        x2y2_pred = xywh_pred[..., :2] + xywh_pred[..., 2:] * 0.5\n        x1y1x2y2_pred = torch.cat([x1y1_pred, x2y2_pred], dim=-1)\n        \n        return x1y1x2y2_pred\n\n\n    def nms(self, dets, scores):\n        \"\"\"Pure Python NMS baseline.\"\"\"\n        x1 = dets[:, 0]  #xmin\n        y1 = dets[:, 1]  #ymin\n        x2 = dets[:, 2]  #xmax\n        y2 = dets[:, 3]  #ymax\n\n        areas = (x2 - x1) * (y2 - y1)\n        order = scores.argsort()[::-1]\n\n        keep = []
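\n        # Greedy NMS: repeatedly take the highest-scoring box that is left, then\n        # drop every remaining box whose IoU with it exceeds nms_thresh. For\n        # example, two identical boxes with scores (0.9, 0.8) reduce to a single\n        # kept box for any nms_thresh < 1.0.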
\n        while order.size > 0:\n            i = order[0]\n            keep.append(i)\n            xx1 = np.maximum(x1[i], x1[order[1:]])\n            yy1 = np.maximum(y1[i], y1[order[1:]])\n            xx2 = np.minimum(x2[i], x2[order[1:]])\n            yy2 = np.minimum(y2[i], y2[order[1:]])\n\n            w = np.maximum(1e-10, xx2 - xx1)\n            h = np.maximum(1e-10, yy2 - yy1)\n            inter = w * h\n\n            # IoU = intersection / (area_i + area_rest - intersection)\n            ovr = inter / (areas[i] + areas[order[1:]] - inter)\n            # keep only the boxes whose IoU with box i is below the NMS threshold\n            inds = np.where(ovr <= self.nms_thresh)[0]\n            order = order[inds + 1]\n\n        return keep\n\n\n    def postprocess(self, bboxes, scores):\n        \"\"\"\n        bboxes: (N, 4), N = H*W*num_anchors summed over all scales, bsize = 1\n        scores: (N, num_classes), bsize = 1\n        \"\"\"\n\n        cls_inds = np.argmax(scores, axis=1)\n        scores = scores[(np.arange(scores.shape[0]), cls_inds)]\n        \n        # threshold\n        keep = np.where(scores >= self.conf_thresh)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        # NMS\n        keep = np.zeros(len(bboxes), dtype=np.int32)\n        for i in range(self.num_classes):\n            inds = np.where(cls_inds == i)[0]\n            if len(inds) == 0:\n                continue\n            c_bboxes = bboxes[inds]\n            c_scores = scores[inds]\n            c_keep = self.nms(c_bboxes, c_scores)\n            keep[inds[c_keep]] = 1\n\n        keep = np.where(keep > 0)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        return bboxes, scores, cls_inds\n\n\n    @torch.no_grad()\n    def inference(self, x):\n        B = x.size(0)\n        # backbone\n        feats = self.backbone(x)\n        c3, c4, c5 = feats['layer1'], feats['layer2'], feats['layer3']\n\n        # FPN\n        p5 = self.conv_set_3(c5)\n        p5_up = F.interpolate(self.conv_1x1_3(p5), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n        p4 = torch.cat([c4, p5_up], 1)\n        p4 = self.conv_set_2(p4)\n        p4_up = F.interpolate(self.conv_1x1_2(p4), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n        p3 = torch.cat([c3, p4_up], 1)\n        p3 = self.conv_set_1(p3)\n\n        # head\n        # s = 32\n        p5 = self.extra_conv_3(p5)\n        pred_3 = self.pred_3(p5)\n\n        # s = 16\n        p4 = self.extra_conv_2(p4)\n        pred_2 = self.pred_2(p4)\n\n        # s = 8\n        p3 = self.extra_conv_1(p3)\n        pred_1 = self.pred_1(p3)\n\n        preds = [pred_1, pred_2, pred_3]\n        total_conf_pred = []\n        total_cls_pred = []\n        total_reg_pred = []\n        for pred in preds:\n            C = pred.size(1)\n\n            # [B, anchor_n * C, H, W] -> [B, H, W, anchor_n * C] -> [B, H*W, anchor_n*C]\n            pred = pred.permute(0, 2, 3, 1).contiguous().view(B, -1, C)\n\n            # [B, H*W*anchor_n, 1]\n            conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, -1, 1)\n            # [B, H*W*anchor_n, num_cls]\n            cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, -1, self.num_classes)\n            # [B, H*W*anchor_n, 4]\n            reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n\n            total_conf_pred.append(conf_pred)\n            total_cls_pred.append(cls_pred)\n            total_reg_pred.append(reg_pred)\n        \n        conf_pred = torch.cat(total_conf_pred, dim=1)\n        cls_pred =
torch.cat(total_cls_pred, dim=1)\n        reg_pred = torch.cat(total_reg_pred, dim=1)\n        # decode bbox\n        reg_pred = reg_pred.view(B, -1, self.num_anchors, 4)\n        box_pred = self.decode_boxes(reg_pred)\n\n        # batch size = 1\n        conf_pred = conf_pred[0]\n        cls_pred = cls_pred[0]\n        box_pred = box_pred[0]\n\n        # score\n        scores = torch.sigmoid(conf_pred) * torch.softmax(cls_pred, dim=-1)\n\n        # normalize bbox\n        bboxes = torch.clamp(box_pred / self.input_size, 0., 1.)\n\n        # to cpu\n        scores = scores.to('cpu').numpy()\n        bboxes = bboxes.to('cpu').numpy()\n\n        # post-process\n        bboxes, scores, cls_inds = self.postprocess(bboxes, scores)\n\n        return bboxes, scores, cls_inds\n        \n\n    def forward(self, x, target=None):\n        if not self.trainable:\n            return self.inference(x)\n        else:\n            # backbone\n            B = x.size(0)\n            # backbone\n            feats = self.backbone(x)\n            c3, c4, c5 = feats['layer1'], feats['layer2'], feats['layer3']\n\n            # FPN\n            p5 = self.conv_set_3(c5)\n            p5_up = F.interpolate(self.conv_1x1_3(p5), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n            p4 = torch.cat([c4, p5_up], 1)\n            p4 = self.conv_set_2(p4)\n            p4_up = F.interpolate(self.conv_1x1_2(p4), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n            p3 = torch.cat([c3, p4_up], 1)\n            p3 = self.conv_set_1(p3)\n\n            # head\n            # s = 32\n            p5 = self.extra_conv_3(p5)\n            pred_3 = self.pred_3(p5)\n\n            # s = 16\n            p4 = self.extra_conv_2(p4)\n            pred_2 = self.pred_2(p4)\n\n            # s = 8\n            p3 = self.extra_conv_1(p3)\n            pred_1 = self.pred_1(p3)\n\n            preds = [pred_1, pred_2, pred_3]\n            total_conf_pred = []\n            total_cls_pred = []\n            total_reg_pred = []\n            for pred in preds:\n                C = pred.size(1)\n\n                # [B, anchor_n * C, H, W] -> [B, H, W, anchor_n * C] -> [B, H*W, anchor_n*C]\n                pred = pred.permute(0, 2, 3, 1).contiguous().view(B, -1, C)\n\n                # [B, H*W*anchor_n, 1]\n                conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, -1, 1)\n                # [B, H*W*anchor_n, num_cls]\n                cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, -1, self.num_classes)\n                # [B, H*W*anchor_n, 4]\n                reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n\n                total_conf_pred.append(conf_pred)\n                total_cls_pred.append(cls_pred)\n                total_reg_pred.append(reg_pred)\n            \n            conf_pred = torch.cat(total_conf_pred, dim=1)\n            cls_pred = torch.cat(total_cls_pred, dim=1)\n            reg_pred = torch.cat(total_reg_pred, dim=1)\n\n            # decode bbox\n            reg_pred = reg_pred.view(B, -1, self.num_anchors, 4)\n            x1y1x2y2_pred = (self.decode_boxes(reg_pred) / self.input_size).view(-1, 4)\n            reg_pred = reg_pred.view(B, -1, 4)\n            x1y1x2y2_gt = target[:, :, 7:].view(-1, 4)\n                \n            # set conf target\n            iou_pred = tools.iou_score(x1y1x2y2_pred, x1y1x2y2_gt).view(B, -1, 1)\n            gt_conf = iou_pred.clone().detach()\n\n            # 
[obj, cls, txtytwth, scale_weight, x1y1x2y2] -> [conf, obj, cls, txtytwth, scale_weight]\n            target = torch.cat([gt_conf, target[:, :, :7]], dim=2)\n\n            # loss\n            (\n                conf_loss,\n                cls_loss,\n                bbox_loss,\n                iou_loss\n            ) = tools.loss(pred_conf=conf_pred,\n                            pred_cls=cls_pred,\n                            pred_txtytwth=reg_pred,\n                            pred_iou=iou_pred,\n                            label=target\n                            )\n\n            return conf_loss, cls_loss, bbox_loss, iou_loss   \n"
  },
  {
    "path": "models/yolov3_tiny.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\nfrom utils.modules import Conv\nfrom backbone import build_backbone\nimport tools\n\n\n# YOLOv3 Tiny\nclass YOLOv3tiny(nn.Module):\n    def __init__(self, device, input_size=None, num_classes=20, trainable=False, conf_thresh=0.01, nms_thresh=0.50, anchor_size=None, hr=False):\n        super(YOLOv3tiny, self).__init__()\n        self.device = device\n        self.input_size = input_size\n        self.num_classes = num_classes\n        self.trainable = trainable\n        self.conf_thresh = conf_thresh\n        self.nms_thresh = nms_thresh\n        self.stride = [16, 32]\n        self.anchor_size = torch.tensor(anchor_size).view(2, len(anchor_size) // 2, 2)\n        self.num_anchors = self.anchor_size.size(1)\n\n        self.grid_cell, self.stride_tensor, self.all_anchors_wh = self.create_grid(input_size)\n\n        # backbone\n        self.backbone = build_backbone(model_name='darknet_tiny', pretrained=trainable)\n        \n        # s = 32\n        self.conv_set_2 = Conv(1024, 256, k=3, p=1)\n\n        self.conv_1x1_2 = Conv(256, 128, k=1)\n\n        self.extra_conv_2 = Conv(256, 512, k=3, p=1)\n        self.pred_2 = nn.Conv2d(512, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n\n        # s = 16\n        self.conv_set_1 = Conv(384, 256, k=3, p=1)\n        self.pred_1 = nn.Conv2d(256, self.num_anchors*(1 + 4 + self.num_classes), kernel_size=1)\n    \n    \n        self.init_yolo()\n\n\n    def init_yolo(self):  \n        # Init head\n        init_prob = 0.01\n        bias_value = -torch.log(torch.tensor((1. - init_prob) / init_prob))\n        # init obj&cls pred\n        for pred in [self.pred_1, self.pred_2, self.pred_3]:\n            nn.init.constant_(pred.bias[..., :self.num_anchors], bias_value)\n            nn.init.constant_(pred.bias[..., self.num_anchors : (1 + self.num_classes) * self.num_anchors], bias_value)\n\n\n    def create_grid(self, input_size):\n        total_grid_xy = []\n        total_stride = []\n        total_anchor_wh = []\n        w, h = input_size, input_size\n        for ind, s in enumerate(self.stride):\n            # generate grid cells\n            ws, hs = w // s, h // s\n            grid_y, grid_x = torch.meshgrid([torch.arange(hs), torch.arange(ws)])\n            grid_xy = torch.stack([grid_x, grid_y], dim=-1).float()\n            grid_xy = grid_xy.view(1, hs*ws, 1, 2)\n\n            # generate stride tensor\n            stride_tensor = torch.ones([1, hs*ws, self.num_anchors, 2]) * s\n\n            # generate anchor_wh tensor\n            anchor_wh = self.anchor_size[ind].repeat(hs*ws, 1, 1)\n\n            total_grid_xy.append(grid_xy)\n            total_stride.append(stride_tensor)\n            total_anchor_wh.append(anchor_wh)\n\n        total_grid_xy = torch.cat(total_grid_xy, dim=1).to(self.device)\n        total_stride = torch.cat(total_stride, dim=1).to(self.device)\n        total_anchor_wh = torch.cat(total_anchor_wh, dim=0).to(self.device).unsqueeze(0)\n\n        return total_grid_xy, total_stride, total_anchor_wh\n\n\n    def set_grid(self, input_size):\n        self.input_size = input_size\n        self.grid_cell, self.stride_tensor, self.all_anchors_wh = self.create_grid(input_size)\n\n\n    def decode_xywh(self, txtytwth_pred):\n        \"\"\"\n            Input:\n                txtytwth_pred : [B, H*W, anchor_n, 4] containing [tx, ty, tw, th]\n            Output:\n                xywh_pred : [B, H*W*anchor_n, 4] containing [x, 
\n\n\n    def decode_boxes(self, txtytwth_pred):\n        \"\"\"\n            Input: \\n\n                txtytwth_pred : [B, H*W, anchor_n, 4] \\n\n            Output: \\n\n                x1y1x2y2_pred : [B, H*W*anchor_n, 4] \\n\n        \"\"\"\n        # txtytwth -> cxcywh\n        xywh_pred = self.decode_xywh(txtytwth_pred)\n\n        # cxcywh -> x1y1x2y2\n        x1y1x2y2_pred = torch.zeros_like(xywh_pred)\n        x1y1_pred = xywh_pred[..., :2] - xywh_pred[..., 2:] * 0.5\n        x2y2_pred = xywh_pred[..., :2] + xywh_pred[..., 2:] * 0.5\n        x1y1x2y2_pred = torch.cat([x1y1_pred, x2y2_pred], dim=-1)\n        \n        return x1y1x2y2_pred\n\n\n    def nms(self, dets, scores):\n        \"\"\"Pure Python NMS baseline.\"\"\"\n        x1 = dets[:, 0]  #xmin\n        y1 = dets[:, 1]  #ymin\n        x2 = dets[:, 2]  #xmax\n        y2 = dets[:, 3]  #ymax\n\n        areas = (x2 - x1) * (y2 - y1)\n        order = scores.argsort()[::-1]\n\n        keep = []\n        while order.size > 0:\n            i = order[0]\n            keep.append(i)\n            xx1 = np.maximum(x1[i], x1[order[1:]])\n            yy1 = np.maximum(y1[i], y1[order[1:]])\n            xx2 = np.minimum(x2[i], x2[order[1:]])\n            yy2 = np.minimum(y2[i], y2[order[1:]])\n\n            w = np.maximum(1e-10, xx2 - xx1)\n            h = np.maximum(1e-10, yy2 - yy1)\n            inter = w * h\n\n            # IoU = intersection / (area_i + area_rest - intersection)\n            ovr = inter / (areas[i] + areas[order[1:]] - inter)\n            # keep only the boxes whose IoU with box i is below the NMS threshold\n            inds = np.where(ovr <= self.nms_thresh)[0]\n            order = order[inds + 1]\n\n        return keep\n\n\n    def postprocess(self, bboxes, scores):\n        \"\"\"\n        bboxes: (N, 4), N = H*W*num_anchors summed over both scales, bsize = 1\n        scores: (N, num_classes), bsize = 1\n        \"\"\"\n\n        cls_inds = np.argmax(scores, axis=1)\n        scores = scores[(np.arange(scores.shape[0]), cls_inds)]\n        \n        # threshold\n        keep = np.where(scores >= self.conf_thresh)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        # NMS\n        keep = np.zeros(len(bboxes), dtype=np.int32)\n        for i in range(self.num_classes):\n            inds = np.where(cls_inds == i)[0]\n            if len(inds) == 0:\n                continue\n            c_bboxes = bboxes[inds]\n            c_scores = scores[inds]\n            c_keep = self.nms(c_bboxes, c_scores)\n            keep[inds[c_keep]] = 1\n\n        keep = np.where(keep > 0)\n        bboxes = bboxes[keep]\n        scores = scores[keep]\n        cls_inds = cls_inds[keep]\n\n        return bboxes, scores, cls_inds\n\n\n    @torch.no_grad()\n    def inference(self, x):\n        B = x.size(0)\n        # backbone\n        feats = self.backbone(x)\n        c4, c5 = feats['layer2'], feats['layer3']\n\n        # FPN\n        p5 = self.conv_set_2(c5)\n        p5_up =
F.interpolate(self.conv_1x1_2(p5), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n        p4 = torch.cat([c4, p5_up], dim=1)\n        p4 = self.conv_set_1(p4)\n\n        # head\n        # s = 32\n        p5 = self.extra_conv_2(p5)\n        pred_2 = self.pred_2(p5)\n\n        # s = 16\n        pred_1 = self.pred_1(p4)\n\n\n        preds = [pred_1, pred_2]\n        total_conf_pred = []\n        total_cls_pred = []\n        total_reg_pred = []\n        for pred in preds:\n            C = pred.size(1)\n\n            # [B, anchor_n * C, H, W] -> [B, H, W, anchor_n * C] -> [B, H*W, anchor_n*C]\n            pred = pred.permute(0, 2, 3, 1).contiguous().view(B, -1, C)\n\n            # Divide prediction to obj_pred, xywh_pred and cls_pred   \n            # [B, H*W*anchor_n, 1]\n            conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, -1, 1)\n            # [B, H*W*anchor_n, num_cls]\n            cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, -1, self.num_classes)\n            # [B, H*W*anchor_n, 4]\n            reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n\n            total_conf_pred.append(conf_pred)\n            total_cls_pred.append(cls_pred)\n            total_reg_pred.append(reg_pred)\n        \n        conf_pred = torch.cat(total_conf_pred, dim=1)\n        cls_pred = torch.cat(total_cls_pred, dim=1)\n        reg_pred = torch.cat(total_reg_pred, dim=1)\n        # decode bbox\n        reg_pred = reg_pred.view(B, -1, self.num_anchors, 4)\n        box_pred = self.decode_boxes(reg_pred)\n\n        # batch size = 1\n        conf_pred = conf_pred[0]\n        cls_pred = cls_pred[0]\n        box_pred = box_pred[0]\n\n        # score\n        scores = torch.sigmoid(conf_pred) * torch.softmax(cls_pred, dim=-1)\n\n        # normalize bbox\n        bboxes = torch.clamp(box_pred / self.input_size, 0., 1.)\n\n        # to cpu\n        scores = scores.to('cpu').numpy()\n        bboxes = bboxes.to('cpu').numpy()\n\n        # post-process\n        bboxes, scores, cls_inds = self.postprocess(bboxes, scores)\n\n        return bboxes, scores, cls_inds\n\n\n    def forward(self, x, target=None):\n        if not self.trainable:\n            return self.inference(x)\n        else:\n            # backbone\n            B = x.size(0)\n            # backbone\n            feats = self.backbone(x)\n            c4, c5 = feats['layer2'], feats['layer3']\n\n            # FPN\n            p5 = self.conv_set_2(c5)\n            p5_up = F.interpolate(self.conv_1x1_2(p5), scale_factor=2.0, mode='bilinear', align_corners=True)\n\n            p4 = torch.cat([c4, p5_up], dim=1)\n            p4 = self.conv_set_1(p4)\n\n            # head\n            # s = 32\n            p5 = self.extra_conv_2(p5)\n            pred_2 = self.pred_2(p5)\n\n            # s = 16\n            pred_1 = self.pred_1(p4)\n\n            preds = [pred_1, pred_2]\n            total_conf_pred = []\n            total_cls_pred = []\n            total_reg_pred = []\n            for pred in preds:\n                C = pred.size(1)\n\n                # [B, anchor_n * C, H, W] -> [B, H, W, anchor_n * C] -> [B, H*W, anchor_n*C]\n                pred = pred.permute(0, 2, 3, 1).contiguous().view(B, -1, C)\n\n                # Divide prediction to obj_pred, xywh_pred and cls_pred   \n                # [B, H*W*anchor_n, 1]\n                conf_pred = pred[:, :, :1 * self.num_anchors].contiguous().view(B, -1, 1)\n                # [B, 
H*W*anchor_n, num_cls]\n                cls_pred = pred[:, :, 1 * self.num_anchors : (1 + self.num_classes) * self.num_anchors].contiguous().view(B, -1, self.num_classes)\n                # [B, H*W*anchor_n, 4]\n                reg_pred = pred[:, :, (1 + self.num_classes) * self.num_anchors:].contiguous()\n\n                total_conf_pred.append(conf_pred)\n                total_cls_pred.append(cls_pred)\n                total_reg_pred.append(reg_pred)\n            \n            conf_pred = torch.cat(total_conf_pred, dim=1)\n            cls_pred = torch.cat(total_cls_pred, dim=1)\n            reg_pred = torch.cat(total_reg_pred, dim=1)\n\n            # decode bbox\n            reg_pred = reg_pred.view(B, -1, self.num_anchors, 4)\n            x1y1x2y2_pred = (self.decode_boxes(reg_pred) / self.input_size).view(-1, 4)\n            reg_pred = reg_pred.view(B, -1, 4)\n            x1y1x2y2_gt = target[:, :, 7:].view(-1, 4)\n                \n            # set conf target\n            iou_pred = tools.iou_score(x1y1x2y2_pred, x1y1x2y2_gt).view(B, -1, 1)\n            gt_conf = iou_pred.clone().detach()\n\n            # [obj, cls, txtytwth, scale_weight, x1y1x2y2] -> [conf, obj, cls, txtytwth, scale_weight]\n            target = torch.cat([gt_conf, target[:, :, :7]], dim=2)\n\n            # loss\n            (\n                conf_loss,\n                cls_loss,\n                bbox_loss,\n                iou_loss\n            ) = tools.loss(pred_conf=conf_pred,\n                            pred_cls=cls_pred,\n                            pred_txtytwth=reg_pred,\n                            pred_iou=iou_pred,\n                            label=target\n                            )\n\n            return conf_loss, cls_loss, bbox_loss, iou_loss   \n"
  },
  {
    "path": "test.py",
    "content": "import os\nimport argparse\nimport torch\nimport torch.backends.cudnn as cudnn\nfrom data.voc0712 import VOC_CLASSES, VOCDetection\nfrom data.coco2017 import COCODataset, coco_class_index, coco_class_labels\nfrom data import config, BaseTransform\nimport numpy as np\nimport cv2\nimport time\n\n\nparser = argparse.ArgumentParser(description='YOLO Detection')\n# basic\nparser.add_argument('-size', '--input_size', default=416, type=int,\n                    help='input_size')\nparser.add_argument('--cuda', action='store_true', default=False, \n                    help='use cuda.')\n# model\nparser.add_argument('-v', '--version', default='yolo_v2',\n                    help='yolov2_d19, yolov2_r50, yolov2_slim, yolov3, yolov3_spp, yolov3_tiny')\nparser.add_argument('--trained_model', default='weight/',\n                    type=str, help='Trained state_dict file path to open')\nparser.add_argument('--conf_thresh', default=0.1, type=float,\n                    help='Confidence threshold')\nparser.add_argument('--nms_thresh', default=0.50, type=float,\n                    help='NMS threshold')\n# dataset\nparser.add_argument('-root', '--data_root', default='/mnt/share/ssd2/dataset',\n                    help='dataset root')\nparser.add_argument('-d', '--dataset', default='voc',\n                    help='voc or coco')\n# visualize\nparser.add_argument('-vs', '--visual_threshold', default=0.25, type=float,\n                    help='Final confidence threshold')\nparser.add_argument('--show', action='store_true', default=False,\n                    help='show the visulization results.')\n\n\nargs = parser.parse_args()\n\n\ndef plot_bbox_labels(img, bbox, label=None, cls_color=None, text_scale=0.4):\n    x1, y1, x2, y2 = bbox\n    x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)\n    t_size = cv2.getTextSize(label, 0, fontScale=1, thickness=2)[0]\n    # plot bbox\n    cv2.rectangle(img, (x1, y1), (x2, y2), cls_color, 2)\n    \n    if label is not None:\n        # plot title bbox\n        cv2.rectangle(img, (x1, y1-t_size[1]), (int(x1 + t_size[0] * text_scale), y1), cls_color, -1)\n        # put the test on the title bbox\n        cv2.putText(img, label, (int(x1), int(y1 - 5)), 0, text_scale, (0, 0, 0), 1, lineType=cv2.LINE_AA)\n\n    return img\n\n\ndef visualize(img, \n              bboxes, \n              scores, \n              cls_inds, \n              vis_thresh, \n              class_colors, \n              class_names, \n              class_indexs=None, \n              dataset_name='voc'):\n    ts = 0.4\n    for i, bbox in enumerate(bboxes):\n        if scores[i] > vis_thresh:\n            cls_id = int(cls_inds[i])\n            if dataset_name == 'coco':\n                cls_color = class_colors[cls_id]\n                cls_id = class_indexs[cls_id]\n            else:\n                cls_color = class_colors[cls_id]\n                \n            if len(class_names) > 1:\n                mess = '%s: %.2f' % (class_names[cls_id], scores[i])\n            else:\n                cls_color = [255, 0, 0]\n                mess = None\n            img = plot_bbox_labels(img, bbox, mess, cls_color, text_scale=ts)\n\n    return img\n        \n\ndef test(net, \n         device, \n         dataset, \n         transform, \n         vis_thresh, \n         class_colors=None, \n         class_names=None, \n         class_indexs=None, \n         dataset_name='voc'):\n\n    num_images = len(dataset)\n    save_path = os.path.join('det_results/', args.dataset, args.version)\n    
\n\n        # to tensor\n        x = torch.from_numpy(transform(image)[0][:, :, (2, 1, 0)]).permute(2, 0, 1)\n        x = x.unsqueeze(0).to(device)\n\n        t0 = time.time()\n        # forward\n        bboxes, scores, cls_inds = net(x)\n        print(\"detection time used \", time.time() - t0, \"s\")\n        \n        # rescale the normalized boxes back to the original image size\n        bboxes *= scale\n\n        # vis detection\n        img_processed = visualize(\n                            img=image,\n                            bboxes=bboxes,\n                            scores=scores,\n                            cls_inds=cls_inds,\n                            vis_thresh=vis_thresh,\n                            class_colors=class_colors,\n                            class_names=class_names,\n                            class_indexs=class_indexs,\n                            dataset_name=dataset_name\n                            )\n        if args.show:\n            cv2.imshow('detection', img_processed)\n            cv2.waitKey(0)\n        # save result\n        cv2.imwrite(os.path.join(save_path, str(index).zfill(6) +'.jpg'), img_processed)\n\n\nif __name__ == '__main__':\n    # cuda\n    if args.cuda:\n        print('use cuda')\n        cudnn.benchmark = True\n        device = torch.device(\"cuda\")\n    else:\n        device = torch.device(\"cpu\")\n\n    # input size\n    input_size = args.input_size\n\n    # dataset\n    if args.dataset == 'voc':\n        print('test on voc ...')\n        data_dir = os.path.join(args.data_root, 'VOCdevkit')\n        class_names = VOC_CLASSES\n        class_indexs = None\n        num_classes = 20\n        dataset = VOCDetection(root=data_dir, \n                                image_sets=[('2007', 'test')])\n\n    elif args.dataset == 'coco':\n        print('test on coco-val ...')\n        data_dir = os.path.join(args.data_root, 'COCO')\n        class_names = coco_class_labels\n        class_indexs = coco_class_index\n        num_classes = 80\n        dataset = COCODataset(\n                    data_dir=data_dir,\n                    json_file='instances_val2017.json',\n                    name='val2017')\n\n    class_colors = [(np.random.randint(255), \n                     np.random.randint(255),\n                     np.random.randint(255)) for _ in range(num_classes)]\n\n    # model\n    model_name = args.version\n    print('Model: ', model_name)\n\n    # load model and config file\n    if model_name == 'yolov2_d19':\n        from models.yolov2_d19 import YOLOv2D19 as yolo_net\n        cfg = config.yolov2_d19_cfg\n\n    elif model_name == 'yolov2_r50':\n        from models.yolov2_r50 import YOLOv2R50 as yolo_net\n        cfg = config.yolov2_r50_cfg\n\n    elif model_name == 'yolov3':\n        from models.yolov3 import YOLOv3 as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_spp':\n        from models.yolov3_spp import YOLOv3Spp as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_tiny':\n        from models.yolov3_tiny import YOLOv3tiny as yolo_net\n        cfg = config.yolov3_tiny_cfg\n    else:\n        print('Unknown model name...')\n        exit(0)\n\n    # build model
\n    anchor_size = cfg['anchor_size_voc'] if args.dataset == 'voc' else cfg['anchor_size_coco']\n    net = yolo_net(device=device, \n                   input_size=input_size, \n                   num_classes=num_classes, \n                   trainable=False, \n                   conf_thresh=args.conf_thresh,\n                   nms_thresh=args.nms_thresh,\n                   anchor_size=anchor_size)\n\n    # load weight\n    net.load_state_dict(torch.load(args.trained_model, map_location=device))\n    net.to(device).eval()\n    print('Finished loading model!')\n\n    # evaluation\n    test(net=net, \n         device=device, \n         dataset=dataset,\n         transform=BaseTransform(input_size),\n         vis_thresh=args.visual_threshold,\n         class_colors=class_colors,\n         class_names=class_names,\n         class_indexs=class_indexs,\n         dataset_name=args.dataset\n         )\n"
  },
  {
    "path": "tools.py",
    "content": "import numpy as np\nfrom data import *\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n# We use ignore thresh to decide which anchor box can be kept.\nignore_thresh = 0.5\n\n\nclass MSEWithLogitsLoss(nn.Module):\n    def __init__(self, reduction='mean'):\n        super(MSEWithLogitsLoss, self).__init__()\n        self.reduction = reduction\n\n    def forward(self, logits, targets, mask):\n        inputs = torch.sigmoid(logits)\n\n        # We ignore those whose tarhets == -1.0. \n        pos_id = (mask==1.0).float()\n        neg_id = (mask==0.0).float()\n        pos_loss = pos_id * (inputs - targets)**2\n        neg_loss = neg_id * (inputs)**2\n        loss = 5.0*pos_loss + 1.0*neg_loss\n\n        if self.reduction == 'mean':\n            batch_size = logits.size(0)\n            loss = torch.sum(loss) / batch_size\n\n            return loss\n\n        else:\n            return loss\n\n\ndef compute_iou(anchor_boxes, gt_box):\n    \"\"\"\n    Input:\n        anchor_boxes : ndarray -> [[c_x_s, c_y_s, anchor_w, anchor_h], ..., [c_x_s, c_y_s, anchor_w, anchor_h]].\n        gt_box : ndarray -> [c_x_s, c_y_s, anchor_w, anchor_h].\n    Output:\n        iou : ndarray -> [iou_1, iou_2, ..., iou_m], and m is equal to the number of anchor boxes.\n    \"\"\"\n    # compute the iou between anchor box and gt box\n    # First, change [c_x_s, c_y_s, anchor_w, anchor_h] ->  [xmin, ymin, xmax, ymax]\n    # anchor box :\n    ab_x1y1_x2y2 = np.zeros([len(anchor_boxes), 4])\n    ab_x1y1_x2y2[:, 0] = anchor_boxes[:, 0] - anchor_boxes[:, 2] / 2  # xmin\n    ab_x1y1_x2y2[:, 1] = anchor_boxes[:, 1] - anchor_boxes[:, 3] / 2  # ymin\n    ab_x1y1_x2y2[:, 2] = anchor_boxes[:, 0] + anchor_boxes[:, 2] / 2  # xmax\n    ab_x1y1_x2y2[:, 3] = anchor_boxes[:, 1] + anchor_boxes[:, 3] / 2  # ymax\n    w_ab, h_ab = anchor_boxes[:, 2], anchor_boxes[:, 3]\n    \n    # gt_box : \n    # We need to expand gt_box(ndarray) to the shape of anchor_boxes(ndarray), in order to compute IoU easily. 
\n    gt_box_expand = np.repeat(gt_box, len(anchor_boxes), axis=0)\n\n    gb_x1y1_x2y2 = np.zeros([len(anchor_boxes), 4])\n    gb_x1y1_x2y2[:, 0] = gt_box_expand[:, 0] - gt_box_expand[:, 2] / 2 # xmin\n    gb_x1y1_x2y2[:, 1] = gt_box_expand[:, 1] - gt_box_expand[:, 3] / 2 # ymin\n    gb_x1y1_x2y2[:, 2] = gt_box_expand[:, 0] + gt_box_expand[:, 2] / 2 # xmax\n    gb_x1y1_x2y2[:, 3] = gt_box_expand[:, 1] + gt_box_expand[:, 3] / 2 # ymax\n    w_gt, h_gt = gt_box_expand[:, 2], gt_box_expand[:, 3]\n\n    # Then we compute IoU between anchor_box and gt_box\n    S_gt = w_gt * h_gt\n    S_ab = w_ab * h_ab\n    I_w = np.minimum(gb_x1y1_x2y2[:, 2], ab_x1y1_x2y2[:, 2]) - np.maximum(gb_x1y1_x2y2[:, 0], ab_x1y1_x2y2[:, 0])\n    I_h = np.minimum(gb_x1y1_x2y2[:, 3], ab_x1y1_x2y2[:, 3]) - np.maximum(gb_x1y1_x2y2[:, 1], ab_x1y1_x2y2[:, 1])\n    S_I = I_h * I_w\n    U = S_gt + S_ab - S_I + 1e-20\n    IoU = S_I / U\n    \n    return IoU\n\n\ndef set_anchors(anchor_size):\n    \"\"\"\n    Input:\n        anchor_size : list -> [[w_1, h_1], [w_2, h_2], ..., [w_n, h_n]].\n    Output:\n        anchor_boxes : ndarray -> [[0, 0, anchor_w, anchor_h],\n                                   [0, 0, anchor_w, anchor_h],\n                                   ...\n                                   [0, 0, anchor_w, anchor_h]].\n    \"\"\"\n    anchor_number = len(anchor_size)\n    anchor_boxes = np.zeros([anchor_number, 4])\n    for index, size in enumerate(anchor_size): \n        anchor_w, anchor_h = size\n        anchor_boxes[index] = np.array([0, 0, anchor_w, anchor_h])\n    \n    return anchor_boxes\n\n\ndef generate_txtytwth(gt_label, w, h, s, all_anchor_size):\n    xmin, ymin, xmax, ymax = gt_label[:-1]\n    # compute the center, width and height\n    c_x = (xmax + xmin) / 2 * w\n    c_y = (ymax + ymin) / 2 * h\n    box_w = (xmax - xmin) * w\n    box_h = (ymax - ymin) * h
\n\n    if box_w < 1. or box_h < 1.:\n        # skip degenerate (sub-pixel) boxes\n        return False    \n\n    # map the center, width and height to the feature map size\n    c_x_s = c_x / s\n    c_y_s = c_y / s\n    box_ws = box_w / s\n    box_hs = box_h / s\n    \n    # the grid cell location\n    grid_x = int(c_x_s)\n    grid_y = int(c_y_s)\n    # generate anchor boxes\n    anchor_boxes = set_anchors(all_anchor_size)\n    gt_box = np.array([[0, 0, box_ws, box_hs]])\n    # compute the IoU\n    iou = compute_iou(anchor_boxes, gt_box)\n    # We only consider those anchor boxes whose IoU is higher than ignore thresh.\n    iou_mask = (iou > ignore_thresh)\n\n    result = []\n    if iou_mask.sum() == 0:\n        # We assign the anchor box with highest IoU score.\n        index = np.argmax(iou)\n        p_w, p_h = all_anchor_size[index]\n        tx = c_x_s - grid_x\n        ty = c_y_s - grid_y\n        tw = np.log(box_ws / p_w)\n        th = np.log(box_hs / p_h)\n        weight = 2.0 - (box_w / w) * (box_h / h)\n        \n        result.append([index, grid_x, grid_y, tx, ty, tw, th, weight, xmin, ymin, xmax, ymax])\n        \n        return result\n    \n    else:\n        # More than one anchor box has an IoU above ignore thresh. We assign only\n        # the anchor box with the best IoU (objectness target 1) and ignore the\n        # others: their objectness is set to -1, so they are skipped when the\n        # obj loss is computed.\n        \n        # We get the index of the best IoU\n        best_index = np.argmax(iou)\n        for index, iou_m in enumerate(iou_mask):\n            if iou_m:\n                if index == best_index:\n                    p_w, p_h = all_anchor_size[index]\n                    tx = c_x_s - grid_x\n                    ty = c_y_s - grid_y\n                    tw = np.log(box_ws / p_w)\n                    th = np.log(box_hs / p_h)\n                    weight = 2.0 - (box_w / w) * (box_h / h)\n                    \n                    result.append([index, grid_x, grid_y, tx, ty, tw, th, weight, xmin, ymin, xmax, ymax])\n                else:\n                    # ignore the other anchor boxes even though their IoU is above ignore thresh\n                    result.append([index, grid_x, grid_y, 0., 0., 0., 0., -1.0, 0., 0., 0., 0.])\n\n        return result
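\n\n\n# A result row is [anchor index, grid_x, grid_y, tx, ty, tw, th, weight,\n# xmin, ymin, xmax, ymax]; weight == -1.0 marks an anchor that is ignored\n# when the objectness loss is computed.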
\n\n\ndef gt_creator(input_size, stride, label_lists, anchor_size):\n    \"\"\"\n    Input:\n        input_size : int -> the size of the image in the training stage.\n        stride : int or list -> the downsample stride of the CNN, such as 32, 64 and so on.\n        label_list : list -> [[[xmin, ymin, xmax, ymax, cls_ind], ... ], [[xmin, ymin, xmax, ymax, cls_ind], ... ]],  \n                        and len(label_list) = batch_size;\n                            len(label_list[i]) = the number of class instances in an image;\n                            (xmin, ymin, xmax, ymax) : the coords of a bbox whose values are between 0 and 1;\n                            cls_ind : the corresponding class label.\n    Output:\n        gt_tensor : ndarray -> shape = [batch_size, hs * ws * anchor_number, 1+1+4+1+4]\n    \"\"\"\n\n    # prepare the empty gt tensor\n    batch_size = len(label_lists)\n    h = w = input_size\n    \n    # the size of the output feature map\n    ws = w // stride\n    hs = h // stride\n    s = stride\n\n    # We use anchor boxes to build training target.\n    all_anchor_size = anchor_size\n    anchor_number = len(all_anchor_size)\n\n    gt_tensor = np.zeros([batch_size, hs, ws, anchor_number, 1+1+4+1+4])\n\n    for batch_index in range(batch_size):\n        for gt_label in label_lists[batch_index]:\n            # get a bbox coords\n            gt_class = int(gt_label[-1])\n            results = generate_txtytwth(gt_label, w, h, s, all_anchor_size)\n            if results:\n                for result in results:\n                    index, grid_x, grid_y, tx, ty, tw, th, weight, xmin, ymin, xmax, ymax = result\n                    if weight > 0.:\n                        if grid_y < gt_tensor.shape[1] and grid_x < gt_tensor.shape[2]:\n                            gt_tensor[batch_index, grid_y, grid_x, index, 0] = 1.0\n                            gt_tensor[batch_index, grid_y, grid_x, index, 1] = gt_class\n                            gt_tensor[batch_index, grid_y, grid_x, index, 2:6] = np.array([tx, ty, tw, th])\n                            gt_tensor[batch_index, grid_y, grid_x, index, 6] = weight\n                            gt_tensor[batch_index, grid_y, grid_x, index, 7:] = np.array([xmin, ymin, xmax, ymax])\n                    else:\n                        gt_tensor[batch_index, grid_y, grid_x, index, 0] = -1.0\n                        gt_tensor[batch_index, grid_y, grid_x, index, 6] = -1.0\n\n    gt_tensor = gt_tensor.reshape(batch_size, hs * ws * anchor_number, 1+1+4+1+4)\n\n    return gt_tensor\n\n\ndef multi_gt_creator(input_size, strides, label_lists, anchor_size):\n    \"\"\"create multi-scale gt\"\"\"\n    # prepare the empty gt tensors\n    batch_size = len(label_lists)\n    h = w = input_size\n    num_scale = len(strides)\n    gt_tensor = []\n    all_anchor_size = anchor_size\n    anchor_number = len(all_anchor_size) // num_scale\n\n    for s in strides:\n        gt_tensor.append(np.zeros([batch_size, h//s, w//s, anchor_number, 1+1+4+1+4]))\n        \n    # generate gt data\n    for batch_index in range(batch_size):\n        for gt_label in label_lists[batch_index]:\n            # get a bbox coords\n            gt_class = int(gt_label[-1])\n            xmin, ymin, xmax, ymax = gt_label[:-1]\n            # compute the center, width and height\n            c_x = (xmax + xmin) / 2 * w\n            c_y = (ymax + ymin) / 2 * h\n            box_w = (xmax - xmin) * w\n            box_h = (ymax - ymin) * h\n\n            if box_w < 1. or box_h < 1.:\n                # skip degenerate (sub-pixel) boxes\n                continue    \n\n            # compute the IoU\n            anchor_boxes = set_anchors(all_anchor_size)\n            gt_box = np.array([[0, 0, box_w, box_h]])\n            iou = compute_iou(anchor_boxes, gt_box)\n\n            # We only consider those anchor boxes whose IoU is higher than ignore thresh.\n            iou_mask = (iou > ignore_thresh)
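\n\n            # Anchor assignment: if no anchor clears ignore_thresh, fall back to\n            # the single best-IoU anchor; otherwise only the best anchor becomes\n            # a positive sample and the rest above the threshold are ignored.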
\n\n            if iou_mask.sum() == 0:\n                # We assign the anchor box with highest IoU score.\n                index = np.argmax(iou)\n                # s_indx, ab_ind = index // num_scale, index % num_scale\n                s_indx = index // anchor_number\n                ab_ind = index - s_indx * anchor_number\n                # get the corresponding stride\n                s = strides[s_indx]\n                # get the corresponding anchor box\n                p_w, p_h = anchor_boxes[index, 2], anchor_boxes[index, 3]\n                # compute the grid cell location\n                c_x_s = c_x / s\n                c_y_s = c_y / s\n                grid_x = int(c_x_s)\n                grid_y = int(c_y_s)\n                # compute gt labels\n                tx = c_x_s - grid_x\n                ty = c_y_s - grid_y\n                tw = np.log(box_w / p_w)\n                th = np.log(box_h / p_h)\n                weight = 2.0 - (box_w / w) * (box_h / h)\n\n                if grid_y < gt_tensor[s_indx].shape[1] and grid_x < gt_tensor[s_indx].shape[2]:\n                    gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 0] = 1.0\n                    gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 1] = gt_class\n                    gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 2:6] = np.array([tx, ty, tw, th])\n                    gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 6] = weight\n                    gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 7:] = np.array([xmin, ymin, xmax, ymax])\n            \n            else:\n                # More than one anchor box has an IoU above ignore thresh. We\n                # assign only the anchor box with the best IoU (objectness target\n                # 1) and ignore the others: their objectness is set to -1, so they\n                # are skipped when the obj loss is computed.\n                \n                # We get the index of the best IoU\n                best_index = np.argmax(iou)\n                for index, iou_m in enumerate(iou_mask):\n                    if iou_m:\n                        if index == best_index:\n                            # s_indx, ab_ind = index // num_scale, index % num_scale\n                            s_indx = index // anchor_number\n                            ab_ind = index - s_indx * anchor_number\n                            # get the corresponding stride\n                            s = strides[s_indx]\n                            # get the corresponding anchor box\n                            p_w, p_h = anchor_boxes[index, 2], anchor_boxes[index, 3]\n                            # compute the grid cell location\n                            c_x_s = c_x / s\n                            c_y_s = c_y / s\n                            grid_x = int(c_x_s)\n                            grid_y = int(c_y_s)\n                            # compute gt labels\n                            tx = c_x_s - grid_x\n                            ty = c_y_s - grid_y
\n                            tw = np.log(box_w / p_w)\n                            th = np.log(box_h / p_h)\n                            weight = 2.0 - (box_w / w) * (box_h / h)\n\n                            if grid_y < gt_tensor[s_indx].shape[1] and grid_x < gt_tensor[s_indx].shape[2]:\n                                gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 0] = 1.0\n                                gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 1] = gt_class\n                                gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 2:6] = np.array([tx, ty, tw, th])\n                                gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 6] = weight\n                                gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 7:] = np.array([xmin, ymin, xmax, ymax])\n            \n                        else:\n                            # ignore the other anchor boxes even though their IoU is above ignore thresh\n                            # s_indx, ab_ind = index // num_scale, index % num_scale\n                            s_indx = index // anchor_number\n                            ab_ind = index - s_indx * anchor_number\n                            s = strides[s_indx]\n                            c_x_s = c_x / s\n                            c_y_s = c_y / s\n                            grid_x = int(c_x_s)\n                            grid_y = int(c_y_s)\n                            gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 0] = -1.0\n                            gt_tensor[s_indx][batch_index, grid_y, grid_x, ab_ind, 6] = -1.0\n\n    gt_tensor = [gt.reshape(batch_size, -1, 1+1+4+1+4) for gt in gt_tensor]\n    gt_tensor = np.concatenate(gt_tensor, 1)\n    \n    return gt_tensor\n\n\ndef iou_score(bboxes_a, bboxes_b):\n    \"\"\"\n        bboxes_a : [B*N, 4] = [x1, y1, x2, y2]\n        bboxes_b : [B*N, 4] = [x1, y1, x2, y2]\n    \"\"\"\n    tl = torch.max(bboxes_a[:, :2], bboxes_b[:, :2])\n    br = torch.min(bboxes_a[:, 2:], bboxes_b[:, 2:])\n    area_a = torch.prod(bboxes_a[:, 2:] - bboxes_a[:, :2], 1)\n    area_b = torch.prod(bboxes_b[:, 2:] - bboxes_b[:, :2], 1)\n\n    en = (tl < br).type(tl.type()).prod(dim=1)\n    area_i = torch.prod(br - tl, 1) * en  # * ((tl < br).all())\n    return area_i / (area_a + area_b - area_i + 1e-14)\n\n\ndef loss(pred_conf, pred_cls, pred_txtytwth, pred_iou, label):\n    # loss func\n    conf_loss_function = MSEWithLogitsLoss(reduction='mean')\n    cls_loss_function = nn.CrossEntropyLoss(reduction='none')\n    txty_loss_function = nn.BCEWithLogitsLoss(reduction='none')\n    twth_loss_function = nn.MSELoss(reduction='none')\n    iou_loss_function = nn.SmoothL1Loss(reduction='none')\n\n    # pred\n    pred_conf = pred_conf[:, :, 0]\n    pred_cls = pred_cls.permute(0, 2, 1)\n    pred_txty = pred_txtytwth[:, :, :2]\n    pred_twth = pred_txtytwth[:, :, 2:]\n    pred_iou = pred_iou[:, :, 0]\n\n    # gt    \n    gt_conf = label[:, :, 0].float()\n    gt_obj = label[:, :, 1].float()\n    gt_cls = label[:, :, 2].long()\n    gt_txty = label[:, :, 3:5].float()\n    gt_twth = label[:, :, 5:7].float()\n    gt_box_scale_weight = label[:, :, 7].float()\n    gt_iou = (gt_box_scale_weight > 0.).float()\n    gt_mask = (gt_box_scale_weight > 0.).float()\n\n    batch_size = pred_conf.size(0)\n    # objectness loss\n    conf_loss = conf_loss_function(pred_conf, gt_conf, gt_obj)\n    \n    # class loss\n    cls_loss = torch.sum(cls_loss_function(pred_cls, gt_cls) * gt_mask) / batch_size\n    \n    # box-
loss\n    txty_loss = torch.sum(torch.sum(txty_loss_function(pred_txty, gt_txty), dim=-1) * gt_box_scale_weight * gt_mask) / batch_size\n    twth_loss = torch.sum(torch.sum(twth_loss_function(pred_twth, gt_twth), dim=-1) * gt_box_scale_weight * gt_mask) / batch_size\n    bbox_loss = txty_loss + twth_loss\n\n    # iou loss\n    iou_loss = torch.sum(iou_loss_function(pred_iou, gt_iou) * gt_mask) / batch_size\n\n    return conf_loss, cls_loss, bbox_loss, iou_loss\n\n\nif __name__ == \"__main__\":\n    gt_box = np.array([[0.0, 0.0, 10, 10]])\n    anchor_boxes = np.array([[0.0, 0.0, 10, 10], \n                             [0.0, 0.0, 4, 4], \n                             [0.0, 0.0, 8, 8], \n                             [0.0, 0.0, 16, 16]\n                             ])\n    iou = compute_iou(anchor_boxes, gt_box)\n    print(iou)"
  },
  {
    "path": "train.py",
    "content": "from __future__ import division\n\nimport os\nimport random\nimport argparse\nimport time\nimport cv2\nimport numpy as np\nfrom copy import deepcopy\n\nimport torch\nimport torch.optim as optim\nimport torch.backends.cudnn as cudnn\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\nfrom data.voc0712 import VOCDetection\nfrom data.coco2017 import COCODataset\nfrom data import config\nfrom data import BaseTransform, detection_collate\n\nimport tools\n\nfrom utils import distributed_utils\nfrom utils.com_paras_flops import FLOPs_and_Params\nfrom utils.augmentations import SSDAugmentation, ColorAugmentation\nfrom utils.cocoapi_evaluator import COCOAPIEvaluator\nfrom utils.vocapi_evaluator import VOCAPIEvaluator\nfrom utils.modules import ModelEMA\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='YOLO Detection')\n    # basic\n    parser.add_argument('--cuda', action='store_true', default=False,\n                        help='use cuda.')\n    parser.add_argument('-bs', '--batch_size', default=16, type=int, \n                        help='Batch size for training')\n    parser.add_argument('--lr', default=1e-3, type=float, \n                        help='initial learning rate')\n    parser.add_argument('--wp_epoch', type=int, default=2,\n                        help='The upper bound of warm-up')\n    parser.add_argument('--start_epoch', type=int, default=0,\n                        help='start epoch to train')\n    parser.add_argument('-r', '--resume', default=None, type=str, \n                        help='keep training')\n    parser.add_argument('--momentum', default=0.9, type=float, \n                        help='Momentum value for optim')\n    parser.add_argument('--weight_decay', default=5e-4, type=float, \n                        help='Weight decay for SGD')\n    parser.add_argument('--num_workers', default=8, type=int, \n                        help='Number of workers used in dataloading')\n    parser.add_argument('--num_gpu', default=1, type=int, \n                        help='Number of GPUs to train')\n    parser.add_argument('--eval_epoch', type=int,\n                            default=10, help='interval between evaluations')\n    parser.add_argument('--tfboard', action='store_true', default=False,\n                        help='use tensorboard')\n    parser.add_argument('--save_folder', default='weights/', type=str, \n                        help='Gamma update for SGD')\n    parser.add_argument('--vis', action='store_true', default=False,\n                        help='visualize target.')\n\n    # model\n    parser.add_argument('-v', '--version', default='yolo_v2',\n                        help='yolov2_d19, yolov2_r50, yolov2_slim, yolov3, yolov3_spp, yolov3_tiny')\n    \n    # dataset\n    parser.add_argument('-root', '--data_root', default='/mnt/share/ssd2/dataset',\n                        help='dataset root')\n    parser.add_argument('-d', '--dataset', default='voc',\n                        help='voc or coco')\n    \n    # train trick\n    parser.add_argument('--no_warmup', action='store_true', default=False,\n                        help='do not use warmup')\n    parser.add_argument('-ms', '--multi_scale', action='store_true', default=False,\n                        help='use multi-scale trick')      \n    parser.add_argument('--mosaic', action='store_true', default=False,\n                        help='use mosaic augmentation')\n    parser.add_argument('--ema', action='store_true', 
\n    parser.add_argument('--ema', action='store_true', default=False,\n                        help='use ema training trick')\n\n    # DDP train\n    parser.add_argument('-dist', '--distributed', action='store_true', default=False,\n                        help='distributed training')\n    parser.add_argument('--dist_url', default='env://', \n                        help='url used to set up distributed training')\n    parser.add_argument('--world_size', default=1, type=int,\n                        help='number of distributed processes')\n    parser.add_argument('--sybn', action='store_true', default=False, \n                        help='use SyncBatchNorm.')\n\n    return parser.parse_args()\n\n\ndef train():\n    args = parse_args()\n    print(\"Setting Arguments.. : \", args)\n    print(\"----------------------------------------------------------\")\n\n    # set distributed\n    print('World size: {}'.format(distributed_utils.get_world_size()))\n    if args.distributed:\n        distributed_utils.init_distributed_mode(args)\n        print(\"git:\\n  {}\\n\".format(distributed_utils.get_sha()))\n\n    # cuda\n    if args.cuda:\n        print('use cuda')\n        # cudnn.benchmark = True\n        device = torch.device(\"cuda\")\n    else:\n        device = torch.device(\"cpu\")\n\n    model_name = args.version\n    print('Model: ', model_name)\n\n    # load model and config file\n    if model_name == 'yolov2_d19':\n        from models.yolov2_d19 import YOLOv2D19 as yolo_net\n        cfg = config.yolov2_d19_cfg\n\n    elif model_name == 'yolov2_r50':\n        from models.yolov2_r50 import YOLOv2R50 as yolo_net\n        cfg = config.yolov2_r50_cfg\n\n    elif model_name == 'yolov3':\n        from models.yolov3 import YOLOv3 as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_spp':\n        from models.yolov3_spp import YOLOv3Spp as yolo_net\n        cfg = config.yolov3_d53_cfg\n\n    elif model_name == 'yolov3_tiny':\n        from models.yolov3_tiny import YOLOv3tiny as yolo_net\n        cfg = config.yolov3_tiny_cfg\n    else:\n        print('Unknown model name...')\n        exit(0)\n\n    # path to save model\n    path_to_save = os.path.join(args.save_folder, args.dataset, args.version)\n    os.makedirs(path_to_save, exist_ok=True)\n    \n    # multi-scale\n    if args.multi_scale:\n        print('use the multi-scale trick ...')\n        train_size = cfg['train_size']\n        val_size = cfg['val_size']\n    else:\n        train_size = val_size = cfg['train_size']\n\n    # Model EMA\n    if args.ema:\n        print('use EMA trick ...')\n\n    # dataset and evaluator\n    if args.dataset == 'voc':\n        data_dir = os.path.join(args.data_root, 'VOCdevkit')\n        num_classes = 20\n        dataset = VOCDetection(data_dir=data_dir, \n                                transform=SSDAugmentation(train_size))\n\n        evaluator = VOCAPIEvaluator(data_root=data_dir,\n                                    img_size=val_size,\n                                    device=device,\n                                    transform=BaseTransform(val_size))\n\n    elif args.dataset == 'coco':\n        data_dir = os.path.join(args.data_root, 'COCO')\n        num_classes = 80\n        dataset = COCODataset(\n                    data_dir=data_dir,\n                    transform=SSDAugmentation(train_size))\n\n        evaluator = COCOAPIEvaluator(\n                        data_dir=data_dir,\n                        img_size=val_size,\n                        device=device,\n                        transform=BaseTransform(val_size))
else:\n        print('unknow dataset !! Only support voc and coco !!')\n        exit(0)\n    \n    print('Training model on:', dataset.name)\n    print('The dataset size:', len(dataset))\n    print(\"----------------------------------------------------------\")\n\n    # build model\n    anchor_size = cfg['anchor_size_voc'] if args.dataset == 'voc' else cfg['anchor_size_coco']\n    net = yolo_net(device=device, \n                   input_size=train_size, \n                   num_classes=num_classes, \n                   trainable=True, \n                   anchor_size=anchor_size)\n    model = net\n    model = model.to(device).train()\n\n    # SyncBatchNorm\n    if args.sybn and args.distributed:\n        print('use SyncBatchNorm ...')\n        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)\n\n    # DDP\n    model_without_ddp = model\n    if args.distributed:\n        model = DDP(model, device_ids=[args.gpu])\n        model_without_ddp = model.module\n\n    # compute FLOPs and Params\n    if distributed_utils.is_main_process:\n        model_copy = deepcopy(model_without_ddp)\n        model_copy.trainable = False\n        model_copy.eval()\n        FLOPs_and_Params(model=model_copy, \n                         size=train_size, \n                         device=device)\n        model_copy.trainable = True\n        model_copy.train()\n    if args.distributed:\n        # wait for all processes to synchronize\n        dist.barrier()\n\n    # dataloader\n    batch_size = args.batch_size * distributed_utils.get_world_size()\n    if args.distributed and args.num_gpu > 1:\n        dataloader = torch.utils.data.DataLoader(\n                        dataset=dataset, \n                        batch_size=batch_size, \n                        collate_fn=detection_collate,\n                        num_workers=args.num_workers,\n                        pin_memory=True,\n                        drop_last=True,\n                        sampler=torch.utils.data.distributed.DistributedSampler(dataset)\n                        )\n\n    else:\n        # dataloader\n        dataloader = torch.utils.data.DataLoader(\n                        dataset=dataset, \n                        shuffle=True,\n                        batch_size=batch_size, \n                        collate_fn=detection_collate,\n                        num_workers=args.num_workers,\n                        pin_memory=True,\n                        drop_last=True\n                        )\n\n    # keep training\n    if args.resume is not None:\n        print('keep training model: %s' % (args.resume))\n        model.load_state_dict(torch.load(args.resume, map_location=device))\n\n    # EMA\n    ema = ModelEMA(model) if args.ema else None\n\n    # use tfboard\n    if args.tfboard:\n        print('use tensorboard')\n        from torch.utils.tensorboard import SummaryWriter\n        c_time = time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time()))\n        log_path = os.path.join('log/', args.dataset, c_time)\n        os.makedirs(log_path, exist_ok=True)\n\n        tblogger = SummaryWriter(log_path)\n    \n    # optimizer setup\n    base_lr = (args.lr / 16) * batch_size\n    tmp_lr = base_lr\n    optimizer = optim.SGD(model.parameters(), \n                            lr=base_lr, \n                            momentum=args.momentum,\n                            weight_decay=args.weight_decay\n                            )\n\n    max_epoch = cfg['max_epoch']\n    epoch_size = len(dataloader)\n    best_map = -1.\n    warmup = not 
args.no_warmup\n\n    t0 = time.time()\n    # start training loop\n    for epoch in range(args.start_epoch, max_epoch):\n        if args.distributed:\n            dataloader.sampler.set_epoch(epoch)        \n\n        # use step lr\n        if epoch in cfg['lr_epoch']:\n            tmp_lr = tmp_lr * 0.1\n            set_lr(optimizer, tmp_lr)\n    \n        for iter_i, (images, targets) in enumerate(dataloader):\n            # WarmUp strategy for learning rate\n            ni = iter_i + epoch * epoch_size\n            # warmup\n            if epoch < args.wp_epoch and warmup:\n                nw = args.wp_epoch * epoch_size\n                tmp_lr = base_lr * pow(ni / nw, 4)\n                set_lr(optimizer, tmp_lr)\n\n            elif epoch == args.wp_epoch and iter_i == 0 and warmup:\n                # warmup is over\n                warmup = False\n                tmp_lr = base_lr\n                set_lr(optimizer, tmp_lr)\n\n            # multi-scale trick\n            if iter_i % 10 == 0 and iter_i > 0 and args.multi_scale:\n                # randomly choose a new size\n                r = cfg['random_size_range']\n                train_size = random.randint(r[0], r[1]) * 32\n                model.set_grid(train_size)\n            if args.multi_scale:\n                # interpolate\n                images = torch.nn.functional.interpolate(images, size=train_size, mode='bilinear', align_corners=False)\n            \n            targets = [label.tolist() for label in targets]\n            # visualize labels\n            if args.vis:\n                vis_data(images, targets, train_size)\n                continue\n\n            # label assignment\n            if model_name in ['yolov2_d19', 'yolov2_r50']:\n                targets = tools.gt_creator(input_size=train_size, \n                                           stride=net.stride, \n                                           label_lists=targets, \n                                           anchor_size=anchor_size\n                                           )\n            else:\n                targets = tools.multi_gt_creator(input_size=train_size, \n                                                 strides=net.stride, \n                                                 label_lists=targets, \n                                                 anchor_size=anchor_size\n                                                 )\n\n            # to device\n            images = images.float().to(device)\n            targets = torch.tensor(targets).float().to(device)\n\n            # forward\n            conf_loss, cls_loss, box_loss, iou_loss = model(images, target=targets)\n\n            # compute loss\n            total_loss = conf_loss + cls_loss + box_loss + iou_loss\n\n            loss_dict = dict(conf_loss=conf_loss,\n                             cls_loss=cls_loss,\n                             box_loss=box_loss,\n                             iou_loss=iou_loss,\n                             total_loss=total_loss\n                            )\n\n            loss_dict_reduced = distributed_utils.reduce_dict(loss_dict)\n\n            # check NAN for loss\n            if torch.isnan(total_loss):\n                print('loss is nan !!')\n                continue\n\n            # backprop\n            total_loss.backward()        \n            optimizer.step()\n            optimizer.zero_grad()\n\n            # ema\n            if args.ema:\n                ema.update(model)\n\n            # display\n            if distributed_utils.is_main_process() 
and iter_i % 10 == 0:\n                if args.tfboard:\n                    # viz loss\n                    tblogger.add_scalar('conf loss',  loss_dict_reduced['conf_loss'].item(),  iter_i + epoch * epoch_size)\n                    tblogger.add_scalar('cls loss',  loss_dict_reduced['cls_loss'].item(),  iter_i + epoch * epoch_size)\n                    tblogger.add_scalar('box loss',  loss_dict_reduced['box_loss'].item(),  iter_i + epoch * epoch_size)\n                    tblogger.add_scalar('iou loss',  loss_dict_reduced['iou_loss'].item(),  iter_i + epoch * epoch_size)\n                \n                t1 = time.time()\n                cur_lr = [param_group['lr']  for param_group in optimizer.param_groups]\n                # basic infor\n                log =  '[Epoch: {}/{}]'.format(epoch+1, max_epoch)\n                log += '[Iter: {}/{}]'.format(iter_i, epoch_size)\n                log += '[lr: {:.6f}]'.format(cur_lr[0])\n                # loss infor\n                for k in loss_dict_reduced.keys():\n                    log += '[{}: {:.2f}]'.format(k, loss_dict[k])\n\n                # other infor\n                log += '[time: {:.2f}]'.format(t1 - t0)\n                log += '[size: {}]'.format(train_size)\n\n                # print log infor\n                print(log, flush=True)\n                \n                t0 = time.time()\n\n        if distributed_utils.is_main_process():\n            # evaluation\n            if (epoch % args.eval_epoch) == 0 or (epoch == max_epoch - 1):\n                if args.ema:\n                    model_eval = ema.ema\n                else:\n                    model_eval = model_without_ddp\n\n                # check evaluator\n                if evaluator is None:\n                    print('No evaluator ... 
save model and go on training.')\n                    print('Saving state, epoch: {}'.format(epoch + 1))\n                    weight_name = '{}_epoch_{}.pth'.format(args.version, epoch + 1)\n                    checkpoint_path = os.path.join(path_to_save, weight_name)\n                    torch.save(model_eval.state_dict(), checkpoint_path)                      \n            \n                else:\n                    print('eval ...')\n                    # set eval mode\n                    model_eval.trainable = False\n                    model_eval.set_grid(val_size)\n                    model_eval.eval()\n\n                    # evaluate\n                    evaluator.evaluate(model_eval)\n\n                    cur_map = evaluator.map\n                    if cur_map > best_map:\n                        # update best-map\n                        best_map = cur_map\n                        # save model\n                        print('Saving state, epoch:', epoch + 1)\n                        weight_name = '{}_epoch_{}_{:.2f}.pth'.format(args.version, epoch + 1, best_map*100)\n                        checkpoint_path = os.path.join(path_to_save, weight_name)\n                        torch.save(model_eval.state_dict(), checkpoint_path)  \n\n                    if args.tfboard:\n                        if args.dataset == 'voc':\n                            tblogger.add_scalar('07test/mAP', evaluator.map, epoch)\n                        elif args.dataset == 'coco':\n                            tblogger.add_scalar('val/AP50_95', evaluator.ap50_95, epoch)\n                            tblogger.add_scalar('val/AP50', evaluator.ap50, epoch)\n\n                    # set train mode.\n                    model_eval.trainable = True\n                    model_eval.set_grid(train_size)\n                    model_eval.train()\n\n                # wait for all processes to synchronize\n                if args.distributed:\n                    dist.barrier()\n\n    if args.tfboard:\n        tblogger.close()\n\n\ndef set_lr(optimizer, lr):\n    for param_group in optimizer.param_groups:\n        param_group['lr'] = lr\n\n\ndef vis_data(images, targets, input_size):\n    # vis data\n    mean=(0.406, 0.456, 0.485)\n    std=(0.225, 0.224, 0.229)\n    mean = np.array(mean, dtype=np.float32)\n    std = np.array(std, dtype=np.float32)\n\n    img = images[0].permute(1, 2, 0).cpu().numpy()[:, :, ::-1]\n    img = ((img * std + mean)*255).astype(np.uint8)\n    img = img.copy()\n\n    for box in targets[0]:\n        xmin, ymin, xmax, ymax = box[:-1]\n        # print(xmin, ymin, xmax, ymax)\n        xmin *= input_size\n        ymin *= input_size\n        xmax *= input_size\n        ymax *= input_size\n        cv2.rectangle(img, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 0, 255), 2)\n\n    cv2.imshow('img', img)\n    cv2.waitKey(0)\n\n\nif __name__ == '__main__':\n    train()\n"
  },
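  {
    "path": "examples/warmup_lr_sketch.py",
    "content": "# Illustrative sketch, not a file from the original repo: it reproduces the\n# polynomial warm-up schedule hard-coded in train.py, where the learning rate\n# climbs from 0 to base_lr over the first wp_epoch epochs following\n# lr = base_lr * (ni / nw) ** 4, with ni the global step and nw the number of\n# warm-up steps. The base_lr / epoch_size / wp_epoch values below are made up.\n\n\ndef warmup_lr(base_lr, epoch, iter_i, epoch_size, wp_epoch):\n    ni = iter_i + epoch * epoch_size\n    nw = wp_epoch * epoch_size\n    return base_lr * pow(ni / nw, 4)\n\n\nif __name__ == '__main__':\n    base_lr, epoch_size, wp_epoch = 1e-3, 100, 2\n    for step in (0, 50, 100, 150, 199):\n        epoch, iter_i = divmod(step, epoch_size)\n        print('step %3d -> lr %.6f' % (step, warmup_lr(base_lr, epoch, iter_i, epoch_size, wp_epoch)))\n"
  },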
  {
    "path": "utils/__init__.py",
    "content": ""
  },
  {
    "path": "utils/augmentations.py",
    "content": "import cv2\nimport numpy as np\nfrom numpy import random\n\n\ndef intersect(box_a, box_b):\n    max_xy = np.minimum(box_a[:, 2:], box_b[2:])\n    min_xy = np.maximum(box_a[:, :2], box_b[:2])\n    inter = np.clip((max_xy - min_xy), a_min=0, a_max=np.inf)\n    return inter[:, 0] * inter[:, 1]\n\n\ndef jaccard_numpy(box_a, box_b):\n    \"\"\"Compute the jaccard overlap of two sets of boxes.  The jaccard overlap\n    is simply the intersection over union of two boxes.\n    E.g.:\n        A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)\n    Args:\n        box_a: Multiple bounding boxes, Shape: [num_boxes,4]\n        box_b: Single bounding box, Shape: [4]\n    Return:\n        jaccard overlap: Shape: [box_a.shape[0], box_a.shape[1]]\n    \"\"\"\n    inter = intersect(box_a, box_b)\n    area_a = ((box_a[:, 2]-box_a[:, 0]) *\n              (box_a[:, 3]-box_a[:, 1]))  # [A,B]\n    area_b = ((box_b[2]-box_b[0]) *\n              (box_b[3]-box_b[1]))  # [A,B]\n    union = area_a + area_b - inter\n    return inter / union  # [A,B]\n\n\nclass Compose(object):\n    \"\"\"Composes several augmentations together.\n    Args:\n        transforms (List[Transform]): list of transforms to compose.\n    Example:\n        >>> augmentations.Compose([\n        >>>     transforms.CenterCrop(10),\n        >>>     transforms.ToTensor(),\n        >>> ])\n    \"\"\"\n\n    def __init__(self, transforms):\n        self.transforms = transforms\n\n    def __call__(self, img, boxes=None, labels=None):\n        for t in self.transforms:\n            img, boxes, labels = t(img, boxes, labels)\n        return img, boxes, labels\n\n\nclass ConvertFromInts(object):\n    def __call__(self, image, boxes=None, labels=None):\n        return image.astype(np.float32), boxes, labels\n\n\nclass Normalize(object):\n    def __init__(self, mean=None, std=None):\n        self.mean = np.array(mean, dtype=np.float32)\n        self.std = np.array(std, dtype=np.float32)\n\n    def __call__(self, image, boxes=None, labels=None):\n        image = image.astype(np.float32)\n        image /= 255.\n        image -= self.mean\n        image /= self.std\n\n        return image, boxes, labels\n\n\nclass ToAbsoluteCoords(object):\n    def __call__(self, image, boxes=None, labels=None):\n        height, width, channels = image.shape\n        boxes[:, 0] *= width\n        boxes[:, 2] *= width\n        boxes[:, 1] *= height\n        boxes[:, 3] *= height\n\n        return image, boxes, labels\n\n\nclass ToPercentCoords(object):\n    def __call__(self, image, boxes=None, labels=None):\n        height, width, channels = image.shape\n        boxes[:, 0] /= width\n        boxes[:, 2] /= width\n        boxes[:, 1] /= height\n        boxes[:, 3] /= height\n\n        return image, boxes, labels\n\n\nclass Resize(object):\n    def __init__(self, size=416):\n        self.size = size\n\n    def __call__(self, image, boxes=None, labels=None):\n        image = cv2.resize(image, (self.size, self.size))\n        return image, boxes, labels\n\n\nclass RandomSaturation(object):\n    def __init__(self, lower=0.5, upper=1.5):\n        self.lower = lower\n        self.upper = upper\n        assert self.upper >= self.lower, \"contrast upper must be >= lower.\"\n        assert self.lower >= 0, \"contrast lower must be non-negative.\"\n\n    def __call__(self, image, boxes=None, labels=None):\n        if random.randint(2):\n            image[:, :, 1] *= random.uniform(self.lower, self.upper)\n\n        return image, boxes, labels\n\n\nclass 
RandomHue(object):\n    def __init__(self, delta=18.0):\n        assert delta >= 0.0 and delta <= 360.0\n        self.delta = delta\n\n    def __call__(self, image, boxes=None, labels=None):\n        if random.randint(2):\n            image[:, :, 0] += random.uniform(-self.delta, self.delta)\n            image[:, :, 0][image[:, :, 0] > 360.0] -= 360.0\n            image[:, :, 0][image[:, :, 0] < 0.0] += 360.0\n        return image, boxes, labels\n\n\nclass RandomLightingNoise(object):\n    def __init__(self):\n        self.perms = ((0, 1, 2), (0, 2, 1),\n                      (1, 0, 2), (1, 2, 0),\n                      (2, 0, 1), (2, 1, 0))\n\n    def __call__(self, image, boxes=None, labels=None):\n        if random.randint(2):\n            swap = self.perms[random.randint(len(self.perms))]\n            shuffle = SwapChannels(swap)  # shuffle channels\n            image = shuffle(image)\n        return image, boxes, labels\n\n\nclass ConvertColor(object):\n    def __init__(self, current='BGR', transform='HSV'):\n        self.transform = transform\n        self.current = current\n\n    def __call__(self, image, boxes=None, labels=None):\n        if self.current == 'BGR' and self.transform == 'HSV':\n            image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n        elif self.current == 'HSV' and self.transform == 'BGR':\n            image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)\n        else:\n            raise NotImplementedError\n        return image, boxes, labels\n\n\nclass RandomContrast(object):\n    def __init__(self, lower=0.5, upper=1.5):\n        self.lower = lower\n        self.upper = upper\n        assert self.upper >= self.lower, \"contrast upper must be >= lower.\"\n        assert self.lower >= 0, \"contrast lower must be non-negative.\"\n\n    # expects float image\n    def __call__(self, image, boxes=None, labels=None):\n        if random.randint(2):\n            alpha = random.uniform(self.lower, self.upper)\n            image *= alpha\n        return image, boxes, labels\n\n\nclass RandomBrightness(object):\n    def __init__(self, delta=32):\n        assert delta >= 0.0\n        assert delta <= 255.0\n        self.delta = delta\n\n    def __call__(self, image, boxes=None, labels=None):\n        if random.randint(2):\n            delta = random.uniform(-self.delta, self.delta)\n            image += delta\n        return image, boxes, labels\n\n\nclass RandomSampleCrop(object):\n    \"\"\"Crop\n    Arguments:\n        img (Image): the image being input during training\n        boxes (Tensor): the original bounding boxes in pt form\n        labels (Tensor): the class labels for each bbox\n        mode (float tuple): the min and max jaccard overlaps\n    Return:\n        (img, boxes, classes)\n            img (Image): the cropped image\n            boxes (Tensor): the adjusted bounding boxes in pt form\n            labels (Tensor): the class labels for each bbox\n    \"\"\"\n    def __init__(self):\n        self.sample_options = (\n            # using entire original input image\n            None,\n            # sample a patch s.t. 
MIN jaccard w/ obj in .1,.3,.4,.7,.9\n            (0.1, None),\n            (0.3, None),\n            (0.7, None),\n            (0.9, None),\n            # randomly sample a patch\n            (None, None),\n        )\n\n    def __call__(self, image, boxes=None, labels=None):\n        height, width, _ = image.shape\n        while True:\n            # randomly choose a mode\n            sample_id = np.random.randint(len(self.sample_options))\n            mode = self.sample_options[sample_id]\n            if mode is None:\n                return image, boxes, labels\n\n            min_iou, max_iou = mode\n            if min_iou is None:\n                min_iou = float('-inf')\n            if max_iou is None:\n                max_iou = float('inf')\n\n            # max trails (50)\n            for _ in range(50):\n                current_image = image\n\n                w = random.uniform(0.3 * width, width)\n                h = random.uniform(0.3 * height, height)\n\n                # aspect ratio constraint b/t .5 & 2\n                if h / w < 0.5 or h / w > 2:\n                    continue\n\n                left = random.uniform(width - w)\n                top = random.uniform(height - h)\n\n                # convert to integer rect x1,y1,x2,y2\n                rect = np.array([int(left), int(top), int(left+w), int(top+h)])\n\n                # calculate IoU (jaccard overlap) b/t the cropped and gt boxes\n                overlap = jaccard_numpy(boxes, rect)\n\n                # is min and max overlap constraint satisfied? if not try again\n                if overlap.min() < min_iou and max_iou < overlap.max():\n                    continue\n\n                # cut the crop from the image\n                current_image = current_image[rect[1]:rect[3], rect[0]:rect[2],\n                                              :]\n\n                # keep overlap with gt box IF center in sampled patch\n                centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0\n\n                # mask in all gt boxes that above and to the left of centers\n                m1 = (rect[0] < centers[:, 0]) * (rect[1] < centers[:, 1])\n\n                # mask in all gt boxes that under and to the right of centers\n                m2 = (rect[2] > centers[:, 0]) * (rect[3] > centers[:, 1])\n\n                # mask in that both m1 and m2 are true\n                mask = m1 * m2\n\n                # have any valid boxes? 
try again if not\n                if not mask.any():\n                    continue\n\n                # take only matching gt boxes\n                current_boxes = boxes[mask, :].copy()\n\n                # take only matching gt labels\n                current_labels = labels[mask]\n\n                # should we use the box left and top corner or the crop's\n                current_boxes[:, :2] = np.maximum(current_boxes[:, :2],\n                                                  rect[:2])\n                # adjust to crop (by substracting crop's left,top)\n                current_boxes[:, :2] -= rect[:2]\n\n                current_boxes[:, 2:] = np.minimum(current_boxes[:, 2:],\n                                                  rect[2:])\n                # adjust to crop (by substracting crop's left,top)\n                current_boxes[:, 2:] -= rect[:2]\n\n                return current_image, current_boxes, current_labels\n\n\nclass RandomMirror(object):\n    def __call__(self, image, boxes, classes):\n        _, width, _ = image.shape\n        if random.randint(2):\n            image = image[:, ::-1]\n            boxes = boxes.copy()\n            boxes[:, 0::2] = width - boxes[:, 2::-2]\n        return image, boxes, classes\n\n\nclass SwapChannels(object):\n    \"\"\"Transforms a tensorized image by swapping the channels in the order\n     specified in the swap tuple.\n    Args:\n        swaps (int triple): final order of channels\n            eg: (2, 1, 0)\n    \"\"\"\n\n    def __init__(self, swaps):\n        self.swaps = swaps\n\n    def __call__(self, image):\n        \"\"\"\n        Args:\n            image (Tensor): image tensor to be transformed\n        Return:\n            a tensor with channels swapped according to swap\n        \"\"\"\n        # if torch.is_tensor(image):\n        #     image = image.data.cpu().numpy()\n        # else:\n        #     image = np.array(image)\n        image = image[:, :, self.swaps]\n        return image\n\n\nclass PhotometricDistort(object):\n    def __init__(self):\n        self.pd = [\n            RandomContrast(),\n            ConvertColor(transform='HSV'),\n            RandomSaturation(),\n            RandomHue(),\n            ConvertColor(current='HSV', transform='BGR'),\n            RandomContrast()\n        ]\n        self.rand_brightness = RandomBrightness()\n        # self.rand_light_noise = RandomLightingNoise()\n\n    def __call__(self, image, boxes, labels):\n        im = image.copy()\n        im, boxes, labels = self.rand_brightness(im, boxes, labels)\n        if random.randint(2):\n            distort = Compose(self.pd[:-1])\n        else:\n            distort = Compose(self.pd[1:])\n        im, boxes, labels = distort(im, boxes, labels)\n        return im, boxes, labels\n        # return self.rand_light_noise(im, boxes, labels)\n\n\nclass SSDAugmentation(object):\n    def __init__(self, size=416, mean=(0.406, 0.456, 0.485), std=(0.225, 0.224, 0.229)):\n        self.mean = mean\n        self.size = size\n        self.std = std\n        self.augment = Compose([\n            ConvertFromInts(),\n            ToAbsoluteCoords(),\n            PhotometricDistort(),\n            RandomSampleCrop(),\n            RandomMirror(),\n            ToPercentCoords(),\n            Resize(self.size),\n            Normalize(self.mean, self.std)\n        ])\n\n    def __call__(self, img, boxes, labels):\n        return self.augment(img, boxes, labels)\n\n\nclass ColorAugmentation(object):\n    def __init__(self, size=416, mean=(0.406, 0.456, 0.485), 
std=(0.225, 0.224, 0.229)):\n        self.mean = mean\n        self.size = size\n        self.std = std\n        self.augment = Compose([\n            ConvertFromInts(),\n            ToAbsoluteCoords(),\n            PhotometricDistort(),\n            RandomMirror(),\n            ToPercentCoords(),\n            Resize(self.size),\n            Normalize(self.mean, self.std)\n        ])\n\n    def __call__(self, img, boxes, labels):\n        return self.augment(img, boxes, labels)\n"
  },
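  {
    "path": "examples/augmentation_demo.py",
    "content": "# Illustrative sketch, not a file from the original repo: it shows the calling\n# convention of utils.augmentations.SSDAugmentation as used in train.py -- a\n# HxWx3 uint8 BGR image, boxes in normalized [xmin, ymin, xmax, ymax] form and\n# one integer label per box. The random image and box below are stand-ins for\n# a real dataset sample; run this from the repository root.\nimport numpy as np\n\nfrom utils.augmentations import SSDAugmentation\n\n\nif __name__ == '__main__':\n    img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)\n    boxes = np.array([[0.1, 0.2, 0.5, 0.6]], dtype=np.float32)  # one normalized box\n    labels = np.array([7])  # its class index\n\n    transform = SSDAugmentation(size=416)\n    out_img, out_boxes, out_labels = transform(img, boxes, labels)\n    # the output image is a normalized 416x416 float array\n    print(out_img.shape, out_boxes, out_labels)\n"
  },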
  {
    "path": "utils/cocoapi_evaluator.py",
    "content": "import json\nimport tempfile\n\nfrom pycocotools.cocoeval import COCOeval\nfrom torch.autograd import Variable\n\nfrom data.coco2017 import *\nfrom data import *\n\n\nclass COCOAPIEvaluator():\n    \"\"\"\n    COCO AP Evaluation class.\n    All the data in the val2017 dataset are processed \\\n    and evaluated by COCO API.\n    \"\"\"\n    def __init__(self, data_dir, img_size, device, testset=False, transform=None):\n        \"\"\"\n        Args:\n            data_dir (str): dataset root directory\n            img_size (int): image size after preprocess. images are resized \\\n                to squares whose shape is (img_size, img_size).\n            confthre (float):\n                confidence threshold ranging from 0 to 1, \\\n                which is defined in the config file.\n            nmsthre (float):\n                IoU threshold of non-max supression ranging from 0 to 1.\n        \"\"\"\n        self.testset = testset\n        if self.testset:\n            json_file='image_info_test-dev2017.json'\n            name = 'test2017'\n        else:\n            json_file='instances_val2017.json'\n            name='val2017'\n\n        self.dataset = COCODataset(data_dir=data_dir,\n                                   json_file=json_file,\n                                   name=name)\n        self.img_size = img_size\n        self.transform = transform\n        self.device = device\n\n        self.map = 0.\n        self.ap50_95 = 0.\n        self.ap50 = 0.\n\n    def evaluate(self, model):\n        \"\"\"\n        COCO average precision (AP) Evaluation. Iterate inference on the test dataset\n        and the results are evaluated by COCO API.\n        Args:\n            model : model object\n        Returns:\n            ap50_95 (float) : calculated COCO AP for IoU=50:95\n            ap50 (float) : calculated COCO AP for IoU=50\n        \"\"\"\n        model.eval()\n        ids = []\n        data_dict = []\n        num_images = len(self.dataset)\n        print('total number of images: %d' % (num_images))\n\n        # start testing\n        for index in range(num_images): # all the data in val2017\n            if index % 500 == 0:\n                print('[Eval: %d / %d]'%(index, num_images))\n\n            img, id_ = self.dataset.pull_image(index)  # load a batch\n            if self.transform is not None:\n                x = torch.from_numpy(self.transform(img)[0][:, :, (2, 1, 0)]).permute(2, 0, 1)\n                x = x.unsqueeze(0).to(self.device)\n            scale = np.array([[img.shape[1], img.shape[0],\n                            img.shape[1], img.shape[0]]])\n            \n            id_ = int(id_)\n            ids.append(id_)\n            with torch.no_grad():\n                outputs = model(x)\n                bboxes, scores, cls_inds = outputs\n                bboxes *= scale\n            for i, box in enumerate(bboxes):\n                x1 = float(box[0])\n                y1 = float(box[1])\n                x2 = float(box[2])\n                y2 = float(box[3])\n                label = self.dataset.class_ids[int(cls_inds[i])]\n                \n                bbox = [x1, y1, x2 - x1, y2 - y1]\n                score = float(scores[i]) # object score * class score\n                A = {\"image_id\": id_, \"category_id\": label, \"bbox\": bbox,\n                     \"score\": score} # COCO json format\n                data_dict.append(A)\n\n        annType = ['segm', 'bbox', 'keypoints']\n\n        # Evaluate the Dt (detection) json comparing with the 
ground truth\n        if len(data_dict) > 0:\n            print('evaluating ......')\n            cocoGt = self.dataset.coco\n            # For test\n            if self.testset:\n                json.dump(data_dict, open('yolov2_2017.json', 'w'))\n                cocoDt = cocoGt.loadRes('yolov2_2017.json')\n                print('inference on test-dev is done !!')\n                return -1, -1\n            # For val\n            else:\n                _, tmp = tempfile.mkstemp()\n                json.dump(data_dict, open(tmp, 'w'))\n                cocoDt = cocoGt.loadRes(tmp)\n                cocoEval = COCOeval(self.dataset.coco, cocoDt, annType[1])\n                cocoEval.params.imgIds = ids\n                cocoEval.evaluate()\n                cocoEval.accumulate()\n                cocoEval.summarize()\n\n                ap50_95, ap50 = cocoEval.stats[0], cocoEval.stats[1]\n                print('ap50_95 : ', ap50_95)\n                print('ap50 : ', ap50)\n                self.map = ap50_95\n                self.ap50_95 = ap50_95\n                self.ap50 = ap50\n\n                return ap50, ap50_95\n        else:\n            return 0, 0\n\n"
  },
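  {
    "path": "examples/coco_det_format.py",
    "content": "# Illustrative sketch, not a file from the original repo: it shows the COCO\n# detection-json entry that utils.cocoapi_evaluator.COCOAPIEvaluator builds for\n# every predicted box before handing the list to pycocotools. COCO expects\n# [x, y, width, height] rather than the [x1, y1, x2, y2] corners the model\n# outputs; all numbers below are made up.\n\nif __name__ == '__main__':\n    x1, y1, x2, y2 = 10.0, 20.0, 110.0, 220.0  # predicted corners\n    det = {'image_id': 42,  # COCO image id from the annotation file\n           'category_id': 1,  # COCO class id, not the contiguous class index\n           'bbox': [x1, y1, x2 - x1, y2 - y1],  # xyxy -> xywh\n           'score': 0.87}  # object score * class score\n    print(det)\n"
  },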
  {
    "path": "utils/com_paras_flops.py",
    "content": "import torch\nfrom thop import profile\n\n\ndef FLOPs_and_Params(model, size, device):\n    x = torch.randn(1, 3, size, size).to(device)\n    model.trainable = False\n    model.eval()\n\n    flops, params = profile(model, inputs=(x, ))\n    print('FLOPs : ', flops / 1e9, ' B')\n    print('Params : ', params / 1e6, ' M')\n\n    model.trainable = True\n    model.train()\n\n\nif __name__ == \"__main__\":\n    pass\n"
  },
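  {
    "path": "examples/flops_demo.py",
    "content": "# Illustrative sketch, not a file from the original repo: it feeds a toy model\n# through utils.com_paras_flops.FLOPs_and_Params, which profiles a single\n# (1, 3, size, size) input with thop and toggles the model's `trainable` flag\n# around the measurement. The TinyNet module is a made-up stand-in for a YOLO\n# detector.\nimport torch.nn as nn\n\nfrom utils.com_paras_flops import FLOPs_and_Params\n\n\nclass TinyNet(nn.Module):\n    def __init__(self):\n        super(TinyNet, self).__init__()\n        self.conv = nn.Conv2d(3, 8, 3, padding=1)\n        self.trainable = True  # attribute the profiler expects to toggle\n\n    def forward(self, x):\n        return self.conv(x)\n\n\nif __name__ == '__main__':\n    FLOPs_and_Params(model=TinyNet(), size=416, device='cpu')\n"
  },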
  {
    "path": "utils/distributed_utils.py",
    "content": "# from github: https://github.com/ruinmessi/ASFF/blob/master/utils/distributed_util.py\n\nimport torch\nimport torch.distributed as dist\nimport os\nimport subprocess\nimport pickle\n\n\ndef all_gather(data):\n    \"\"\"\n    Run all_gather on arbitrary picklable data (not necessarily tensors)\n    Args:\n        data: any picklable object\n    Returns:\n        list[data]: list of data gathered from each rank\n    \"\"\"\n    world_size = get_world_size()\n    if world_size == 1:\n        return [data]\n\n    # serialized to a Tensor\n    buffer = pickle.dumps(data)\n    storage = torch.ByteStorage.from_buffer(buffer)\n    tensor = torch.ByteTensor(storage).to(\"cuda\")\n\n    # obtain Tensor size of each rank\n    local_size = torch.tensor([tensor.numel()], device=\"cuda\")\n    size_list = [torch.tensor([0], device=\"cuda\") for _ in range(world_size)]\n    dist.all_gather(size_list, local_size)\n    size_list = [int(size.item()) for size in size_list]\n    max_size = max(size_list)\n\n    # receiving Tensor from all ranks\n    # we pad the tensor because torch all_gather does not support\n    # gathering tensors of different shapes\n    tensor_list = []\n    for _ in size_list:\n        tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=\"cuda\"))\n    if local_size != max_size:\n        padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=\"cuda\")\n        tensor = torch.cat((tensor, padding), dim=0)\n    dist.all_gather(tensor_list, tensor)\n\n    data_list = []\n    for size, tensor in zip(size_list, tensor_list):\n        buffer = tensor.cpu().numpy().tobytes()[:size]\n        data_list.append(pickle.loads(buffer))\n\n    return data_list\n\n\ndef reduce_dict(input_dict, average=True):\n    \"\"\"\n    Args:\n        input_dict (dict): all the values will be reduced\n        average (bool): whether to do average or sum\n    Reduce the values in the dictionary from all processes so that all processes\n    have the averaged results. 
Returns a dict with the same fields as\n    input_dict, after reduction.\n    \"\"\"\n    world_size = get_world_size()\n    if world_size < 2:\n        return input_dict\n    with torch.no_grad():\n        names = []\n        values = []\n        # sort the keys so that they are consistent across processes\n        for k in sorted(input_dict.keys()):\n            names.append(k)\n            values.append(input_dict[k])\n        values = torch.stack(values, dim=0)\n        dist.all_reduce(values)\n        if average:\n            values /= world_size\n        reduced_dict = {k: v for k, v in zip(names, values)}\n    return reduced_dict\n\n\ndef get_sha():\n    cwd = os.path.dirname(os.path.abspath(__file__))\n\n    def _run(command):\n        return subprocess.check_output(command, cwd=cwd).decode('ascii').strip()\n    sha = 'N/A'\n    diff = \"clean\"\n    branch = 'N/A'\n    try:\n        sha = _run(['git', 'rev-parse', 'HEAD'])\n        subprocess.check_output(['git', 'diff'], cwd=cwd)\n        diff = _run(['git', 'diff-index', 'HEAD'])\n        diff = \"has uncommitted changes\" if diff else \"clean\"\n        branch = _run(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])\n    except Exception:\n        pass\n    message = f\"sha: {sha}, status: {diff}, branch: {branch}\"\n    return message\n\n\ndef setup_for_distributed(is_master):\n    \"\"\"\n    This function disables printing when not in master process\n    \"\"\"\n    import builtins as __builtin__\n    builtin_print = __builtin__.print\n\n    def print(*args, **kwargs):\n        force = kwargs.pop('force', False)\n        if is_master or force:\n            builtin_print(*args, **kwargs)\n\n    __builtin__.print = print\n\n\ndef is_dist_avail_and_initialized():\n    if not dist.is_available():\n        return False\n    if not dist.is_initialized():\n        return False\n    return True\n\n\ndef get_world_size():\n    if not is_dist_avail_and_initialized():\n        return 1\n    return dist.get_world_size()\n\n\ndef get_rank():\n    if not is_dist_avail_and_initialized():\n        return 0\n    return dist.get_rank()\n\n\ndef is_main_process():\n    return get_rank() == 0\n\n\ndef save_on_master(*args, **kwargs):\n    if is_main_process():\n        torch.save(*args, **kwargs)\n\n\ndef init_distributed_mode(args):\n    if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:\n        args.rank = int(os.environ[\"RANK\"])\n        args.world_size = int(os.environ['WORLD_SIZE'])\n        args.gpu = int(os.environ['LOCAL_RANK'])\n    elif 'SLURM_PROCID' in os.environ:\n        args.rank = int(os.environ['SLURM_PROCID'])\n        args.gpu = args.rank % torch.cuda.device_count()\n    else:\n        print('Not using distributed mode')\n        args.distributed = False\n        return\n\n    args.distributed = True\n\n    torch.cuda.set_device(args.gpu)\n    args.dist_backend = 'nccl'\n    print('| distributed init (rank {}): {}'.format(\n        args.rank, args.dist_url), flush=True)\n    torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,\n                                         world_size=args.world_size, rank=args.rank)\n    torch.distributed.barrier()\n    setup_for_distributed(args.rank == 0)\n"
  },
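  {
    "path": "examples/reduce_dict_demo.py",
    "content": "# Illustrative sketch, not a file from the original repo: it shows the contract\n# of utils.distributed_utils.reduce_dict, which train.py uses to average its\n# loss dict across ranks before logging. With a world size of 1 (no\n# torch.distributed initialization) the function returns its input unchanged,\n# so this demo runs as a plain single-process script.\nimport torch\n\nfrom utils.distributed_utils import get_world_size, reduce_dict\n\n\nif __name__ == '__main__':\n    loss_dict = dict(conf_loss=torch.tensor(1.5),\n                     cls_loss=torch.tensor(0.7),\n                     total_loss=torch.tensor(2.2))\n    print('world size:', get_world_size())\n    reduced = reduce_dict(loss_dict)  # averaged over ranks when world size > 1\n    print({k: v.item() for k, v in reduced.items()})\n"
  },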
  {
    "path": "utils/kmeans_anchor.py",
    "content": "import numpy as np\nimport random\nimport argparse\nimport os\nimport sys\nsys.path.append('..')\n\nfrom data.voc0712 import VOCDetection\nfrom data.coco2017 import COCODataset\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='kmeans for anchor box')\n\n    parser.add_argument('-root', '--data_root', default='/mnt/share/ssd2/dataset',\n                        help='dataset root')\n    parser.add_argument('-d', '--dataset', default='coco',\n                        help='coco, voc.')\n    parser.add_argument('-na', '--num_anchorbox', default=9, type=int,\n                        help='number of anchor box.')\n    parser.add_argument('-size', '--input_size', default=416, type=int,\n                        help='input size.')\n    parser.add_argument('--scale', action='store_true', default=False,\n                        help='divide the sizes of anchor boxes by 32 .')\n    return parser.parse_args()\n                    \nargs = parse_args()\n                    \n\nclass Box():\n    def __init__(self, x, y, w, h):\n        self.x = x\n        self.y = y\n        self.w = w\n        self.h = h\n\n\ndef iou(box1, box2):\n    x1, y1, w1, h1 = box1.x, box1.y, box1.w, box1.h\n    x2, y2, w2, h2 = box2.x, box2.y, box2.w, box2.h\n\n    S_1 = w1 * h1\n    S_2 = w2 * h2\n\n    xmin_1, ymin_1 = x1 - w1 / 2, y1 - h1 / 2\n    xmax_1, ymax_1 = x1 + w1 / 2, y1 + h1 / 2\n    xmin_2, ymin_2 = x2 - w2 / 2, y2 - h2 / 2\n    xmax_2, ymax_2 = x2 + w2 / 2, y2 + h2 / 2\n\n    I_w = min(xmax_1, xmax_2) - max(xmin_1, xmin_2)\n    I_h = min(ymax_1, ymax_2) - max(ymin_1, ymin_2)\n    if I_w < 0 or I_h < 0:\n        return 0\n    I = I_w * I_h\n\n    IoU = I / (S_1 + S_2 - I)\n\n    return IoU\n\n\ndef init_centroids(boxes, n_anchors):\n    \"\"\"\n        We use kmeans++ to initialize centroids.\n    \"\"\"\n    centroids = []\n    boxes_num = len(boxes)\n\n    centroid_index = int(np.random.choice(boxes_num, 1)[0])\n    centroids.append(boxes[centroid_index])\n    print(centroids[0].w,centroids[0].h)\n\n    for centroid_index in range(0, n_anchors-1):\n        sum_distance = 0\n        distance_thresh = 0\n        distance_list = []\n        cur_sum = 0\n\n        for box in boxes:\n            min_distance = 1\n            for centroid_i, centroid in enumerate(centroids):\n                distance = (1 - iou(box, centroid))\n                if distance < min_distance:\n                    min_distance = distance\n            sum_distance += min_distance\n            distance_list.append(min_distance)\n\n        distance_thresh = sum_distance * np.random.random()\n\n        for i in range(0, boxes_num):\n            cur_sum += distance_list[i]\n            if cur_sum > distance_thresh:\n                centroids.append(boxes[i])\n                print(boxes[i].w, boxes[i].h)\n                break\n    return centroids\n\n\ndef do_kmeans(n_anchors, boxes, centroids):\n    loss = 0\n    groups = []\n    new_centroids = []\n    # for box in centroids:\n    #     print('box: ', box.x, box.y, box.w, box.h)\n    # exit()\n    for i in range(n_anchors):\n        groups.append([])\n        new_centroids.append(Box(0, 0, 0, 0))\n    \n    for box in boxes:\n        min_distance = 1\n        group_index = 0\n        for centroid_index, centroid in enumerate(centroids):\n            distance = (1 - iou(box, centroid))\n            if distance < min_distance:\n                min_distance = distance\n                group_index = centroid_index\n        groups[group_index].append(box)\n  
      loss += min_distance\n        new_centroids[group_index].w += box.w\n        new_centroids[group_index].h += box.h\n\n    for i in range(n_anchors):\n        new_centroids[i].w /= max(len(groups[i]), 1)\n        new_centroids[i].h /= max(len(groups[i]), 1)\n\n    return new_centroids, groups, loss\n\n\ndef anchor_box_kmeans(total_gt_boxes, n_anchors, loss_convergence, iters, plus=True):\n    \"\"\"\n        This function will use k-means to get appropriate anchor boxes for the train dataset.\n        Input:\n            total_gt_boxes: list of Box objects holding all ground-truth box sizes.\n            n_anchors : int -> the number of anchor boxes.\n            loss_convergence : float -> threshold of iterating convergence.\n            iters: int -> the number of iterations for training kmeans.\n        Output: anchor_boxes : list -> [[w1, h1], [w2, h2], ..., [wn, hn]].\n    \"\"\"\n    boxes = total_gt_boxes\n    centroids = []\n    if plus:\n        centroids = init_centroids(boxes, n_anchors)\n    else:\n        total_indices = range(len(boxes))\n        sample_indices = random.sample(total_indices, n_anchors)\n        for i in sample_indices:\n            centroids.append(boxes[i])\n\n    # iterate k-means\n    centroids, groups, old_loss = do_kmeans(n_anchors, boxes, centroids)\n    iterations = 1\n    while True:\n        centroids, groups, loss = do_kmeans(n_anchors, boxes, centroids)\n        iterations += 1\n        print(\"Loss = %f\" % loss)\n        if abs(old_loss - loss) < loss_convergence or iterations > iters:\n            break\n        old_loss = loss\n\n        for centroid in centroids:\n            print(centroid.w, centroid.h)\n    \n    print(\"k-means result : \")\n    for centroid in centroids:\n        if args.scale:\n            print(\"w, h: \", round(centroid.w / 32., 2), round(centroid.h / 32., 2), \n                \"area: \", round(centroid.w / 32., 2) * round(centroid.h / 32., 2))\n        else:\n            print(\"w, h: \", round(centroid.w, 2), round(centroid.h, 2), \n                \"area: \", round(centroid.w, 2) * round(centroid.h, 2))\n    \n    return centroids\n\n\nif __name__ == \"__main__\":\n\n    n_anchors = args.num_anchorbox\n    img_size = args.input_size\n    \n    loss_convergence = 1e-6\n    iters_n = 1000\n    \n    boxes = []\n    print(\"Loading the dataset ...\")\n\n    # VOC\n    if args.dataset == 'voc':\n        dataset_voc = VOCDetection(data_dir=os.path.join(args.data_root, 'VOCdevkit'), \n                                    img_size=img_size)\n        print(\"The dataset size: \", len(dataset_voc))\n        for i in range(len(dataset_voc)):\n            if i % 5000 == 0:\n                print('Loading voc data [%d / %d]' % (i+1, len(dataset_voc)))\n\n            img, _ = dataset_voc.pull_image(i)\n            w, h = img.shape[1], img.shape[0]\n            _, annotation = dataset_voc.pull_anno(i)\n\n            # prepare bbox datas\n            for box_and_label in annotation:\n                box = box_and_label[:-1]\n                xmin, ymin, xmax, ymax = box\n                bw = (xmax - xmin) / w * img_size\n                bh = (ymax - ymin) / h * img_size\n                # check bbox\n                if bw < 1.0 or bh < 1.0:\n                    continue\n                boxes.append(Box(0, 0, bw, bh))\n\n    # COCO\n    elif args.dataset == 'coco':\n        dataset_coco = COCODataset(data_dir=os.path.join(args.data_root, 'COCO'),\n                                    img_size=img_size)\n        print(\"The dataset size: \", len(dataset_coco))\n        for i in range(len(dataset_coco)):\n            if i % 5000 == 0:\n                print('Loading coco data [%d / %d]' % (i+1, len(dataset_coco)))\n\n            img, _ = dataset_coco.pull_image(i)\n            w, h = img.shape[1], img.shape[0]\n            annotation = dataset_coco.pull_anno(i)\n\n            # prepare bbox datas\n            for box_and_label in annotation:\n                box = box_and_label[:-1]\n                xmin, ymin, xmax, ymax = box\n                bw = (xmax - xmin) / w * img_size\n                bh = (ymax - ymin) / h * img_size\n                # check bbox\n                if bw < 1.0 or bh < 1.0:\n                    continue\n                boxes.append(Box(0, 0, bw, bh))\n\n    print(\"Number of all bboxes: \", len(boxes))\n    print(\"Start k-means !\")\n    centroids = anchor_box_kmeans(boxes, n_anchors, loss_convergence, iters_n, plus=True)\n"
  },
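  {
    "path": "examples/anchor_iou_sketch.py",
    "content": "# Illustrative standalone sketch, not a file from the original repo: the\n# k-means in utils/kmeans_anchor.py clusters ground-truth (w, h) pairs with the\n# distance d(box, centroid) = 1 - IoU(box, centroid), treating every box as\n# centered at the origin. For co-centered boxes the intersection reduces to\n# min(w1, w2) * min(h1, h2), which is what utils.kmeans_anchor.iou computes\n# when x = y = 0. The sample sizes below are made up.\n\n\ndef wh_iou(w1, h1, w2, h2):\n    # IoU of two axis-aligned boxes that share the same center\n    inter = min(w1, w2) * min(h1, h2)\n    union = w1 * h1 + w2 * h2 - inter\n    return inter / union\n\n\nif __name__ == '__main__':\n    # a ground-truth box against a candidate anchor\n    print('k-means distance:', 1.0 - wh_iou(30, 40, 32, 32))\n"
  },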
  {
    "path": "utils/modules.py",
    "content": "import math\nimport torch\nimport torch.nn as nn\nfrom copy import deepcopy\n\n\nclass Conv(nn.Module):\n    def __init__(self, in_ch, out_ch, k=1, p=0, s=1, d=1, g=1, act=True):\n        super(Conv, self).__init__()\n        if act:\n            self.convs = nn.Sequential(\n                nn.Conv2d(in_ch, out_ch, k, stride=s, padding=p, dilation=d, groups=g),\n                nn.BatchNorm2d(out_ch),\n                nn.LeakyReLU(0.1, inplace=True)\n            )\n        else:\n            self.convs = nn.Sequential(\n                nn.Conv2d(in_ch, out_ch, k, stride=s, padding=p, dilation=d, groups=g),\n                nn.BatchNorm2d(out_ch)\n            )\n\n    def forward(self, x):\n        return self.convs(x)\n\n\nclass UpSample(nn.Module):\n    def __init__(self, size=None, scale_factor=None, mode='nearest', align_corner=None):\n        super(UpSample, self).__init__()\n        self.size = size\n        self.scale_factor = scale_factor\n        self.mode = mode\n        self.align_corner = align_corner\n\n    def forward(self, x):\n        return torch.nn.functional.interpolate(x, size=self.size, scale_factor=self.scale_factor, \n                                                mode=self.mode, align_corners=self.align_corner)\n\n\nclass reorg_layer(nn.Module):\n    def __init__(self, stride):\n        super(reorg_layer, self).__init__()\n        self.stride = stride\n\n    def forward(self, x):\n        batch_size, channels, height, width = x.size()\n        _height, _width = height // self.stride, width // self.stride\n        \n        x = x.view(batch_size, channels, _height, self.stride, _width, self.stride).transpose(3, 4).contiguous()\n        x = x.view(batch_size, channels, _height * _width, self.stride * self.stride).transpose(2, 3).contiguous()\n        x = x.view(batch_size, channels, self.stride * self.stride, _height, _width).transpose(1, 2).contiguous()\n        x = x.view(batch_size, -1, _height, _width)\n\n        return x\n\n\nclass SPP(nn.Module):\n    \"\"\"\n        Spatial Pyramid Pooling\n    \"\"\"\n    def __init__(self):\n        super(SPP, self).__init__()\n\n    def forward(self, x):\n        x_1 = torch.nn.functional.max_pool2d(x, 5, stride=1, padding=2)\n        x_2 = torch.nn.functional.max_pool2d(x, 9, stride=1, padding=4)\n        x_3 = torch.nn.functional.max_pool2d(x, 13, stride=1, padding=6)\n        x = torch.cat([x, x_1, x_2, x_3], dim=1)\n\n        return x\n\n\nclass ModelEMA(object):\n    def __init__(self, model, decay=0.9999, updates=0):\n        # create EMA\n        self.ema = deepcopy(model).eval()\n        self.updates = updates\n        self.decay = lambda x: decay * (1 - math.exp(-x / 2000.))\n        for p in self.ema.parameters():\n            p.requires_grad_(False)\n\n    def update(self, model):\n        # Update EMA parameters\n        with torch.no_grad():\n            self.updates += 1\n            d = self.decay(self.updates)\n\n            msd = model.state_dict()\n            for k, v in self.ema.state_dict().items():\n                if v.dtype.is_floating_point:\n                    v *= d\n                    v += (1. - d) * msd[k].detach()\n"
  },
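  {
    "path": "examples/ema_demo.py",
    "content": "# Illustrative sketch, not a file from the original repo: it wires\n# utils.modules.ModelEMA into a toy training step the same way train.py does --\n# create it once, call update(model) after every optimizer step, and evaluate\n# with the smoothed copy in ema.ema. The linear model and random data are\n# placeholders.\nimport torch\nimport torch.nn as nn\n\nfrom utils.modules import ModelEMA\n\n\nif __name__ == '__main__':\n    model = nn.Linear(4, 2)\n    ema = ModelEMA(model)\n    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n\n    for _ in range(3):\n        x, y = torch.randn(8, 4), torch.randn(8, 2)\n        loss = nn.functional.mse_loss(model(x), y)\n        loss.backward()\n        optimizer.step()\n        optimizer.zero_grad()\n        ema.update(model)  # EMA weights track the trained weights\n\n    eval_model = ema.ema  # smoothed copy used for evaluation\n    print(eval_model(torch.randn(1, 4)))\n"
  },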
  {
    "path": "utils/vocapi_evaluator.py",
    "content": "\"\"\"Adapted from:\n    @longcw faster_rcnn_pytorch: https://github.com/longcw/faster_rcnn_pytorch\n    @rbgirshick py-faster-rcnn https://github.com/rbgirshick/py-faster-rcnn\n    Licensed under The MIT License [see LICENSE for details]\n\"\"\"\n\nfrom torch.autograd import Variable\nfrom data.voc0712 import VOCDetection, VOC_CLASSES\nimport sys\nimport os\nimport time\nimport numpy as np\nimport pickle\nimport xml.etree.ElementTree as ET\n\n\nclass VOCAPIEvaluator():\n    \"\"\" VOC AP Evaluation class \"\"\"\n    def __init__(self, data_root, img_size, device, transform, set_type='test', year='2007', display=False):\n        self.data_root = data_root\n        self.img_size = img_size\n        self.device = device\n        self.transform = transform\n        self.labelmap = VOC_CLASSES\n        self.set_type = set_type\n        self.year = year\n        self.display = display\n\n        # path\n        self.devkit_path = data_root + 'VOC' + year\n        self.annopath = os.path.join(data_root, 'VOC2007', 'Annotations', '%s.xml')\n        self.imgpath = os.path.join(data_root, 'VOC2007', 'JPEGImages', '%s.jpg')\n        self.imgsetpath = os.path.join(data_root, 'VOC2007', 'ImageSets', 'Main', set_type+'.txt')\n        self.output_dir = self.get_output_dir('voc_eval/', self.set_type)\n\n        # dataset\n        self.dataset = VOCDetection(data_dir=data_root, \n                                    image_sets=[('2007', set_type)],\n                                    transform=transform\n                                    )\n\n    def evaluate(self, net):\n        net.eval()\n        num_images = len(self.dataset)\n        # all detections are collected into:\n        #    all_boxes[cls][image] = N x 5 array of detections in\n        #    (x1, y1, x2, y2, score)\n        self.all_boxes = [[[] for _ in range(num_images)]\n                        for _ in range(len(self.labelmap))]\n\n        # timers\n        det_file = os.path.join(self.output_dir, 'detections.pkl')\n\n        for i in range(num_images):\n            im, gt, h, w = self.dataset.pull_item(i)\n\n            x = Variable(im.unsqueeze(0)).to(self.device)\n            t0 = time.time()\n            # forward\n            bboxes, scores, cls_inds = net(x)\n            detect_time = time.time() - t0\n            scale = np.array([[w, h, w, h]])\n            bboxes *= scale\n\n            for j in range(len(self.labelmap)):\n                inds = np.where(cls_inds == j)[0]\n                if len(inds) == 0:\n                    self.all_boxes[j][i] = np.empty([0, 5], dtype=np.float32)\n                    continue\n                c_bboxes = bboxes[inds]\n                c_scores = scores[inds]\n                c_dets = np.hstack((c_bboxes,\n                                    c_scores[:, np.newaxis])).astype(np.float32,\n                                                                    copy=False)\n                self.all_boxes[j][i] = c_dets\n\n            if i % 500 == 0:\n                print('im_detect: {:d}/{:d} {:.3f}s'.format(i + 1, num_images, detect_time))\n\n        with open(det_file, 'wb') as f:\n            pickle.dump(self.all_boxes, f, pickle.HIGHEST_PROTOCOL)\n\n        print('Evaluating detections')\n        self.evaluate_detections(self.all_boxes)\n\n        print('Mean AP: ', self.map)\n  \n\n    def parse_rec(self, filename):\n        \"\"\" Parse a PASCAL VOC xml file \"\"\"\n        tree = ET.parse(filename)\n        objects = []\n        for obj in tree.findall('object'):\n            
obj_struct = {}\n            obj_struct['name'] = obj.find('name').text\n            obj_struct['pose'] = obj.find('pose').text\n            obj_struct['truncated'] = int(obj.find('truncated').text)\n            obj_struct['difficult'] = int(obj.find('difficult').text)\n            bbox = obj.find('bndbox')\n            obj_struct['bbox'] = [int(bbox.find('xmin').text),\n                                int(bbox.find('ymin').text),\n                                int(bbox.find('xmax').text),\n                                int(bbox.find('ymax').text)]\n            objects.append(obj_struct)\n\n        return objects\n\n\n    def get_output_dir(self, name, phase):\n        \"\"\"Return the directory where experimental artifacts are placed.\n        If the directory does not exist, it is created.\n        A canonical path is built using the name from an imdb and a network\n        (if not None).\n        \"\"\"\n        filedir = os.path.join(name, phase)\n        if not os.path.exists(filedir):\n            os.makedirs(filedir)\n        return filedir\n\n\n    def get_voc_results_file_template(self, cls):\n        # VOCdevkit/VOC2007/results/det_test_aeroplane.txt\n        filename = 'det_' + self.set_type + '_%s.txt' % (cls)\n        filedir = os.path.join(self.devkit_path, 'results')\n        if not os.path.exists(filedir):\n            os.makedirs(filedir)\n        path = os.path.join(filedir, filename)\n        return path\n\n\n    def write_voc_results_file(self, all_boxes):\n        for cls_ind, cls in enumerate(self.labelmap):\n            if self.display:\n                print('Writing {:s} VOC results file'.format(cls))\n            filename = self.get_voc_results_file_template(cls)\n            with open(filename, 'wt') as f:\n                for im_ind, index in enumerate(self.dataset.ids):\n                    dets = all_boxes[cls_ind][im_ind]\n                    if dets == []:\n                        continue\n                    # the VOCdevkit expects 1-based indices\n                    for k in range(dets.shape[0]):\n                        f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\\n'.\n                                format(index[1], dets[k, -1],\n                                    dets[k, 0] + 1, dets[k, 1] + 1,\n                                    dets[k, 2] + 1, dets[k, 3] + 1))\n\n\n    def do_python_eval(self, use_07=True):\n        cachedir = os.path.join(self.devkit_path, 'annotations_cache')\n        aps = []\n        # The PASCAL VOC metric changed in 2010\n        use_07_metric = use_07\n        print('VOC07 metric? 
' + ('Yes' if use_07_metric else 'No'))\n        if not os.path.isdir(self.output_dir):\n            os.mkdir(self.output_dir)\n        for i, cls in enumerate(self.labelmap):\n            filename = self.get_voc_results_file_template(cls)\n            rec, prec, ap = self.voc_eval(detpath=filename, \n                                          classname=cls, \n                                          cachedir=cachedir, \n                                          ovthresh=0.5, \n                                          use_07_metric=use_07_metric\n                                        )\n            aps += [ap]\n            print('AP for {} = {:.4f}'.format(cls, ap))\n            with open(os.path.join(self.output_dir, cls + '_pr.pkl'), 'wb') as f:\n                pickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)\n        if self.display:\n            self.map = np.mean(aps)\n            print('Mean AP = {:.4f}'.format(np.mean(aps)))\n            print('~~~~~~~~')\n            print('Results:')\n            for ap in aps:\n                print('{:.3f}'.format(ap))\n            print('{:.3f}'.format(np.mean(aps)))\n            print('~~~~~~~~')\n            print('')\n            print('--------------------------------------------------------------')\n            print('Results computed with the **unofficial** Python eval code.')\n            print('Results should be very close to the official MATLAB eval code.')\n            print('--------------------------------------------------------------')\n        else:\n            self.map = np.mean(aps)\n            print('Mean AP = {:.4f}'.format(np.mean(aps)))\n\n\n    def voc_ap(self, rec, prec, use_07_metric=True):\n        \"\"\" ap = voc_ap(rec, prec, [use_07_metric])\n        Compute VOC AP given precision and recall.\n        If use_07_metric is true, uses the\n        VOC 07 11 point method (default:True).\n        \"\"\"\n        if use_07_metric:\n            # 11 point metric\n            ap = 0.\n            for t in np.arange(0., 1.1, 0.1):\n                if np.sum(rec >= t) == 0:\n                    p = 0\n                else:\n                    p = np.max(prec[rec >= t])\n                ap = ap + p / 11.\n        else:\n            # correct AP calculation\n            # first append sentinel values at the end\n            mrec = np.concatenate(([0.], rec, [1.]))\n            mpre = np.concatenate(([0.], prec, [0.]))\n\n            # compute the precision envelope\n            for i in range(mpre.size - 1, 0, -1):\n                mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])\n\n            # to calculate area under PR curve, look for points\n            # where X axis (recall) changes value\n            i = np.where(mrec[1:] != mrec[:-1])[0]\n\n            # and sum (\\Delta recall) * prec\n            ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])\n        return ap\n\n\n    def voc_eval(self, detpath, classname, cachedir, ovthresh=0.5, use_07_metric=True):\n        if not os.path.isdir(cachedir):\n            os.mkdir(cachedir)\n        cachefile = os.path.join(cachedir, 'annots.pkl')\n        # read list of images\n        with open(self.imgsetpath, 'r') as f:\n            lines = f.readlines()\n        imagenames = [x.strip() for x in lines]\n        if not os.path.isfile(cachefile):\n            # load annots\n            recs = {}\n            for i, imagename in enumerate(imagenames):\n                recs[imagename] = self.parse_rec(self.annopath % (imagename))\n                if i % 100 == 0 and 
self.display:\n                    print('Reading annotation for {:d}/{:d}'.format(\n                    i + 1, len(imagenames)))\n            # save\n            if self.display:\n                print('Saving cached annotations to {:s}'.format(cachefile))\n            with open(cachefile, 'wb') as f:\n                pickle.dump(recs, f)\n        else:\n            # load\n            with open(cachefile, 'rb') as f:\n                recs = pickle.load(f)\n\n        # extract gt objects for this class\n        class_recs = {}\n        npos = 0\n        for imagename in imagenames:\n            R = [obj for obj in recs[imagename] if obj['name'] == classname]\n            bbox = np.array([x['bbox'] for x in R])\n            difficult = np.array([x['difficult'] for x in R]).astype(bool)\n            det = [False] * len(R)\n            npos = npos + sum(~difficult)\n            class_recs[imagename] = {'bbox': bbox,\n                                    'difficult': difficult,\n                                    'det': det}\n\n        # read dets\n        detfile = detpath.format(classname)\n        with open(detfile, 'r') as f:\n            lines = f.readlines()\n        if any(lines):\n            splitlines = [x.strip().split(' ') for x in lines]\n            image_ids = [x[0] for x in splitlines]\n            confidence = np.array([float(x[1]) for x in splitlines])\n            BB = np.array([[float(z) for z in x[2:]] for x in splitlines])\n\n            # sort by confidence\n            sorted_ind = np.argsort(-confidence)\n            BB = BB[sorted_ind, :]\n            image_ids = [image_ids[x] for x in sorted_ind]\n\n            # go down dets and mark TPs and FPs\n            nd = len(image_ids)\n            tp = np.zeros(nd)\n            fp = np.zeros(nd)\n            for d in range(nd):\n                R = class_recs[image_ids[d]]\n                bb = BB[d, :].astype(float)\n                ovmax = -np.inf\n                BBGT = R['bbox'].astype(float)\n                if BBGT.size > 0:\n                    # compute overlaps\n                    # intersection\n                    ixmin = np.maximum(BBGT[:, 0], bb[0])\n                    iymin = np.maximum(BBGT[:, 1], bb[1])\n                    ixmax = np.minimum(BBGT[:, 2], bb[2])\n                    iymax = np.minimum(BBGT[:, 3], bb[3])\n                    iw = np.maximum(ixmax - ixmin, 0.)\n                    ih = np.maximum(iymax - iymin, 0.)\n                    inters = iw * ih\n                    uni = ((bb[2] - bb[0]) * (bb[3] - bb[1]) +\n                        (BBGT[:, 2] - BBGT[:, 0]) *\n                        (BBGT[:, 3] - BBGT[:, 1]) - inters)\n                    overlaps = inters / uni\n                    ovmax = np.max(overlaps)\n                    jmax = np.argmax(overlaps)\n\n                if ovmax > ovthresh:\n                    if not R['difficult'][jmax]:\n                        if not R['det'][jmax]:\n                            tp[d] = 1.\n                            R['det'][jmax] = 1\n                        else:\n                            fp[d] = 1.\n                else:\n                    fp[d] = 1.\n\n            # compute precision recall\n            fp = np.cumsum(fp)\n            tp = np.cumsum(tp)\n            rec = tp / float(npos)\n            # avoid divide by zero in case the first detection matches a difficult\n            # ground truth\n            prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)\n            ap = self.voc_ap(rec, prec, use_07_metric)\n        else:\n            rec = -1.\n            prec = -1.\n            ap = -1.\n\n        return rec, prec, ap\n\n\n    def evaluate_detections(self, box_list):\n        self.write_voc_results_file(box_list)\n        self.do_python_eval()\n\n\nif __name__ == '__main__':\n    pass"
  },
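  {
    "path": "examples/voc_ap_demo.py",
    "content": "# Illustrative sketch, not a file from the original repo: it runs the two AP\n# modes implemented by VOCAPIEvaluator.voc_ap on a tiny hand-made\n# precision/recall curve -- the 11-point VOC07 interpolation versus the exact\n# area under the precision envelope. voc_ap never touches self, so it is\n# called unbound here; the rec/prec arrays are made up.\nimport numpy as np\n\nfrom utils.vocapi_evaluator import VOCAPIEvaluator\n\n\nif __name__ == '__main__':\n    rec = np.array([0.1, 0.2, 0.4, 0.7])\n    prec = np.array([1.0, 0.8, 0.6, 0.5])\n    ap07 = VOCAPIEvaluator.voc_ap(None, rec, prec, use_07_metric=True)\n    ap10 = VOCAPIEvaluator.voc_ap(None, rec, prec, use_07_metric=False)\n    print('11-point AP: %.4f, area AP: %.4f' % (ap07, ap10))\n"
  },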
  {
    "path": "weights/README.md",
    "content": "# yolo-v2-v3 and tiny model\nHi, guys ! \n\nFor researchers in China, you can download them from BaiduYunDisk. \nThere are 5 models including yolo-v2, yolo-v3, yolo_v3_spp, slim-yolo-v2 and tiny-yolo-v3.\n\nThe link is as following: \n\nlink: https://pan.baidu.com/s/1rnmM8HGFzE2NTv6AkljJdg\n\npassword: 5c8h \n\n<!-- link: https://drive.google.com/open?id=1Yrxz2IW3nzMZiX6EvcTzayDlH8ju4ZFc -->\n\nI will upload all models to googledrive."
  }
]