[
  {
    "path": "LICENSE",
    "content": "                                Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, 
that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2021 KAKAO BRAIN Corp. All Rights Reserved.\n   \n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n\n\nDeformable DETR\n\nCopyright 2020 SenseTime\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\n\nDETR\n\nCopyright 2020 - present, Facebook, Inc\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "NOTICE",
    "content": "===============================================================================\nDeformable DETR's Apache License 2.0\n===============================================================================\nThe overall structure of the code is based on the implementation in \nDeformable-DETR(https://github.com/fundamentalvision/Deformable-DETR).\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nCopyright (c) 2020 SenseTime\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n===============================================================================\nDETR's Apache License 2.0\n===============================================================================\nDeformable DETR code is orginally built on the implementation in DETR\n(https://github.com/facebookresearch/detr).\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nCopyright (c) 2020 Facebook, Inc\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the 
License.\n\n\n===============================================================================\nSwin Transformer's MIT License\n===============================================================================\nThe transformer backbone is based on the implementation in Swin Transformer\n(https://github.com/microsoft/Swin-Transformer).\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nCopyright (c) 2021 Microsoft\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "[![KakaoBrain](https://img.shields.io/badge/kakao-brain-ffcd00.svg)](http://kakaobrain.com/)\n[![pytorch](https://img.shields.io/badge/pytorch-1.6.0-%2523ee4c2c.svg)](https://pytorch.org/)\n[![pytorch](https://img.shields.io/badge/pytorch-1.7.1-%2523ee4c2c.svg)](https://pytorch.org/)\n\nSparse DETR (ICLR'22)\n========\n\nBy [Byungseok Roh](https://scholar.google.com/citations?user=H4VWYHwAAAAJ)\\*,  [Jaewoong Shin](https://scholar.google.com/citations?user=i_o_95kAAAAJ)\\*,  [Wuhyun Shin](https://scholar.google.com/citations?user=bGwfkakAAAAJ)\\*, and [Saehoon Kim](https://scholar.google.com/citations?user=_ZfueMIAAAAJ) at [Kakao Brain](https://www.kakaobrain.com).\n(*: Equal contribution)\n\n* This repository is an official implementation of the paper [Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity](https://arxiv.org/abs/2111.14330). \n* The code and some instructions are built upon the official [Deformable DETR repository](https://github.com/fundamentalvision/Deformable-DETR).\n\n\n\n# Introduction\n\n**TL; DR.** Sparse DETR is an efficient end-to-end object detector that **sparsifies encoder tokens** by using the learnable DAM(Decoder Attention Map) predictor. It achieves better performance than Deformable DETR even with only 10% encoder queries on the COCO dataset.\n\n<p align=\"center\">\n<img src=\"./figs/dam_creation.png\" height=350>\n</p>\n\n**Abstract.** DETR is the first end-to-end object detector using a transformer encoder-decoder architecture and demonstrates competitive performance but low computational efficiency on high resolution feature maps.\nThe subsequent work, Deformable DETR, enhances the efficiency of DETR by replacing dense attention with deformable attention, which achieves 10x faster convergence and improved performance. 
\nDeformable DETR uses multiscale features to improve performance; however, the number of encoder tokens increases by 20x compared to DETR, and the computational cost of the encoder attention remains a bottleneck.\nIn our preliminary experiment, we observe that the detection performance hardly deteriorates even if only a portion of the encoder tokens is updated.\nInspired by this observation, we propose *Sparse DETR*, which selectively updates only the tokens expected to be referenced by the decoder, thus helping the model detect objects effectively.\nIn addition, we show that applying an auxiliary detection loss to the selected tokens in the encoder improves the performance while minimizing computational overhead.\nWe validate that *Sparse DETR* achieves better performance than Deformable DETR even with only 10\% of the encoder tokens on the COCO dataset.\nAlthough only the encoder tokens are sparsified, the total computation cost decreases by 38\% and the frames per second (FPS) increases by 42\% compared to Deformable DETR.\n\n\n# Installation\n\n## Requirements\n\nWe have tested the code in the following environments:\n* Python 3.7.7 / PyTorch 1.6.0 / torchvision 0.7.0 / CUDA 10.1 / Ubuntu 18.04\n* Python 3.8.3 / PyTorch 1.7.1 / torchvision 0.8.2 / CUDA 11.1 / Ubuntu 18.04\n\nRun the following command to install dependencies:\n```bash\npip install -r requirements.txt\n```\n\n## Compiling CUDA operators\n```bash\ncd ./models/ops\nsh ./make.sh\n# unit test (should see all checking is True)\npython test.py\n```\n\n# Usage\n\n## Dataset preparation\n\nPlease download the [COCO 2017 dataset](https://cocodataset.org/) and organize the files as follows:\n\n```\ncode_root/\n└── data/\n    └── coco/\n        ├── train2017/\n        ├── val2017/\n        └── annotations/\n        \t├── instances_train2017.json\n        \t└── instances_val2017.json\n```\n\n## Training\n\n### Training on a single node\n\nFor example, the command for training Sparse DETR with a keeping ratio of 10% on 8 
GPUs is as follows:\n\n```bash\n$ GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 ./configs/swint_sparse_detr_rho_0.1.sh\n```\n\n### Training on multiple nodes\n\nFor example, the command for training Sparse DETR with a keeping ratio of 10% on 2 nodes, each with 8 GPUs, is as follows:\n\nOn node 1:\n\n```bash\n$ MASTER_ADDR=<IP address of node 1> NODE_RANK=0 GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 16 ./configs/swint_sparse_detr_rho_0.1.sh\n```\n\nOn node 2:\n\n```bash\n$ MASTER_ADDR=<IP address of node 1> NODE_RANK=1 GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 16 ./configs/swint_sparse_detr_rho_0.1.sh\n```\n\n### Direct argument control\n\n```bash\n# Deformable DETR (with bounding-box refinement and two-stage arguments, if desired)\n$ GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 python main.py --with_box_refine --two_stage\n# Efficient DETR (with the class-specific head as described in their paper)\n$ GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 python main.py --with_box_refine --two_stage --eff_query_init --eff_specific_head\n# Sparse DETR (with a keeping ratio of 10% and the encoder auxiliary loss)\n$ GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 python main.py --with_box_refine --two_stage --eff_query_init --eff_specific_head --rho 0.1 --use_enc_aux_loss\n```\n\n### Some tips to speed up training\n* If your file system is slow to read images, consider enabling the `--cache_mode` option to load the whole dataset into memory at the beginning of training.\n* You may increase the batch size to maximize GPU utilization, depending on your GPU memory, e.g., set `--batch_size 3` or `--batch_size 4`.\n\n## Evaluation\n\nYou can download a pre-trained Sparse DETR model (links are in the \"Main Results\" section) and then run the following command to evaluate it on the COCO 2017 validation set:\n\n```bash\n# Note that you should run the command with the corresponding configuration.\n$ ./configs/swint_sparse_detr_rho_0.1.sh --resume <path to pre-trained model> --eval\n```\n\nYou can also 
run distributed evaluation by using ```./tools/run_dist_launch.sh```.\n\n# Main Results\nThe tables below demonstrate the detection performance of Sparse DETR on the COCO 2017 validation set when using different backbones.\n* **Top-k**: sampling the top-k object queries instead of using the learned object queries (as in Efficient DETR).\n* **BBR**: performing bounding box refinement in the decoder block (as in Deformable DETR).\n* The **encoder auxiliary loss** proposed in our paper is only applied to Sparse DETR.\n* **FLOPs** and **FPS** are measured in the same way as in Deformable DETR.\n* Refer to **Table 1** in the paper for more details.\n\n\n\n## ResNet-50 backbone\n| Method             | Epochs | ρ   | Top-k & BBR | AP   | #Params(M) | GFLOPs | B4FPS | Download |\n|:------------------:|:------:|:---:|:-----------:|:----:|:----------:|:------:|:-----:|:--------:|\n| Faster R-CNN + FPN | 109    | N/A |             | 42.0 | 42M        | 180G   | 26    |          |\n| DETR               | 50     | N/A |             | 35.0 | 41M        | 86G    | 28    |          |\n| DETR               | 500    | N/A |             | 42.0 | 41M        | 86G    | 28    |          |\n| DETR-DC5           | 500    | N/A |             | 43.3 | 41M        | 187G   | 12    |          |\n| PnP-DETR           | 500    | 33% |             | 41.1 |            |        |       |          |\n|                    | 500    | 50% |             | 41.8 |            |        |       |          |\n| PnP-DETR-DC5       | 500    | 33% |             | 42.7 |            |        |       |          |\n|                    | 500    | 50% |             | 43.1 |            |        |       |          |\n| Deformable-DETR    | 50     | N/A |             | 43.9 | 39.8M      | 172.9G | 19.1  |          |\n|                    | 50     | N/A | o           | 46.0 | 40.8M      | 177.3G | 18.2  |          |\n| Sparse-DETR        | 50     | 10% | o           | 45.3 | 40.9M      | 105.4G | 26.5  | 
[link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_10.pth)     |\n|                    | 50     | 20% | o           | 45.6 | 40.9M      | 112.9G | 24.8  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_20.pth)     |\n|                    | 50     | 30% | o           | 46.0 | 40.9M      | 120.5G | 23.2  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_30.pth)     |\n|                    | 50     | 40% | o           | 46.2 | 40.9M      | 128.0G | 21.8  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_40.pth)     |\n|                    | 50     | 50% | o           | 46.3 | 40.9M      | 135.6G | 20.5  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_50.pth)     |\n\n\n\n## Swin-T backbone\n| Method          | Epochs | ρ   | Top-k & BBR | AP   | #Params(M) | GFLOPs | B4FPS | Download |\n|:---------------:|:------:|:---:|:-----------:|:----:|:----------:|:------:|:-----:|:--------:|\n| DETR            | 50     | N/A |             | 35.9 | 45.0M      | 91.6G  | 26.8  |          |\n| DETR            | 500    | N/A |             | 45.4 | 45.0M      | 91.6G  | 26.8  |          |\n| Deformable-DETR | 50     | N/A |             | 45.7 | 40.3M      | 180.4G | 15.9  |          |\n|                 | 50     | N/A | o           | 48.0 | 41.3M      | 184.8G | 15.4  |          |\n| Sparse-DETR     | 50     | 10% | o           | 48.2 | 41.4M      | 113.4G | 21.2  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_swint_10.pth)     |\n|                 | 50     | 20% | o           | 48.8 | 41.4M      | 121.0G | 20    | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_swint_20.pth)     |\n|                 | 50     | 30% | o           | 49.1 | 41.4M      | 128.5G | 18.9  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_swint_30.pth)     |\n|                 | 50     | 40% | o           | 49.2 | 41.4M      | 136.1G | 18    | 
[link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_swint_40.pth)     |\n|                 | 50     | 50% | o           | 49.3 | 41.4M      | 143.7G | 17.2  | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_swint_50.pth)     |\n\n\n## Initializing ResNet-50 backbone with SCRL\nThe performance of Sparse DETR can be further improved when the backbone network is initialized with `SCRL` ([Spatially Consistent Representation Learning](https://arxiv.org/abs/2103.06122)), which aims to learn dense representations in a self-supervised way, rather than with the default ImageNet-supervised initialization, denoted as `IN-sup` in the table below.\n* We obtained pre-trained weights from [Torchvision](https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html#sphx-glr-beginner-finetuning-torchvision-models-tutorial-py) for `IN-sup`, and from the [SCRL GitHub repository](https://github.com/kakaobrain/scrl) for `SCRL`.\n* To reproduce the `SCRL` results, add `--scrl_pretrained_path <downloaded_filepath>` to the training command.\n\n| Method      | ρ   | AP(IN-sup) | AP(SCRL) | AP(gain) | Download |\n|:-----------:|:---:|:-----------:|:--------:|:--------:|:--------:|\n| Sparse DETR | 10% | 45.3        | 46.9     | +1.6     | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_scrl_10.pth)     |\n|             | 20% | 45.6        | 47.2     | +1.7     | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_scrl_20.pth)     |\n|             | 30% | 46.0        | 47.4     | +1.4     | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_scrl_30.pth)     |\n|             | 40% | 46.2        | 47.7     | +1.5     | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_scrl_40.pth)     |\n|             | 50% | 46.3        | 47.9     | +1.6     | [link](https://twg.kakaocdn.net/brainrepo/sparse_detr/sparse_detr_r50_scrl_50.pth)     |\n\n\n# Citation\nIf you 
find Sparse DETR useful in your research, please consider citing:\n```bibtex\n@inproceedings{roh2022sparse,\n  title={Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity},\n  author={Roh, Byungseok and Shin, JaeWoong and Shin, Wuhyun and Kim, Saehoon},\n  booktitle={ICLR},\n  year={2022}\n}\n```\n\n# License\n\nThis project is released under the [Apache 2.0 license](./LICENSE).\nCopyright 2021 [Kakao Brain Corp](https://www.kakaobrain.com). All Rights Reserved.\n"
  },
  {
    "path": "configs/r50_deformable_detr.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/r50_deformable_detr\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/r50_efficient_detr.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/r50_efficient_detr\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/r50_sparse_detr_rho_0.1.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/r50_sparse_detr_0.1\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    --rho 0.1 \\\n    --use_enc_aux_loss \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/r50_sparse_detr_rho_0.2.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/r50_sparse_detr_0.2\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    --rho 0.2 \\\n    --use_enc_aux_loss \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/r50_sparse_detr_rho_0.3.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/r50_sparse_detr_0.3\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    --rho 0.3 \\\n    --use_enc_aux_loss \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/swint_deformable_detr.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/swint_deformable_detr\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --backbone swin-t \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/swint_efficient_detr.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/swint_efficient_detr\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --backbone swin-t \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/swint_sparse_detr_rho_0.1.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/swint_sparse_detr_0.1\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --backbone swin-t \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    --rho 0.1 \\\n    --use_enc_aux_loss \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/swint_sparse_detr_rho_0.2.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/swint_sparse_detr_0.2\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --backbone swin-t \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    --rho 0.2 \\\n    --use_enc_aux_loss \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "configs/swint_sparse_detr_rho_0.3.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nEXP_DIR=exps/swint_sparse_detr_0.3\nPY_ARGS=${@:1}\n\npython -u main.py \\\n    --output_dir ${EXP_DIR} \\\n    --backbone swin-t \\\n    --with_box_refine \\\n    --two_stage \\\n    --eff_query_init \\\n    --eff_specific_head \\\n    --rho 0.3 \\\n    --use_enc_aux_loss \\\n    ${PY_ARGS}\n"
  },
  {
    "path": "datasets/__init__.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\nimport torch.utils.data\nfrom .torchvision_datasets import CocoDetection\n\nfrom .coco import build as build_coco\n\n\ndef get_coco_api_from_dataset(dataset):\n    for _ in range(10):\n        # if isinstance(dataset, torchvision.datasets.CocoDetection):\n        #     break\n        if isinstance(dataset, torch.utils.data.Subset):\n            dataset = dataset.dataset\n    if isinstance(dataset, CocoDetection):\n        return dataset.coco\n\n\ndef build_dataset(image_set, args):\n    if args.dataset_file == 'coco':\n        return build_coco(image_set, args)\n    if args.dataset_file == 'coco_panoptic':\n        # to avoid making panopticapi required for coco\n        from .coco_panoptic import build as build_coco_panoptic\n        return build_coco_panoptic(image_set, args)\n    raise ValueError(f'dataset {args.dataset_file} not supported')\n"
  },
  {
    "path": "datasets/coco.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nCOCO dataset which returns image_id for evaluation.\n\nMostly copy-paste from https://github.com/pytorch/vision/blob/13b35ff/references/detection/coco_utils.py\n\"\"\"\nfrom pathlib import Path\n\nimport torch\nimport torch.utils.data\nfrom pycocotools import mask as coco_mask\n\nfrom .torchvision_datasets import CocoDetection as TvCocoDetection\nfrom util.misc import get_local_rank, get_local_size\nimport datasets.transforms as T\n\n\nclass CocoDetection(TvCocoDetection):\n    def __init__(self, img_folder, ann_file, transforms, return_masks, cache_mode=False, local_rank=0, local_size=1):\n        super(CocoDetection, self).__init__(img_folder, ann_file,\n                                            cache_mode=cache_mode, local_rank=local_rank, local_size=local_size)\n        self._transforms = transforms\n        self.prepare = ConvertCocoPolysToMask(return_masks)\n\n    def __getitem__(self, idx):\n        img, target = super(CocoDetection, self).__getitem__(idx)\n        image_id = self.ids[idx]\n        target = {'image_id': image_id, 'annotations': target}\n        img, target = self.prepare(img, target)\n        if self._transforms is not None:\n            img, target = self._transforms(img, target)\n        return img, target\n\n\ndef convert_coco_poly_to_mask(segmentations, height, width):\n    masks = []\n    for polygons in segmentations:\n        rles = coco_mask.frPyObjects(polygons, height, width)\n        mask = 
coco_mask.decode(rles)\n        if len(mask.shape) < 3:\n            mask = mask[..., None]\n        mask = torch.as_tensor(mask, dtype=torch.uint8)\n        mask = mask.any(dim=2)\n        masks.append(mask)\n    if masks:\n        masks = torch.stack(masks, dim=0)\n    else:\n        masks = torch.zeros((0, height, width), dtype=torch.uint8)\n    return masks\n\n\nclass ConvertCocoPolysToMask(object):\n    def __init__(self, return_masks=False):\n        self.return_masks = return_masks\n\n    def __call__(self, image, target):\n        w, h = image.size\n\n        image_id = target[\"image_id\"]\n        image_id = torch.tensor([image_id])\n\n        anno = target[\"annotations\"]\n\n        anno = [obj for obj in anno if 'iscrowd' not in obj or obj['iscrowd'] == 0]\n\n        boxes = [obj[\"bbox\"] for obj in anno]\n        # guard against no boxes via resizing\n        boxes = torch.as_tensor(boxes, dtype=torch.float32).reshape(-1, 4)\n        boxes[:, 2:] += boxes[:, :2]\n        boxes[:, 0::2].clamp_(min=0, max=w)\n        boxes[:, 1::2].clamp_(min=0, max=h)\n\n        classes = [obj[\"category_id\"] for obj in anno]\n        classes = torch.tensor(classes, dtype=torch.int64)\n\n        if self.return_masks:\n            segmentations = [obj[\"segmentation\"] for obj in anno]\n            masks = convert_coco_poly_to_mask(segmentations, h, w)\n\n        keypoints = None\n        if anno and \"keypoints\" in anno[0]:\n            keypoints = [obj[\"keypoints\"] for obj in anno]\n            keypoints = torch.as_tensor(keypoints, dtype=torch.float32)\n            num_keypoints = keypoints.shape[0]\n            if num_keypoints:\n                keypoints = keypoints.view(num_keypoints, -1, 3)\n\n        keep = (boxes[:, 3] > boxes[:, 1]) & (boxes[:, 2] > boxes[:, 0])\n        boxes = boxes[keep]\n        classes = classes[keep]\n        if self.return_masks:\n            masks = masks[keep]\n        if keypoints is not None:\n            keypoints = 
keypoints[keep]\n\n        target = {}\n        target[\"boxes\"] = boxes\n        target[\"labels\"] = classes\n        if self.return_masks:\n            target[\"masks\"] = masks\n        target[\"image_id\"] = image_id\n        if keypoints is not None:\n            target[\"keypoints\"] = keypoints\n\n        # for conversion to coco api\n        area = torch.tensor([obj[\"area\"] for obj in anno])\n        iscrowd = torch.tensor([obj[\"iscrowd\"] if \"iscrowd\" in obj else 0 for obj in anno])\n        target[\"area\"] = area[keep]\n        target[\"iscrowd\"] = iscrowd[keep]\n\n        target[\"orig_size\"] = torch.as_tensor([int(h), int(w)])\n        target[\"size\"] = torch.as_tensor([int(h), int(w)])\n\n        return image, target\n\n\ndef make_coco_transforms(image_set):\n\n    normalize = T.Compose([\n        T.ToTensor(),\n        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n    ])\n\n    scales = [480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800]\n\n    if image_set == 'train':\n        return T.Compose([\n            T.RandomHorizontalFlip(),\n            T.RandomSelect(\n                T.RandomResize(scales, max_size=1333),\n                T.Compose([\n                    T.RandomResize([400, 500, 600]),\n                    T.RandomSizeCrop(384, 600),\n                    T.RandomResize(scales, max_size=1333),\n                ])\n            ),\n            normalize,\n        ])\n\n    if image_set == 'val':\n        return T.Compose([\n            T.RandomResize([800], max_size=1333),\n            normalize,\n        ])\n\n    raise ValueError(f'unknown {image_set}')\n\n\ndef build(image_set, args):\n    root = Path(args.coco_path)\n    assert root.exists(), f'provided COCO path {root} does not exist'\n    mode = 'instances'\n    PATHS = {\n        \"train\": (root / \"train2017\", root / \"annotations\" / f'{mode}_train2017.json'),\n        \"val\": (root / \"val2017\", root / \"annotations\" / f'{mode}_val2017.json'),\n    
}\n\n    img_folder, ann_file = PATHS[image_set]\n    dataset = CocoDetection(img_folder, ann_file, transforms=make_coco_transforms(image_set), return_masks=args.masks,\n                            cache_mode=args.cache_mode, local_rank=get_local_rank(), local_size=get_local_size())\n    return dataset\n"
  },
  {
    "path": "datasets/coco_eval.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nCOCO evaluator that works in distributed mode.\n\nMostly copy-paste from https://github.com/pytorch/vision/blob/edfd5a7/references/detection/coco_eval.py\nThe difference is that there is less copy-pasting from pycocotools\nin the end of the file, as python3 can suppress prints with contextlib\n\"\"\"\nimport os\nimport contextlib\nimport copy\nimport numpy as np\nimport torch\n\nfrom pycocotools.cocoeval import COCOeval\nfrom pycocotools.coco import COCO\nimport pycocotools.mask as mask_util\n\nfrom util.misc import all_gather\n\n\nclass CocoEvaluator(object):\n    def __init__(self, coco_gt, iou_types):\n        assert isinstance(iou_types, (list, tuple))\n        coco_gt = copy.deepcopy(coco_gt)\n        self.coco_gt = coco_gt\n\n        self.iou_types = iou_types\n        self.coco_eval = {}\n        for iou_type in iou_types:\n            self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)\n\n        self.img_ids = []\n        self.eval_imgs = {k: [] for k in iou_types}\n\n    def update(self, predictions):\n        img_ids = list(np.unique(list(predictions.keys())))\n        self.img_ids.extend(img_ids)\n\n        for iou_type in self.iou_types:\n            results = self.prepare(predictions, iou_type)\n\n            # suppress pycocotools prints\n            with open(os.devnull, 'w') as devnull:\n                with contextlib.redirect_stdout(devnull):\n                    coco_dt = COCO.loadRes(self.coco_gt, 
results) if results else COCO()\n            coco_eval = self.coco_eval[iou_type]\n\n            coco_eval.cocoDt = coco_dt\n            coco_eval.params.imgIds = list(img_ids)\n            img_ids, eval_imgs = evaluate(coco_eval)\n\n            self.eval_imgs[iou_type].append(eval_imgs)\n\n    def synchronize_between_processes(self):\n        for iou_type in self.iou_types:\n            self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2)\n            create_common_coco_eval(self.coco_eval[iou_type], self.img_ids, self.eval_imgs[iou_type])\n\n    def accumulate(self):\n        for coco_eval in self.coco_eval.values():\n            coco_eval.accumulate()\n\n    def summarize(self):\n        for iou_type, coco_eval in self.coco_eval.items():\n            print(\"IoU metric: {}\".format(iou_type))\n            coco_eval.summarize()\n\n    def prepare(self, predictions, iou_type):\n        if iou_type == \"bbox\":\n            return self.prepare_for_coco_detection(predictions)\n        elif iou_type == \"segm\":\n            return self.prepare_for_coco_segmentation(predictions)\n        elif iou_type == \"keypoints\":\n            return self.prepare_for_coco_keypoint(predictions)\n        else:\n            raise ValueError(\"Unknown iou type {}\".format(iou_type))\n\n    def prepare_for_coco_detection(self, predictions):\n        coco_results = []\n        for original_id, prediction in predictions.items():\n            if len(prediction) == 0:\n                continue\n\n            boxes = prediction[\"boxes\"]\n            boxes = convert_to_xywh(boxes).tolist()\n            scores = prediction[\"scores\"].tolist()\n            labels = prediction[\"labels\"].tolist()\n\n            coco_results.extend(\n                [\n                    {\n                        \"image_id\": original_id,\n                        \"category_id\": labels[k],\n                        \"bbox\": box,\n                        \"score\": scores[k],\n         
           }\n                    for k, box in enumerate(boxes)\n                ]\n            )\n        return coco_results\n\n    def prepare_for_coco_segmentation(self, predictions):\n        coco_results = []\n        for original_id, prediction in predictions.items():\n            if len(prediction) == 0:\n                continue\n\n            scores = prediction[\"scores\"]\n            labels = prediction[\"labels\"]\n            masks = prediction[\"masks\"]\n\n            masks = masks > 0.5\n\n            scores = prediction[\"scores\"].tolist()\n            labels = prediction[\"labels\"].tolist()\n\n            rles = [\n                mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order=\"F\"))[0]\n                for mask in masks\n            ]\n            for rle in rles:\n                rle[\"counts\"] = rle[\"counts\"].decode(\"utf-8\")\n\n            coco_results.extend(\n                [\n                    {\n                        \"image_id\": original_id,\n                        \"category_id\": labels[k],\n                        \"segmentation\": rle,\n                        \"score\": scores[k],\n                    }\n                    for k, rle in enumerate(rles)\n                ]\n            )\n        return coco_results\n\n    def prepare_for_coco_keypoint(self, predictions):\n        coco_results = []\n        for original_id, prediction in predictions.items():\n            if len(prediction) == 0:\n                continue\n\n            boxes = prediction[\"boxes\"]\n            boxes = convert_to_xywh(boxes).tolist()\n            scores = prediction[\"scores\"].tolist()\n            labels = prediction[\"labels\"].tolist()\n            keypoints = prediction[\"keypoints\"]\n            keypoints = keypoints.flatten(start_dim=1).tolist()\n\n            coco_results.extend(\n                [\n                    {\n                        \"image_id\": original_id,\n                        
\"category_id\": labels[k],\n                        'keypoints': keypoint,\n                        \"score\": scores[k],\n                    }\n                    for k, keypoint in enumerate(keypoints)\n                ]\n            )\n        return coco_results\n\n\ndef convert_to_xywh(boxes):\n    xmin, ymin, xmax, ymax = boxes.unbind(1)\n    return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)\n\n\ndef merge(img_ids, eval_imgs):\n    all_img_ids = all_gather(img_ids)\n    all_eval_imgs = all_gather(eval_imgs)\n\n    merged_img_ids = []\n    for p in all_img_ids:\n        merged_img_ids.extend(p)\n\n    merged_eval_imgs = []\n    for p in all_eval_imgs:\n        merged_eval_imgs.append(p)\n\n    merged_img_ids = np.array(merged_img_ids)\n    merged_eval_imgs = np.concatenate(merged_eval_imgs, 2)\n\n    # keep only unique (and in sorted order) images\n    merged_img_ids, idx = np.unique(merged_img_ids, return_index=True)\n    merged_eval_imgs = merged_eval_imgs[..., idx]\n\n    return merged_img_ids, merged_eval_imgs\n\n\ndef create_common_coco_eval(coco_eval, img_ids, eval_imgs):\n    img_ids, eval_imgs = merge(img_ids, eval_imgs)\n    img_ids = list(img_ids)\n    eval_imgs = list(eval_imgs.flatten())\n\n    coco_eval.evalImgs = eval_imgs\n    coco_eval.params.imgIds = img_ids\n    coco_eval._paramsEval = copy.deepcopy(coco_eval.params)\n\n\n#################################################################\n# From pycocotools, just removed the prints and fixed\n# a Python3 bug about unicode not defined\n#################################################################\n\n\ndef evaluate(self):\n    '''\n    Run per image evaluation on given images and store results (a list of dict) in self.evalImgs\n    :return: None\n    '''\n    # tic = time.time()\n    # print('Running per image evaluation...')\n    p = self.params\n    # add backward compatibility if useSegm is specified in params\n    if p.useSegm is not None:\n        p.iouType = 'segm' 
if p.useSegm == 1 else 'bbox'\n        print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType))\n    # print('Evaluate annotation type *{}*'.format(p.iouType))\n    p.imgIds = list(np.unique(p.imgIds))\n    if p.useCats:\n        p.catIds = list(np.unique(p.catIds))\n    p.maxDets = sorted(p.maxDets)\n    self.params = p\n\n    self._prepare()\n    # loop through images, area range, max detection number\n    catIds = p.catIds if p.useCats else [-1]\n\n    if p.iouType == 'segm' or p.iouType == 'bbox':\n        computeIoU = self.computeIoU\n    elif p.iouType == 'keypoints':\n        computeIoU = self.computeOks\n    self.ious = {\n        (imgId, catId): computeIoU(imgId, catId)\n        for imgId in p.imgIds\n        for catId in catIds}\n\n    evaluateImg = self.evaluateImg\n    maxDet = p.maxDets[-1]\n    evalImgs = [\n        evaluateImg(imgId, catId, areaRng, maxDet)\n        for catId in catIds\n        for areaRng in p.areaRng\n        for imgId in p.imgIds\n    ]\n    # this is NOT in the pycocotools code, but could be done outside\n    evalImgs = np.asarray(evalImgs).reshape(len(catIds), len(p.areaRng), len(p.imgIds))\n    self._paramsEval = copy.deepcopy(self.params)\n    # toc = time.time()\n    # print('DONE (t={:0.2f}s).'.format(toc-tic))\n    return p.imgIds, evalImgs\n\n#################################################################\n# end of straight copy from pycocotools, just removing the prints\n#################################################################\n"
  },
  {
    "path": "datasets/coco_panoptic.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\nimport json\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nfrom PIL import Image\n\nfrom panopticapi.utils import rgb2id\nfrom util.box_ops import masks_to_boxes\n\nfrom .coco import make_coco_transforms\n\n\nclass CocoPanoptic:\n    def __init__(self, img_folder, ann_folder, ann_file, transforms=None, return_masks=True):\n        with open(ann_file, 'r') as f:\n            self.coco = json.load(f)\n\n        # sort 'images' field so that they are aligned with 'annotations'\n        # i.e., in alphabetical order\n        self.coco['images'] = sorted(self.coco['images'], key=lambda x: x['id'])\n        # sanity check\n        if \"annotations\" in self.coco:\n            for img, ann in zip(self.coco['images'], self.coco['annotations']):\n                assert img['file_name'][:-4] == ann['file_name'][:-4]\n\n        self.img_folder = img_folder\n        self.ann_folder = ann_folder\n        self.ann_file = ann_file\n        self.transforms = transforms\n        self.return_masks = return_masks\n\n    def __getitem__(self, idx):\n        ann_info = self.coco['annotations'][idx] if \"annotations\" in self.coco else self.coco['images'][idx]\n        img_path = Path(self.img_folder) / ann_info['file_name'].replace('.png', '.jpg')\n        ann_path = Path(self.ann_folder) / ann_info['file_name']\n\n        img = Image.open(img_path).convert('RGB')\n        w, h = img.size\n        if \"segments_info\" in ann_info:\n            
masks = np.asarray(Image.open(ann_path), dtype=np.uint32)\n            masks = rgb2id(masks)\n\n            ids = np.array([ann['id'] for ann in ann_info['segments_info']])\n            masks = masks == ids[:, None, None]\n\n            masks = torch.as_tensor(masks, dtype=torch.uint8)\n            labels = torch.tensor([ann['category_id'] for ann in ann_info['segments_info']], dtype=torch.int64)\n\n        target = {}\n        target['image_id'] = torch.tensor([ann_info['image_id'] if \"image_id\" in ann_info else ann_info[\"id\"]])\n        if self.return_masks:\n            target['masks'] = masks\n        target['labels'] = labels\n\n        target[\"boxes\"] = masks_to_boxes(masks)\n\n        target['size'] = torch.as_tensor([int(h), int(w)])\n        target['orig_size'] = torch.as_tensor([int(h), int(w)])\n        if \"segments_info\" in ann_info:\n            for name in ['iscrowd', 'area']:\n                target[name] = torch.tensor([ann[name] for ann in ann_info['segments_info']])\n\n        if self.transforms is not None:\n            img, target = self.transforms(img, target)\n\n        return img, target\n\n    def __len__(self):\n        return len(self.coco['images'])\n\n    def get_height_and_width(self, idx):\n        img_info = self.coco['images'][idx]\n        height = img_info['height']\n        width = img_info['width']\n        return height, width\n\n\ndef build(image_set, args):\n    img_folder_root = Path(args.coco_path)\n    ann_folder_root = Path(args.coco_panoptic_path)\n    assert img_folder_root.exists(), f'provided COCO path {img_folder_root} does not exist'\n    assert ann_folder_root.exists(), f'provided COCO path {ann_folder_root} does not exist'\n    mode = 'panoptic'\n    PATHS = {\n        \"train\": (\"train2017\", Path(\"annotations\") / f'{mode}_train2017.json'),\n        \"val\": (\"val2017\", Path(\"annotations\") / f'{mode}_val2017.json'),\n    }\n\n    img_folder, ann_file = PATHS[image_set]\n    img_folder_path = 
img_folder_root / img_folder\n    ann_folder = ann_folder_root / f'{mode}_{img_folder}'\n    ann_file = ann_folder_root / ann_file\n\n    dataset = CocoPanoptic(img_folder_path, ann_folder, ann_file,\n                           transforms=make_coco_transforms(image_set), return_masks=args.masks)\n\n    return dataset\n"
  },
  {
    "path": "datasets/data_prefetcher.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n\nimport torch\n\ndef to_cuda(samples, targets, device):\n    samples = samples.to(device, non_blocking=True)\n    targets = [{k: v.to(device, non_blocking=True) for k, v in t.items()} for t in targets]\n    return samples, targets\n\nclass data_prefetcher():\n    def __init__(self, loader, device, prefetch=True):\n        self.loader = iter(loader)\n        self.prefetch = prefetch\n        self.device = device\n        if prefetch:\n            self.stream = torch.cuda.Stream()\n            self.preload()\n\n    def preload(self):\n        try:\n            self.next_samples, self.next_targets = next(self.loader)\n        except StopIteration:\n            self.next_samples = None\n            self.next_targets = None\n            return\n        # if record_stream() doesn't work, another option is to make sure device inputs are created\n        # on the main stream.\n        # self.next_input_gpu = torch.empty_like(self.next_input, device='cuda')\n        # self.next_target_gpu = torch.empty_like(self.next_target, device='cuda')\n        # Need to make sure the memory allocated for next_* is not still in use by the main stream\n        # at the time we start copying to next_*:\n        # self.stream.wait_stream(torch.cuda.current_stream())\n        with torch.cuda.stream(self.stream):\n            self.next_samples, self.next_targets = to_cuda(self.next_samples, self.next_targets, self.device)\n            # more code for the alternative if record_stream() doesn't work:\n            # copy_ will record the use of the pinned source tensor in this side stream.\n            # self.next_input_gpu.copy_(self.next_input, non_blocking=True)\n            # 
self.next_target_gpu.copy_(self.next_target, non_blocking=True)\n            # self.next_input = self.next_input_gpu\n            # self.next_target = self.next_target_gpu\n\n            # With Amp, it isn't necessary to manually convert data to half.\n            # if args.fp16:\n            #     self.next_input = self.next_input.half()\n            # else:\n\n    def next(self):\n        if self.prefetch:\n            torch.cuda.current_stream().wait_stream(self.stream)\n            samples = self.next_samples\n            targets = self.next_targets\n            if samples is not None:\n                samples.record_stream(torch.cuda.current_stream())\n            if targets is not None:\n                for t in targets:\n                    for k, v in t.items():\n                        v.record_stream(torch.cuda.current_stream())\n            self.preload()\n        else:\n            try:\n                samples, targets = next(self.loader)\n                samples, targets = to_cuda(samples, targets, self.device)\n            except StopIteration:\n                samples = None\n                targets = None\n        return samples, targets\n"
  },
  {
    "path": "datasets/panoptic_eval.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\nimport json\nimport os\n\nimport util.misc as utils\n\ntry:\n    from panopticapi.evaluation import pq_compute\nexcept ImportError:\n    pass\n\n\nclass PanopticEvaluator(object):\n    def __init__(self, ann_file, ann_folder, output_dir=\"panoptic_eval\"):\n        self.gt_json = ann_file\n        self.gt_folder = ann_folder\n        if utils.is_main_process():\n            if not os.path.exists(output_dir):\n                os.mkdir(output_dir)\n        self.output_dir = output_dir\n        self.predictions = []\n\n    def update(self, predictions):\n        for p in predictions:\n            with open(os.path.join(self.output_dir, p[\"file_name\"]), \"wb\") as f:\n                f.write(p.pop(\"png_string\"))\n\n        self.predictions += predictions\n\n    def synchronize_between_processes(self):\n        all_predictions = utils.all_gather(self.predictions)\n        merged_predictions = []\n        for p in all_predictions:\n            merged_predictions += p\n        self.predictions = merged_predictions\n\n    def summarize(self):\n        if utils.is_main_process():\n            json_data = {\"annotations\": self.predictions}\n            predictions_json = os.path.join(self.output_dir, \"predictions.json\")\n            with open(predictions_json, \"w\") as f:\n                f.write(json.dumps(json_data))\n            return pq_compute(self.gt_json, predictions_json, gt_folder=self.gt_folder, pred_folder=self.output_dir)\n        
return None\n"
  },
  {
    "path": "datasets/samplers.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from codes in torch.utils.data.distributed\n# ------------------------------------------------------------------------\n\nimport os\nimport math\nimport torch\nimport torch.distributed as dist\nfrom torch.utils.data.sampler import Sampler\n\n\nclass DistributedSampler(Sampler):\n    \"\"\"Sampler that restricts data loading to a subset of the dataset.\n    It is especially useful in conjunction with\n    :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each\n    process can pass a DistributedSampler instance as a DataLoader sampler,\n    and load a subset of the original dataset that is exclusive to it.\n    .. note::\n        Dataset is assumed to be of constant size.\n    Arguments:\n        dataset: Dataset used for sampling.\n        num_replicas (optional): Number of processes participating in\n            distributed training.\n        rank (optional): Rank of the current process within num_replicas.\n    \"\"\"\n\n    def __init__(self, dataset, num_replicas=None, rank=None, local_rank=None, local_size=None, shuffle=True):\n        if num_replicas is None:\n            if not dist.is_available():\n                raise RuntimeError(\"Requires distributed package to be available\")\n            num_replicas = dist.get_world_size()\n        if rank is None:\n            if not dist.is_available():\n                raise RuntimeError(\"Requires distributed package to be available\")\n            rank = dist.get_rank()\n        self.dataset = dataset\n        self.num_replicas = num_replicas\n        self.rank = rank\n        self.epoch = 0\n        self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / 
self.num_replicas))\n        self.total_size = self.num_samples * self.num_replicas\n        self.shuffle = shuffle\n\n    def __iter__(self):\n        if self.shuffle:\n            # deterministically shuffle based on epoch\n            g = torch.Generator()\n            g.manual_seed(self.epoch)\n            indices = torch.randperm(len(self.dataset), generator=g).tolist()\n        else:\n            indices = torch.arange(len(self.dataset)).tolist()\n\n        # add extra samples to make it evenly divisible\n        indices += indices[: (self.total_size - len(indices))]\n        assert len(indices) == self.total_size\n\n        # subsample\n        offset = self.num_samples * self.rank\n        indices = indices[offset : offset + self.num_samples]\n        assert len(indices) == self.num_samples\n\n        return iter(indices)\n\n    def __len__(self):\n        return self.num_samples\n\n    def set_epoch(self, epoch):\n        self.epoch = epoch\n\n\nclass NodeDistributedSampler(Sampler):\n    \"\"\"Sampler that restricts data loading to a subset of the dataset.\n    It is especially useful in conjunction with\n    :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each\n    process can pass a DistributedSampler instance as a DataLoader sampler,\n    and load a subset of the original dataset that is exclusive to it.\n    .. 
note::\n        Dataset is assumed to be of constant size.\n    Arguments:\n        dataset: Dataset used for sampling.\n        num_replicas (optional): Number of processes participating in\n            distributed training.\n        rank (optional): Rank of the current process within num_replicas.\n    \"\"\"\n\n    def __init__(self, dataset, num_replicas=None, rank=None, local_rank=None, local_size=None, shuffle=True):\n        if num_replicas is None:\n            if not dist.is_available():\n                raise RuntimeError(\"Requires distributed package to be available\")\n            num_replicas = dist.get_world_size()\n        if rank is None:\n            if not dist.is_available():\n                raise RuntimeError(\"Requires distributed package to be available\")\n            rank = dist.get_rank()\n        if local_rank is None:\n            local_rank = int(os.environ.get('LOCAL_RANK', 0))\n        if local_size is None:\n            local_size = int(os.environ.get('LOCAL_SIZE', 1))\n        self.dataset = dataset\n        self.shuffle = shuffle\n        self.num_replicas = num_replicas\n        self.num_parts = local_size\n        self.rank = rank\n        self.local_rank = local_rank\n        self.epoch = 0\n        self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))\n        self.total_size = self.num_samples * self.num_replicas\n\n        self.total_size_parts = self.num_samples * self.num_replicas // self.num_parts\n\n    def __iter__(self):\n        if self.shuffle:\n            # deterministically shuffle based on epoch\n            g = torch.Generator()\n            g.manual_seed(self.epoch)\n            indices = torch.randperm(len(self.dataset), generator=g).tolist()\n        else:\n            indices = torch.arange(len(self.dataset)).tolist()\n        indices = [i for i in indices if i % self.num_parts == self.local_rank]\n\n        # add extra samples to make it evenly divisible\n        indices += 
indices[:(self.total_size_parts - len(indices))]\n        assert len(indices) == self.total_size_parts\n\n        # subsample\n        indices = indices[self.rank // self.num_parts:self.total_size_parts:self.num_replicas // self.num_parts]\n        assert len(indices) == self.num_samples\n\n        return iter(indices)\n\n    def __len__(self):\n        return self.num_samples\n\n    def set_epoch(self, epoch):\n        self.epoch = epoch\n"
  },
  {
    "path": "datasets/torchvision_datasets/__init__.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n\nfrom .coco import CocoDetection\n"
  },
  {
    "path": "datasets/torchvision_datasets/coco.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from torchvision\n# ------------------------------------------------------------------------\n\n\"\"\"\nCopied from torchvision, with an added utility for caching images in memory\n\"\"\"\nfrom torchvision.datasets.vision import VisionDataset\nfrom PIL import Image\nimport os\nimport os.path\nimport tqdm\nfrom io import BytesIO\n\n\nclass CocoDetection(VisionDataset):\n    \"\"\"`MS Coco Detection <http://mscoco.org/dataset/#detections-challenge2016>`_ Dataset.\n    Args:\n        root (string): Root directory where images are downloaded to.\n        annFile (string): Path to json annotation file.\n        transform (callable, optional): A function/transform that takes in a PIL image\n            and returns a transformed version. 
E.g., ``transforms.ToTensor``\n        target_transform (callable, optional): A function/transform that takes in the\n            target and transforms it.\n        transforms (callable, optional): A function/transform that takes an input sample and its target as input\n            and returns a transformed version.\n    \"\"\"\n\n    def __init__(self, root, annFile, transform=None, target_transform=None, transforms=None,\n                 cache_mode=False, local_rank=0, local_size=1):\n        super(CocoDetection, self).__init__(root, transforms, transform, target_transform)\n        from pycocotools.coco import COCO\n        self.coco = COCO(annFile)\n        self.ids = list(sorted(self.coco.imgs.keys()))\n        self.cache_mode = cache_mode\n        self.local_rank = local_rank\n        self.local_size = local_size\n        if cache_mode:\n            self.cache = {}\n            self.cache_images()\n\n    def cache_images(self):\n        self.cache = {}\n        for index, img_id in zip(tqdm.trange(len(self.ids)), self.ids):\n            if index % self.local_size != self.local_rank:\n                continue\n            path = self.coco.loadImgs(img_id)[0]['file_name']\n            with open(os.path.join(self.root, path), 'rb') as f:\n                self.cache[path] = f.read()\n\n    def get_image(self, path):\n        if self.cache_mode:\n            if path not in self.cache.keys():\n                with open(os.path.join(self.root, path), 'rb') as f:\n                    self.cache[path] = f.read()\n            return Image.open(BytesIO(self.cache[path])).convert('RGB')\n        return Image.open(os.path.join(self.root, path)).convert('RGB')\n\n    def __getitem__(self, index):\n        \"\"\"\n        Args:\n            index (int): Index\n        Returns:\n            tuple: Tuple (image, target). 
target is the object returned by ``coco.loadAnns``.\n        \"\"\"\n        coco = self.coco\n        img_id = self.ids[index]\n        ann_ids = coco.getAnnIds(imgIds=img_id)\n        target = coco.loadAnns(ann_ids)\n\n        path = coco.loadImgs(img_id)[0]['file_name']\n\n        img = self.get_image(path)\n        if self.transforms is not None:\n            img, target = self.transforms(img, target)\n\n        return img, target\n\n    def __len__(self):\n        return len(self.ids)\n"
  },
  {
    "path": "datasets/transforms.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nTransforms and data augmentation for both image + bbox.\n\"\"\"\nimport random\n\nimport PIL\nimport torch\nimport torchvision.transforms as T\nimport torchvision.transforms.functional as F\n\nfrom util.box_ops import box_xyxy_to_cxcywh\nfrom util.misc import interpolate\n\n\ndef crop(image, target, region):\n    cropped_image = F.crop(image, *region)\n\n    target = target.copy()\n    i, j, h, w = region\n\n    # should we do something wrt the original size?\n    target[\"size\"] = torch.tensor([h, w])\n\n    fields = [\"labels\", \"area\", \"iscrowd\"]\n\n    if \"boxes\" in target:\n        boxes = target[\"boxes\"]\n        max_size = torch.as_tensor([w, h], dtype=torch.float32)\n        cropped_boxes = boxes - torch.as_tensor([j, i, j, i])\n        cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)\n        cropped_boxes = cropped_boxes.clamp(min=0)\n        area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)\n        target[\"boxes\"] = cropped_boxes.reshape(-1, 4)\n        target[\"area\"] = area\n        fields.append(\"boxes\")\n\n    if \"masks\" in target:\n        # FIXME should we update the area here if there are no boxes?\n        target['masks'] = target['masks'][:, i:i + h, j:j + w]\n        fields.append(\"masks\")\n\n    # remove elements whose boxes or masks have zero area\n    if \"boxes\" in target or \"masks\" in target:\n        # favor boxes selection when 
defining which elements to keep\n        # this is compatible with previous implementation\n        if \"boxes\" in target:\n            cropped_boxes = target['boxes'].reshape(-1, 2, 2)\n            keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)\n        else:\n            keep = target['masks'].flatten(1).any(1)\n\n        for field in fields:\n            target[field] = target[field][keep]\n\n    return cropped_image, target\n\n\ndef hflip(image, target):\n    flipped_image = F.hflip(image)\n\n    w, h = image.size\n\n    target = target.copy()\n    if \"boxes\" in target:\n        boxes = target[\"boxes\"]\n        boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])\n        target[\"boxes\"] = boxes\n\n    if \"masks\" in target:\n        target['masks'] = target['masks'].flip(-1)\n\n    return flipped_image, target\n\n\ndef resize(image, target, size, max_size=None):\n    # size can be min_size (scalar) or (w, h) tuple\n\n    def get_size_with_aspect_ratio(image_size, size, max_size=None):\n        w, h = image_size\n        if max_size is not None:\n            min_original_size = float(min((w, h)))\n            max_original_size = float(max((w, h)))\n            if max_original_size / min_original_size * size > max_size:\n                size = int(round(max_size * min_original_size / max_original_size))\n\n        if (w <= h and w == size) or (h <= w and h == size):\n            return (h, w)\n\n        if w < h:\n            ow = size\n            oh = int(size * h / w)\n        else:\n            oh = size\n            ow = int(size * w / h)\n\n        return (oh, ow)\n\n    def get_size(image_size, size, max_size=None):\n        if isinstance(size, (list, tuple)):\n            return size[::-1]\n        else:\n            return get_size_with_aspect_ratio(image_size, size, max_size)\n\n    size = get_size(image.size, size, max_size)\n    rescaled_image = F.resize(image, size)\n\n    
if target is None:\n        return rescaled_image, None\n\n    ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))\n    ratio_width, ratio_height = ratios\n\n    target = target.copy()\n    if \"boxes\" in target:\n        boxes = target[\"boxes\"]\n        scaled_boxes = boxes * torch.as_tensor([ratio_width, ratio_height, ratio_width, ratio_height])\n        target[\"boxes\"] = scaled_boxes\n\n    if \"area\" in target:\n        area = target[\"area\"]\n        scaled_area = area * (ratio_width * ratio_height)\n        target[\"area\"] = scaled_area\n\n    h, w = size\n    target[\"size\"] = torch.tensor([h, w])\n\n    if \"masks\" in target:\n        target['masks'] = interpolate(\n            target['masks'][:, None].float(), size, mode=\"nearest\")[:, 0] > 0.5\n\n    return rescaled_image, target\n\n\ndef pad(image, target, padding):\n    # assumes that we only pad on the bottom and right edges\n    padded_image = F.pad(image, (0, 0, padding[0], padding[1]))\n    if target is None:\n        return padded_image, None\n    target = target.copy()\n    # should we do something wrt the original size?\n    # PIL size is (w, h); reverse it to store (h, w), consistent with the other \"size\" fields\n    target[\"size\"] = torch.tensor(padded_image.size[::-1])\n    if \"masks\" in target:\n        target['masks'] = torch.nn.functional.pad(target['masks'], (0, padding[0], 0, padding[1]))\n    return padded_image, target\n\n\nclass RandomCrop(object):\n    def __init__(self, size):\n        self.size = size\n\n    def __call__(self, img, target):\n        region = T.RandomCrop.get_params(img, self.size)\n        return crop(img, target, region)\n\n\nclass RandomSizeCrop(object):\n    def __init__(self, min_size: int, max_size: int):\n        self.min_size = min_size\n        self.max_size = max_size\n\n    def __call__(self, img: PIL.Image.Image, target: dict):\n        w = random.randint(self.min_size, min(img.width, self.max_size))\n        h = random.randint(self.min_size, min(img.height, self.max_size))\n        region = 
T.RandomCrop.get_params(img, [h, w])\n        return crop(img, target, region)\n\n\nclass CenterCrop(object):\n    def __init__(self, size):\n        self.size = size\n\n    def __call__(self, img, target):\n        image_width, image_height = img.size\n        crop_height, crop_width = self.size\n        crop_top = int(round((image_height - crop_height) / 2.))\n        crop_left = int(round((image_width - crop_width) / 2.))\n        return crop(img, target, (crop_top, crop_left, crop_height, crop_width))\n\n\nclass RandomHorizontalFlip(object):\n    def __init__(self, p=0.5):\n        self.p = p\n\n    def __call__(self, img, target):\n        if random.random() < self.p:\n            return hflip(img, target)\n        return img, target\n\n\nclass RandomResize(object):\n    def __init__(self, sizes, max_size=None):\n        assert isinstance(sizes, (list, tuple))\n        self.sizes = sizes\n        self.max_size = max_size\n\n    def __call__(self, img, target=None):\n        size = random.choice(self.sizes)\n        return resize(img, target, size, self.max_size)\n\n\nclass RandomPad(object):\n    def __init__(self, max_pad):\n        self.max_pad = max_pad\n\n    def __call__(self, img, target):\n        pad_x = random.randint(0, self.max_pad)\n        pad_y = random.randint(0, self.max_pad)\n        return pad(img, target, (pad_x, pad_y))\n\n\nclass RandomSelect(object):\n    \"\"\"\n    Randomly selects between transforms1 and transforms2,\n    with probability p for transforms1 and (1 - p) for transforms2\n    \"\"\"\n    def __init__(self, transforms1, transforms2, p=0.5):\n        self.transforms1 = transforms1\n        self.transforms2 = transforms2\n        self.p = p\n\n    def __call__(self, img, target):\n        if random.random() < self.p:\n            return self.transforms1(img, target)\n        return self.transforms2(img, target)\n\n\nclass ToTensor(object):\n    def __call__(self, img, target):\n        return F.to_tensor(img), 
target\n\n\nclass RandomErasing(object):\n\n    def __init__(self, *args, **kwargs):\n        self.eraser = T.RandomErasing(*args, **kwargs)\n\n    def __call__(self, img, target):\n        return self.eraser(img), target\n\n\nclass Normalize(object):\n    def __init__(self, mean, std):\n        self.mean = mean\n        self.std = std\n\n    def __call__(self, image, target=None):\n        image = F.normalize(image, mean=self.mean, std=self.std)\n        if target is None:\n            return image, None\n        target = target.copy()\n        h, w = image.shape[-2:]\n        if \"boxes\" in target:\n            boxes = target[\"boxes\"]\n            boxes = box_xyxy_to_cxcywh(boxes)\n            boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)\n            target[\"boxes\"] = boxes\n        return image, target\n\n\nclass Compose(object):\n    def __init__(self, transforms):\n        self.transforms = transforms\n\n    def __call__(self, image, target):\n        for t in self.transforms:\n            image, target = t(image, target)\n        return image, target\n\n    def __repr__(self):\n        format_string = self.__class__.__name__ + \"(\"\n        for t in self.transforms:\n            format_string += \"\\n\"\n            format_string += \"    {0}\".format(t)\n        format_string += \"\\n)\"\n        return format_string\n"
  },
  {
    "path": "engine.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\n\"\"\"\nTrain and eval functions used in main.py\n\"\"\"\nimport math\nimport os\nimport sys\nfrom typing import Iterable\n\nimport torch\nimport util.misc as utils\nfrom datasets.coco_eval import CocoEvaluator\nfrom datasets.panoptic_eval import PanopticEvaluator\nfrom datasets.data_prefetcher import data_prefetcher\n\nfrom util.misc import check_unused_parameters\n\n\ndef train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module,\n                    data_loader: Iterable, optimizer: torch.optim.Optimizer,\n                    device: torch.device, epoch: int, max_norm: float = 0, \n                    writer=None, total_iter=0):\n    model.train()\n    criterion.train()\n    metric_logger = utils.MetricLogger(delimiter=\"  \")\n    metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))\n    metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))\n    metric_logger.add_meter('grad_norm', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))\n    header = 'Epoch: [{}]'.format(epoch)\n    print_freq = 10\n\n    prefetcher = data_prefetcher(data_loader, device, prefetch=True)\n    samples, targets = 
prefetcher.next()\n\n    for i in metric_logger.log_every(range(len(data_loader)), print_freq, header):\n        outputs = model(samples)\n        loss_dict = criterion(outputs, targets)\n        weight_dict = criterion.weight_dict\n        losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict)\n\n        # reduce losses over all GPUs for logging purposes\n        loss_dict_reduced = utils.reduce_dict(loss_dict)\n        loss_dict_reduced_unscaled = {f'{k}_unscaled': v\n                                      for k, v in loss_dict_reduced.items()}\n        loss_dict_reduced_scaled = {k: v * weight_dict[k]\n                                    for k, v in loss_dict_reduced.items() if k in weight_dict}\n        losses_reduced_scaled = sum(loss_dict_reduced_scaled.values())\n\n        loss_value = losses_reduced_scaled.item()\n\n        if not math.isfinite(loss_value):\n            print(\"Loss is {}, stopping training\".format(loss_value))\n            print(loss_dict_reduced)\n            sys.exit(1)\n\n        optimizer.zero_grad()\n        losses.backward()\n\n        if i == 0:\n            check_unused_parameters(model, loss_dict, weight_dict)\n\n        if max_norm > 0:\n            grad_total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)\n        else:\n            grad_total_norm = utils.get_total_grad_norm(model.parameters(), max_norm)\n\n        metric_logger.update(loss=loss_value, **loss_dict_reduced_scaled, **loss_dict_reduced_unscaled)\n        metric_logger.update(class_error=loss_dict_reduced['class_error'])\n        metric_logger.update(lr=optimizer.param_groups[0][\"lr\"])\n        metric_logger.update(grad_norm=grad_total_norm)\n\n        optimizer.step()\n\n        if total_iter % (print_freq * 10) == 0 and utils.is_main_process() and writer is not None:\n            writer.add_scalar('train/loss', loss_value, total_iter)\n            
writer.add_scalar('train/class_error', loss_dict_reduced['class_error'], total_iter)\n            writer.add_scalar('lr', optimizer.param_groups[0][\"lr\"], total_iter)\n            writer.add_scalar('train/grad_norm', grad_total_norm, total_iter)\n            for key, value in loss_dict_reduced_scaled.items():\n                writer.add_scalar('train/'+key, value, total_iter)\n            for key, value in loss_dict_reduced_unscaled.items():\n                if \"corr\" in key:\n                    writer.add_scalar('train/'+key, value, total_iter)\n\n        total_iter += 1\n        samples, targets = prefetcher.next()\n\n    # gather the stats from all processes\n    metric_logger.synchronize_between_processes()\n    print(\"Averaged stats:\", metric_logger)\n    return {k: meter.global_avg for k, meter in metric_logger.meters.items()}, total_iter\n\n\n@torch.no_grad()\ndef evaluate(model, criterion, postprocessors, data_loader, base_ds, device, args):\n    model.eval()\n    criterion.eval()\n\n    metric_logger = utils.MetricLogger(delimiter=\"  \")\n    metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))\n    header = 'Test:'\n\n    iou_types = tuple(k for k in ('segm', 'bbox') if k in postprocessors.keys())\n    coco_evaluator = CocoEvaluator(base_ds, iou_types)\n\n    panoptic_evaluator = None\n    if 'panoptic' in postprocessors.keys():\n        panoptic_evaluator = PanopticEvaluator(\n            data_loader.dataset.ann_file,\n            data_loader.dataset.ann_folder,\n            output_dir=os.path.join(args.output_dir, \"panoptic_eval\"),\n        )\n\n    for step, (samples, targets) in enumerate(metric_logger.log_every(data_loader, 10, header)):\n        samples = samples.to(device)\n        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]\n\n        outputs = model(samples)\n        loss_dict = criterion(outputs, targets)\n        weight_dict = criterion.weight_dict\n\n        # reduce 
losses over all GPUs for logging purposes\n        loss_dict_reduced = utils.reduce_dict(loss_dict)\n        loss_dict_reduced_scaled = {k: v * weight_dict[k]\n                                    for k, v in loss_dict_reduced.items() if k in weight_dict}\n        loss_dict_reduced_unscaled = {f'{k}_unscaled': v\n                                      for k, v in loss_dict_reduced.items()}\n        metric_logger.update(loss=sum(loss_dict_reduced_scaled.values()),\n                             **loss_dict_reduced_scaled,\n                             **loss_dict_reduced_unscaled)\n        metric_logger.update(class_error=loss_dict_reduced['class_error'])\n\n        orig_target_sizes = torch.stack([t[\"orig_size\"] for t in targets], dim=0)\n        results = postprocessors['bbox'](outputs, orig_target_sizes)\n        if 'segm' in postprocessors.keys():\n            target_sizes = torch.stack([t[\"size\"] for t in targets], dim=0)\n            results = postprocessors['segm'](results, outputs, orig_target_sizes, target_sizes)\n        res = {target['image_id'].item(): output for target, output in zip(targets, results)}\n        if coco_evaluator is not None:\n            coco_evaluator.update(res)\n\n        if panoptic_evaluator is not None:\n            res_pano = postprocessors[\"panoptic\"](outputs, target_sizes, orig_target_sizes)\n            for i, target in enumerate(targets):\n                image_id = target[\"image_id\"].item()\n                file_name = f\"{image_id:012d}.png\"\n                res_pano[i][\"image_id\"] = image_id\n                res_pano[i][\"file_name\"] = file_name\n\n            panoptic_evaluator.update(res_pano)\n\n\n\n    # gather the stats from all processes\n    metric_logger.synchronize_between_processes()\n    print(\"Averaged stats:\", metric_logger)\n    if coco_evaluator is not None:\n        coco_evaluator.synchronize_between_processes()\n    if panoptic_evaluator is not None:\n        
panoptic_evaluator.synchronize_between_processes()\n\n    # accumulate predictions from all images\n    if coco_evaluator is not None:\n        coco_evaluator.accumulate()\n        coco_evaluator.summarize()\n    panoptic_res = None\n    if panoptic_evaluator is not None:\n        panoptic_res = panoptic_evaluator.summarize()\n    stats = {k: meter.global_avg for k, meter in metric_logger.meters.items()}\n    if coco_evaluator is not None:\n        if 'bbox' in postprocessors.keys():\n            stats['coco_eval_bbox'] = coco_evaluator.coco_eval['bbox'].stats.tolist()\n        if 'segm' in postprocessors.keys():\n            stats['coco_eval_masks'] = coco_evaluator.coco_eval['segm'].stats.tolist()\n    if panoptic_res is not None:\n        stats['PQ_all'] = panoptic_res[\"All\"]\n        stats['PQ_th'] = panoptic_res[\"Things\"]\n        stats['PQ_st'] = panoptic_res[\"Stuff\"]\n    return stats, coco_evaluator\n"
  },
  {
    "path": "main.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\nimport argparse\nimport datetime\nimport json\nimport random\nimport time\nfrom tabulate import tabulate\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nfrom torch.utils.data import DataLoader, Subset\n\nimport datasets\nimport util.misc as utils\nimport datasets.samplers as samplers\nfrom datasets import build_dataset, get_coco_api_from_dataset\nfrom engine import evaluate, train_one_epoch\nfrom models import build_model\nfrom util.benchmark import compute_fps, compute_gflops\n\nfrom torch.utils.tensorboard import SummaryWriter\n\n\ndef get_args_parser():\n    parser = argparse.ArgumentParser('Deformable DETR Detector', add_help=False)\n    parser.add_argument('--lr', default=2e-4, type=float)\n    parser.add_argument('--lr_backbone_names', default=[\"backbone.0\"], type=str, nargs='+')\n    parser.add_argument('--lr_backbone', default=2e-5, type=float)\n    parser.add_argument('--lr_linear_proj_names', default=['reference_points', 'sampling_offsets'], type=str, nargs='+')\n    parser.add_argument('--lr_linear_proj_mult', default=0.1, type=float)\n    parser.add_argument('--batch_size', default=2, type=int)\n    parser.add_argument('--weight_decay', default=1e-4, 
type=float)\n    parser.add_argument('--epochs', default=50, type=int)\n    parser.add_argument('--lr_drop', default=40, type=int)\n    parser.add_argument('--lr_drop_epochs', default=None, type=int, nargs='+')\n    parser.add_argument('--clip_max_norm', default=0.1, type=float,\n                        help='gradient clipping max norm')\n\n\n    parser.add_argument('--sgd', action='store_true')\n\n    # Variants of Deformable DETR\n    parser.add_argument('--with_box_refine', default=False, action='store_true')\n    parser.add_argument('--two_stage', default=False, action='store_true')\n\n    # Model parameters\n    parser.add_argument('--frozen_weights', type=str, default=None,\n                        help=\"Path to the pretrained model. If set, only the mask head will be trained\")\n\n    # * Backbone\n    parser.add_argument('--backbone', default='resnet50', type=str,\n                        help=\"Name of the convolutional backbone to use\")\n    parser.add_argument('--dilation', action='store_true',\n                        help=\"If true, we replace stride with dilation in the last convolutional block (DC5)\")\n    parser.add_argument('--position_embedding', default='sine', type=str, choices=('sine', 'learned'),\n                        help=\"Type of positional embedding to use on top of the image features\")\n    parser.add_argument('--position_embedding_scale', default=2 * np.pi, type=float,\n                        help=\"position / size * scale\")\n    parser.add_argument('--num_feature_levels', default=4, type=int, help='number of feature levels')\n\n    # * Modified architecture\n    parser.add_argument('--backbone_from_scratch', default=False, action='store_true')\n    parser.add_argument('--finetune_early_layers', default=False, action='store_true')\n    parser.add_argument('--scrl_pretrained_path', default='', type=str)\n\n    # * Transformer\n    parser.add_argument('--enc_layers', default=6, type=int,\n                        help=\"Number of 
encoding layers in the transformer\")\n    parser.add_argument('--dec_layers', default=6, type=int,\n                        help=\"Number of decoding layers in the transformer\")\n    parser.add_argument('--dim_feedforward', default=1024, type=int,\n                        help=\"Intermediate size of the feedforward layers in the transformer blocks\")\n    parser.add_argument('--hidden_dim', default=256, type=int,\n                        help=\"Size of the embeddings (dimension of the transformer)\")\n    parser.add_argument('--dropout', default=0.1, type=float,\n                        help=\"Dropout applied in the transformer\")\n    parser.add_argument('--nheads', default=8, type=int,\n                        help=\"Number of attention heads inside the transformer's attentions\")\n    parser.add_argument('--num_queries', default=300, type=int,\n                        help=\"Number of query slots\")\n    parser.add_argument('--dec_n_points', default=4, type=int)\n    parser.add_argument('--enc_n_points', default=4, type=int)\n    \n    # * Efficient DETR\n    parser.add_argument('--eff_query_init', default=False, action='store_true')\n    parser.add_argument('--eff_specific_head', default=False, action='store_true')\n\n    # * Sparse DETR\n    parser.add_argument('--use_enc_aux_loss', default=False, action='store_true')\n    parser.add_argument('--rho', default=0., type=float)\n\n    # * Segmentation\n    parser.add_argument('--masks', action='store_true',\n                        help=\"Train segmentation head if the flag is provided\")\n\n    # Loss\n    parser.add_argument('--no_aux_loss', dest='aux_loss', action='store_false',\n                        help=\"Disables auxiliary decoding losses (loss at each layer)\")\n\n    # * Matcher\n    parser.add_argument('--set_cost_class', default=2, type=float,\n                        help=\"Class coefficient in the matching cost\")\n    parser.add_argument('--set_cost_bbox', default=5, type=float,\n                
        help=\"L1 box coefficient in the matching cost\")\n    parser.add_argument('--set_cost_giou', default=2, type=float,\n                        help=\"giou box coefficient in the matching cost\")\n\n    # * Loss coefficients\n    parser.add_argument('--mask_loss_coef', default=1, type=float)\n    parser.add_argument('--dice_loss_coef', default=1, type=float)\n    parser.add_argument('--cls_loss_coef', default=2, type=float)\n    parser.add_argument('--bbox_loss_coef', default=5, type=float)\n    parser.add_argument('--giou_loss_coef', default=2, type=float)\n    parser.add_argument('--mask_prediction_coef', default=1, type=float)\n    parser.add_argument('--focal_alpha', default=0.25, type=float)\n\n    # * dataset parameters\n    parser.add_argument('--dataset_file', default='coco')\n    parser.add_argument('--coco_path', default='./data/coco', type=str)\n    parser.add_argument('--coco_panoptic_path', type=str)\n    parser.add_argument('--remove_difficult', action='store_true')\n\n    parser.add_argument('--output_dir', default='',\n                        help='path where to save, empty for no saving')\n    parser.add_argument('--device', default='cuda',\n                        help='device to use for training / testing')\n    parser.add_argument('--seed', default=42, type=int)\n    parser.add_argument('--resume', default='', help='resume from checkpoint')\n    parser.add_argument('--start_epoch', default=0, type=int, metavar='N',\n                        help='start epoch')\n    parser.add_argument('--eval', action='store_true')\n    parser.add_argument('--num_workers', default=2, type=int)\n    parser.add_argument('--cache_mode', default=False, action='store_true', help='whether to cache images on memory')\n    \n    # * benchmark\n    parser.add_argument('--approx_benchmark_only', default=False, action='store_true')\n    parser.add_argument('--benchmark_only', default=False, action='store_true')\n    parser.add_argument('--no_benchmark', 
dest='benchmark', action='store_false')\n\n    return parser\n\n\ndef main(args):\n    utils.init_distributed_mode(args)\n    print(\"git:\\n  {}\\n\".format(utils.get_sha()))\n\n    if args.frozen_weights is not None:\n        assert args.masks, \"Frozen training is meant for segmentation only\"\n    print(args)\n\n    device = torch.device(args.device)\n\n    # fix the seed for reproducibility\n    seed = args.seed + utils.get_rank()\n    torch.manual_seed(seed)\n    np.random.seed(seed)\n    random.seed(seed)\n\n    model, criterion, postprocessors = build_model(args)\n    model.to(device)\n    model_without_ddp = model\n    \n    dataset_val_org = build_dataset(image_set='val', args=args)\n    \n    if args.approx_benchmark_only or args.benchmark_only:\n        assert not args.distributed and args.benchmark\n    \n    if utils.is_main_process() and args.benchmark:\n        n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n        if args.benchmark_only:\n            gflops = compute_gflops(model, dataset_val_org, approximated=False)\n        else:\n            gflops = compute_gflops(model, dataset_val_org, approximated=True)\n        fps = compute_fps(model, dataset_val_org, num_iters=20, batch_size=1)\n        bfps = compute_fps(model, dataset_val_org, num_iters=20, batch_size=4)\n        tab_keys = [\"#Params(M)\", \"GFLOPs\", \"FPS\", \"B4FPS\"]\n        tab_vals = [n_params / 10 ** 6, gflops, fps, bfps]\n        table = tabulate([tab_vals], headers=tab_keys, tablefmt=\"pipe\",\n                        floatfmt=\".3f\", stralign=\"center\", numalign=\"center\")\n        print(\"===== Benchmark (Crude Approx.) 
=====\\n\" + table)\n        \n    if args.approx_benchmark_only or args.benchmark_only:\n        import sys; sys.exit()\n            \n    if args.distributed:\n        # wait for benchmark in the main process\n        torch.distributed.barrier()\n        \n    dataset_train = build_dataset(image_set='train', args=args)\n    dataset_val = dataset_val_org\n\n    if args.distributed:\n        if args.cache_mode:\n            sampler_train = samplers.NodeDistributedSampler(dataset_train)\n            sampler_val = samplers.NodeDistributedSampler(dataset_val, shuffle=False)\n        else:\n            sampler_train = samplers.DistributedSampler(dataset_train)\n            sampler_val = samplers.DistributedSampler(dataset_val, shuffle=False)\n    else:\n        sampler_train = torch.utils.data.RandomSampler(dataset_train)\n        sampler_val = torch.utils.data.SequentialSampler(dataset_val)\n\n    batch_sampler_train = torch.utils.data.BatchSampler(\n        sampler_train, args.batch_size, drop_last=True)\n\n    data_loader_train = DataLoader(dataset_train, batch_sampler=batch_sampler_train,\n                                   collate_fn=utils.collate_fn, num_workers=args.num_workers,\n                                   pin_memory=True)\n    data_loader_val = DataLoader(dataset_val, args.batch_size, sampler=sampler_val,\n                                 drop_last=False, collate_fn=utils.collate_fn, num_workers=args.num_workers,\n                                 pin_memory=True)\n\n    args = utils.scale_learning_rate(args)\n    def match_name_keywords(n, name_keywords):\n        out = False\n        for b in name_keywords:\n            if b in n:\n                out = True\n                break\n        return out\n\n    param_dicts = [\n        {\n            \"params\":\n                [p for n, p in model_without_ddp.named_parameters()\n                 if (not match_name_keywords(n, args.lr_backbone_names) \n                     and not match_name_keywords(n, 
args.lr_linear_proj_names) \n                     and p.requires_grad)],\n            \"lr\": args.lr,\n        },\n        {\n            \"params\": [p for n, p in model_without_ddp.named_parameters() \n                       if (match_name_keywords(n, args.lr_backbone_names) \n                           and not match_name_keywords(n, args.lr_linear_proj_names) \n                           and p.requires_grad)],\n            \"lr\": args.lr_backbone,\n        },\n        {\n            \"params\": [p for n, p in model_without_ddp.named_parameters() \n                       if match_name_keywords(n, args.lr_linear_proj_names) and p.requires_grad],\n            \"lr\": args.lr * args.lr_linear_proj_mult,\n        }\n    ]\n    if args.sgd:\n        optimizer = torch.optim.SGD(param_dicts, lr=args.lr, momentum=0.9,\n                                    weight_decay=args.weight_decay)\n    else:\n        optimizer = torch.optim.AdamW(param_dicts, lr=args.lr,\n                                      weight_decay=args.weight_decay)\n    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, args.lr_drop)\n\n    if args.distributed:\n        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu], \n                                                          find_unused_parameters=True)\n        model_without_ddp = model.module\n\n    if args.dataset_file == \"coco_panoptic\":\n        # We also evaluate AP during panoptic training, on original coco DS\n        coco_val = datasets.coco.build(\"val\", args)\n        base_ds = get_coco_api_from_dataset(coco_val)\n    else:\n        base_ds = get_coco_api_from_dataset(dataset_val)\n\n    if args.frozen_weights is not None:\n        checkpoint = torch.load(args.frozen_weights, map_location='cpu')\n        model_without_ddp.detr.load_state_dict(checkpoint['model'])\n\n    output_dir = Path(args.output_dir)\n    if args.resume:\n        if args.resume.startswith('https'):\n            checkpoint = 
torch.hub.load_state_dict_from_url(\n                args.resume, map_location='cpu', check_hash=True)\n        else:\n            checkpoint = torch.load(args.resume, map_location='cpu')\n        missing_keys, unexpected_keys = model_without_ddp.load_state_dict(checkpoint['model'], strict=False)\n        unexpected_keys = [k for k in unexpected_keys if not (k.endswith('total_params') or k.endswith('total_ops'))]\n        if len(missing_keys) > 0:\n            print('Missing Keys: {}'.format(missing_keys))\n        if len(unexpected_keys) > 0:\n            print('Unexpected Keys: {}'.format(unexpected_keys))\n        if not args.eval and 'optimizer' in checkpoint and 'lr_scheduler' in checkpoint and 'epoch' in checkpoint:\n            import copy\n            p_groups = copy.deepcopy(optimizer.param_groups)\n            optimizer.load_state_dict(checkpoint['optimizer'])\n            for pg, pg_old in zip(optimizer.param_groups, p_groups):\n                pg['lr'] = pg_old['lr']\n                pg['initial_lr'] = pg_old['initial_lr']\n            print(optimizer.param_groups)\n            lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n            # TODO: this is a hack for experiments that resume from a checkpoint\n            # and also modify the lr scheduler (e.g., decrease lr in advance).\n            args.override_resumed_lr_drop = True\n            if args.override_resumed_lr_drop:\n                print('Warning: (hack) args.override_resumed_lr_drop is set to True, '\n                      'so args.lr_drop will override lr_drop in the resumed lr_scheduler.')\n                lr_scheduler.step_size = args.lr_drop\n                lr_scheduler.base_lrs = list(map(lambda group: group['initial_lr'], optimizer.param_groups))\n            lr_scheduler.step(lr_scheduler.last_epoch)\n            args.start_epoch = checkpoint['epoch'] + 1\n        # check the resumed model\n        if not args.eval:\n            test_stats, coco_evaluator = evaluate(\n       
         model, criterion, postprocessors, data_loader_val, base_ds, device, args\n            )\n    \n    if args.eval:\n        print(\"Start evaluation\")\n        start_time = time.time()\n        test_stats, coco_evaluator = evaluate(model, criterion, postprocessors,\n                                              data_loader_val, base_ds, device, args)\n        if args.output_dir:\n            utils.save_on_master(coco_evaluator.coco_eval[\"bbox\"].eval, output_dir / \"eval.pth\")\n        print_final_result_on_master(model, dataset_val_org, args, test_stats, start_time)\n        return\n\n    if utils.is_main_process():\n        writer = SummaryWriter(output_dir)\n    else:\n        writer = None\n    total_iter = 0\n    \n    print(\"Start training\")\n    start_time = time.time()\n    for epoch in range(args.start_epoch, args.epochs):\n        if args.distributed:\n            sampler_train.set_epoch(epoch)\n        train_stats, total_iter = train_one_epoch(\n            model, criterion, data_loader_train, optimizer, device, epoch, args.clip_max_norm, writer, total_iter)\n        lr_scheduler.step()\n        if args.output_dir:\n            checkpoint_paths = [output_dir / 'checkpoint.pth']\n            # extra checkpoint before LR drop and every 5 epochs\n            if (epoch + 1) % args.lr_drop == 0 or (epoch + 1) % 5 == 0:\n                checkpoint_paths.append(output_dir / f'checkpoint{epoch:04}.pth')\n            for checkpoint_path in checkpoint_paths:\n                utils.save_on_master({\n                    'model': model_without_ddp.state_dict(),\n                    'optimizer': optimizer.state_dict(),\n                    'lr_scheduler': lr_scheduler.state_dict(),\n                    'epoch': epoch,\n                    'args': args,\n                }, checkpoint_path)\n\n        test_stats, coco_evaluator = evaluate(\n            model, criterion, postprocessors, data_loader_val, base_ds, device, args\n        )\n\n        # write test 
status\n        if utils.is_main_process():\n            writer.add_scalar('test/AP', test_stats['coco_eval_bbox'][0], epoch)\n            writer.add_scalar('test/AP50', test_stats['coco_eval_bbox'][1], epoch)\n            writer.add_scalar('test/AP75', test_stats['coco_eval_bbox'][2], epoch)\n            writer.add_scalar('test/APs', test_stats['coco_eval_bbox'][3], epoch)\n            writer.add_scalar('test/APm', test_stats['coco_eval_bbox'][4], epoch)\n            writer.add_scalar('test/APl', test_stats['coco_eval_bbox'][5], epoch)\n            writer.add_scalar('test/class_error', test_stats['class_error'], epoch)\n            writer.add_scalar('test/loss', test_stats['loss'], epoch)\n            writer.add_scalar('test/loss_ce', test_stats['loss_ce'], epoch)\n            writer.add_scalar('test/loss_bbox', test_stats['loss_bbox'], epoch)\n            writer.add_scalar('test/loss_giou', test_stats['loss_giou'], epoch)\n            for key, value in test_stats.items():\n                if \"corr\" in key:\n                    writer.add_scalar('test/'+key, value, epoch)\n\n        log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},\n                     **{f'test_{k}': v for k, v in test_stats.items()},\n                     'epoch': epoch}\n\n        if args.output_dir and utils.is_main_process():\n            if args.benchmark:\n                log_stats.update({'params': n_params, 'gflops': gflops, 'fps': fps, 'bfps': bfps})\n\n            with (output_dir / \"log.txt\").open(\"a\") as f:\n                f.write(json.dumps(log_stats) + \"\\n\")\n\n            # for evaluation logs\n            if coco_evaluator is not None:\n                (output_dir / 'eval').mkdir(exist_ok=True)\n                if \"bbox\" in coco_evaluator.coco_eval:\n                    filenames = ['latest.pth']\n                    if epoch % 50 == 0:\n                        filenames.append(f'{epoch:03}.pth')\n                    for name in filenames:\n            
            torch.save(coco_evaluator.coco_eval[\"bbox\"].eval,\n                                   output_dir / \"eval\" / name)\n        \n    print_final_result_on_master(model, dataset_val_org, args, test_stats, start_time)\n    \n\ndef print_final_result_on_master(model, dataset_val, args, test_stats, start_time=None):   \n    if not utils.is_main_process():\n        return False\n    \n    # training wallclock-time / gpus-hours\n    num_gpus = args.world_size if args.distributed else 1\n    if start_time is not None:\n        total_time = time.time() - start_time\n        gpu_hours = total_time / 3600 * num_gpus\n        gpu_hours_per_epoch = gpu_hours / args.epochs\n        total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n    else:\n        total_time_str, gpu_hours, gpu_hours_per_epoch = [\"N/A\"] * 3\n\n    # make result table\n    now = datetime.datetime.now().strftime(\"%h%d %H:%M\")\n    tab_keys =  [\"Time\", \"output_dir\", \"epochs\", \"bsz\", \"#GPUs\"]\n    tab_vals =  [now, Path(args.output_dir), args.epochs, int(args.batch_size * num_gpus), num_gpus]\n    tab_keys += [\"AP\", \"AP50\", \"AP75\", \"APs\", \"APm\", \"APl\"]\n    tab_vals += [v * 100 for v in test_stats['coco_eval_bbox'][:6]]\n\n    tab_keys += [\"E/T\", \"GPU*hrs\", \"GPU*hrs/ep\"]\n    tab_vals += [total_time_str, gpu_hours, gpu_hours_per_epoch]\n    \n    # add benchmark\n    if args.benchmark:\n        gflops = compute_gflops(model, dataset_val, approximated=False)\n        fps = compute_fps(model, dataset_val, num_iters=300, batch_size=1)\n        bfps = compute_fps(model, dataset_val, num_iters=300, batch_size=4)\n        tab_keys += ['GFLOPs', 'FPS', 'B4FPS']\n        tab_vals += [gflops, fps, bfps]\n        \n    table = tabulate([tab_vals], headers=tab_keys, tablefmt=\"pipe\",\n                     floatfmt=\".3f\", stralign=\"center\", numalign=\"center\")\n    \n    # dump to the file\n    with open(\"log_result.txt\", \"a\") as f:\n        
f.write(\"\\n\" + table + \"\\n\")\n            \n    print(f\"Save the final result to ./log_result.txt\\n{table}\")\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser('Sparse DETR training and evaluation script', parents=[get_args_parser()])\n    args = parser.parse_args()\n    if args.output_dir:\n        Path(args.output_dir).mkdir(parents=True, exist_ok=True)\n    main(args)\n"
  },
  {
    "path": "models/__init__.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\nfrom .deformable_detr import build\n\n\ndef build_model(args):\n    return build(args)\n\n"
  },
  {
    "path": "models/backbone.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\n\"\"\"\nBackbone modules.\n\"\"\"\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn.functional as F\nimport torchvision\nfrom torch import nn\nfrom torchvision.models._utils import IntermediateLayerGetter\nfrom typing import Dict, List\n\nfrom models import swin_transformer\nfrom util.misc import NestedTensor, is_main_process\n\nfrom .position_encoding import build_position_encoding\n\n\nclass FrozenBatchNorm2d(torch.nn.Module):\n    \"\"\"\n    BatchNorm2d where the batch statistics and the affine parameters are fixed.\n\n    Copy-paste from torchvision.misc.ops with added eps before rsqrt,\n    without which any other models than torchvision.models.resnet[18,34,50,101]\n    produce nans.\n    \"\"\"\n\n    def __init__(self, n, eps=1e-5):\n        super(FrozenBatchNorm2d, self).__init__()\n        self.register_buffer(\"weight\", torch.ones(n))\n        self.register_buffer(\"bias\", torch.zeros(n))\n        self.register_buffer(\"running_mean\", torch.zeros(n))\n        self.register_buffer(\"running_var\", torch.ones(n))\n        self.eps = eps\n\n    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,\n                              
missing_keys, unexpected_keys, error_msgs):\n        num_batches_tracked_key = prefix + 'num_batches_tracked'\n        if num_batches_tracked_key in state_dict:\n            del state_dict[num_batches_tracked_key]\n\n        super(FrozenBatchNorm2d, self)._load_from_state_dict(\n            state_dict, prefix, local_metadata, strict,\n            missing_keys, unexpected_keys, error_msgs)\n\n    def forward(self, x):\n        # move reshapes to the beginning\n        # to make it fuser-friendly\n        w = self.weight.reshape(1, -1, 1, 1)\n        b = self.bias.reshape(1, -1, 1, 1)\n        rv = self.running_var.reshape(1, -1, 1, 1)\n        rm = self.running_mean.reshape(1, -1, 1, 1)\n        eps = self.eps\n        scale = w * (rv + eps).rsqrt()\n        bias = b - rm * scale\n        return x * scale + bias\n\n\nclass BackboneBase(nn.Module):\n\n    def __init__(self, backbone: nn.Module, train_backbone: bool, return_interm_layers: bool, args):\n        # TODO: args -> duplicated args\n        super().__init__()\n        if 'none' in args.backbone:\n            self.strides = [1]  # not used, actually (length only matters)  \n            self.num_channels = [3]\n            return_layers = self.get_return_layers('identity', (0,))\n            self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n\n        elif 'resnet' in args.backbone:\n            \n            if not args.backbone_from_scratch and not args.finetune_early_layers:\n                print(\"Freeze early layers.\")\n                for name, parameter in backbone.named_parameters():\n                    if not train_backbone or all([k not in name for k in ['layer2', 'layer3', 'layer4']]):\n                        parameter.requires_grad_(False)\n            else:\n                print('Finetune early layers as well.')\n                    \n            layer_name = \"layer\"\n            if return_interm_layers:\n                return_layers = 
self.get_return_layers(layer_name, (2, 3, 4))\n                self.strides = [8, 16, 32]\n                self.num_channels = [512, 1024, 2048]\n            else:\n                return_layers = self.get_return_layers(layer_name, (4,))\n                self.strides = [32]\n                self.num_channels = [2048]\n            self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n                \n        elif 'swin' in args.backbone:\n            # compute channel dims up front so both branches below can use them\n            num_channels = [int(backbone.embed_dim * 2 ** i) for i in range(backbone.num_layers)]\n            if return_interm_layers:\n                return_layers = [2, 3, 4]\n                self.strides = [8, 16, 32]\n                self.num_channels = num_channels[1:]\n            else:\n                return_layers = [4]\n                self.strides = [32]\n                self.num_channels = [num_channels[-1]]\n            self.body = backbone\n                \n        else:\n            raise ValueError(f\"Unknown backbone name: {args.backbone}\")\n        \n    @staticmethod\n    def get_return_layers(name: str, layer_ids):\n        return {name + str(n): str(i) for i, n in enumerate(layer_ids)}\n\n    def forward(self, tensor_list: NestedTensor):\n        xs = self.body(tensor_list.tensors)\n        out: Dict[str, NestedTensor] = {}\n        for name, x in xs.items():\n            m = tensor_list.mask\n            assert m is not None\n            mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0]\n            out[name] = NestedTensor(x, mask)\n        return out\n    \n    \nclass DummyBackbone(torch.nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.identity0 = torch.nn.Identity()\n\n\nclass Backbone(BackboneBase):\n    \"\"\"ResNet backbone with frozen BatchNorm.\"\"\"\n    def __init__(self, name: str,\n                 train_backbone: bool,\n                 return_interm_layers: bool,\n                 dilation: bool,\n                
 args):\n        print(f\"Backbone: {name}\")\n        pretrained = is_main_process() and not args.backbone_from_scratch and not args.scrl_pretrained_path\n        if not pretrained:\n            print(\"Train backbone from scratch.\")\n        else:\n            print(\"Load pretrained weights\")\n        \n        if \"none\" in name:\n            backbone = DummyBackbone()\n        elif \"resnet\" in name:\n            assert name not in (\"resnet18\", \"resnet34\"), \"number of channels are hard coded\"\n            backbone = getattr(torchvision.models, name)(\n                replace_stride_with_dilation=[False, False, dilation],\n                pretrained=pretrained, norm_layer=FrozenBatchNorm2d)\n        elif \"swin\" in name:\n            assert not dilation, \"not supported\"\n            if not args.backbone_from_scratch and not args.finetune_early_layers:\n                print(\"Freeze early layers.\")\n                frozen_stages = 2\n            else:\n                print('Finetune early layers as well.')\n                frozen_stages = -1\n            if return_interm_layers:\n                out_indices = [1, 2, 3]\n            else:\n                out_indices = [3]\n                \n            backbone = swin_transformer.build_model(\n                name, out_indices=out_indices, frozen_stages=frozen_stages, pretrained=pretrained)\n        else:\n            raise ValueError(f\"Unknown backbone name: {args.backbone}\")\n            \n        if args.scrl_pretrained_path:\n            assert \"resnet\" in name, \"Currently only resnet50 is available.\"\n            ckpt = torch.load(args.scrl_pretrained_path, map_location=\"cpu\")\n            translate_map = {\n                \"encoder.0\" : \"conv1\",\n                \"encoder.1\" : \"bn1\",\n                \"encoder.4\" : \"layer1\",\n                \"encoder.5\" : \"layer2\",\n                \"encoder.6\" : \"layer3\",\n                \"encoder.7\" : \"layer4\",\n            
}\n            state_dict = {\n                translate_map[k[:9]] + k[9:] : v\n                for k, v in ckpt[\"online_network_state_dict\"].items()\n                if \"encoder\" in k\n            }\n            backbone.load_state_dict(state_dict, strict=False)\n        \n        super().__init__(backbone, train_backbone, return_interm_layers, args)\n        if dilation and \"resnet\" in name:\n            self.strides[-1] = self.strides[-1] // 2\n\n\nclass Joiner(nn.Sequential):\n    def __init__(self, backbone, position_embedding):\n        super().__init__(backbone, position_embedding)\n        self.strides = backbone.strides\n        self.num_channels = backbone.num_channels\n\n    def forward(self, tensor_list: NestedTensor):\n        xs = self[0](tensor_list)\n        out: List[NestedTensor] = []\n        pos = []\n        for name, x in sorted(xs.items()):\n            out.append(x)\n\n        # position encoding\n        for x in out:\n            pos.append(self[1](x).to(x.tensors.dtype))\n\n        return out, pos\n    \n    \ndef test_backbone(backbone):\n    imgs = [\n        torch.randn(2, 3, 633, 122),\n        torch.randn(2, 3, 322, 532),\n        torch.randn(2, 3, 236, 42),\n    ]\n    return [backbone(img).shape for img in imgs]\n\n\ndef build_backbone(args):\n    # test_backbone(torchvision.models.resnet50())\n    position_embedding = build_position_encoding(args)\n    train_backbone = args.lr_backbone > 0\n    return_interm_layers = args.masks or (args.num_feature_levels > 1)\n    backbone = Backbone(args.backbone, train_backbone, return_interm_layers, args.dilation, args)\n    model = Joiner(backbone, position_embedding)\n    return model\n"
  },
  {
    "path": "models/deformable_detr.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\n\"\"\"\nDeformable DETR model and criterion classes.\n\"\"\"\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\nimport math\n\nfrom util import box_ops\nfrom util.misc import (NestedTensor, nested_tensor_from_tensor_list,\n                       accuracy, get_world_size, interpolate,\n                       is_dist_avail_and_initialized, inverse_sigmoid)\nfrom util.dam import idx_to_flat_grid, attn_map_to_flat_grid, compute_corr\n\nfrom .backbone import build_backbone\nfrom .matcher import build_matcher\nfrom .segmentation import (DETRsegm, PostProcessPanoptic, PostProcessSegm,\n                           dice_loss, sigmoid_focal_loss)\nfrom .deformable_transformer import build_deforamble_transformer\nimport copy\n\n\ndef _get_clones(module, N):\n    return nn.ModuleList([copy.deepcopy(module) for i in range(N)])\n\n\nclass DeformableDETR(nn.Module):\n    \"\"\" This is the Deformable DETR module that performs object detection \"\"\"\n    def __init__(self, backbone, transformer, num_classes, num_queries, num_feature_levels,\n                 aux_loss=True, with_box_refine=False, two_stage=False, args=None):\n        \"\"\" Initializes the model.\n        
Parameters:\n            backbone: torch module of the backbone to be used. See backbone.py\n            transformer: torch module of the transformer architecture. See transformer.py\n            num_classes: number of object classes\n            num_queries: number of object queries, i.e. detection slots. This is the maximal number of objects\n                         DETR can detect in a single image. For COCO, we recommend 100 queries.\n            aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.\n            with_box_refine: iterative bounding box refinement\n            two_stage: two-stage Deformable DETR\n        \"\"\"\n        super().__init__()\n        self.num_queries = num_queries\n        self.transformer = transformer\n        hidden_dim = transformer.d_model\n        self.class_embed = nn.Linear(hidden_dim, num_classes)\n        self.bbox_embed = MLP(hidden_dim, hidden_dim, output_dim=4, num_layers=3)\n        self.num_feature_levels = num_feature_levels\n        if not two_stage:\n            self.query_embed = nn.Embedding(num_queries, hidden_dim * 2)\n            # will be split into query_embed(query_pos) & tgt later\n        if num_feature_levels > 1:\n            num_backbone_outs = len(backbone.strides)\n            input_proj_list = []\n            for lvl in range(num_backbone_outs):\n                in_channels = backbone.num_channels[lvl]\n                input_proj_list.append(nn.Sequential(\n                    nn.Conv2d(in_channels, hidden_dim, kernel_size=1),\n                    nn.GroupNorm(32, hidden_dim),\n                ))\n            for _ in range(num_feature_levels - num_backbone_outs):\n                input_proj_list.append(nn.Sequential(\n                    nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1),\n                    nn.GroupNorm(32, hidden_dim),\n                ))\n                in_channels = hidden_dim\n            self.input_proj = 
nn.ModuleList(input_proj_list)\n        else:\n            self.input_proj = nn.ModuleList([\n                nn.Sequential(\n                    nn.Conv2d(backbone.num_channels[0], hidden_dim, kernel_size=1),\n                    nn.GroupNorm(32, hidden_dim),\n                )])\n        self.backbone = backbone\n        self.aux_loss = aux_loss\n        self.with_box_refine = with_box_refine\n        self.two_stage = two_stage\n\n        self.use_enc_aux_loss = args.use_enc_aux_loss\n        self.rho = args.rho\n\n        prior_prob = 0.01\n        bias_value = -math.log((1 - prior_prob) / prior_prob)\n        self.class_embed.bias.data = torch.ones(num_classes) * bias_value\n        nn.init.constant_(self.bbox_embed.layers[-1].weight.data, 0)\n        nn.init.constant_(self.bbox_embed.layers[-1].bias.data, 0)\n        for proj in self.input_proj:\n            nn.init.xavier_uniform_(proj[0].weight, gain=1)\n            nn.init.constant_(proj[0].bias, 0)\n \n        # hack implementation: a list of embedding heads (see the order)\n        # n: dec_layers / m: enc_layers\n        # [dec_0, dec_1, ..., dec_n-1, encoder, backbone, enc_0, enc_1, ..., enc_m-2]\n        \n        # at each layer of decoder (by default)\n        num_pred = transformer.decoder.num_layers\n        if self.two_stage:\n            # at the end of encoder\n            num_pred += 1  \n        if self.use_enc_aux_loss:\n            # at each layer of encoder (excl. 
the last)\n            num_pred += transformer.encoder.num_layers - 1  \n        \n        if with_box_refine or self.use_enc_aux_loss:\n            # individual heads with the same initialization\n            self.class_embed = _get_clones(self.class_embed, num_pred)\n            self.bbox_embed = _get_clones(self.bbox_embed, num_pred)\n            nn.init.constant_(self.bbox_embed[0].layers[-1].bias.data[2:], -2.0)\n        else:\n            # shared heads\n            nn.init.constant_(self.bbox_embed.layers[-1].bias.data[2:], -2.0)\n            self.class_embed = nn.ModuleList([self.class_embed for _ in range(num_pred)])\n            self.bbox_embed = nn.ModuleList([self.bbox_embed for _ in range(num_pred)])\n            \n        if two_stage:\n            # hack implementation\n            self.transformer.decoder.class_embed = self.class_embed\n            self.transformer.decoder.bbox_embed = self.bbox_embed            \n            for box_embed in self.transformer.decoder.bbox_embed:\n                nn.init.constant_(box_embed.layers[-1].bias.data[2:], 0.0)\n                \n        if self.use_enc_aux_loss:\n            # the output from the last layer should be specially treated as an input of decoder\n            num_layers_excluding_the_last = transformer.encoder.num_layers - 1\n            self.transformer.encoder.aux_heads = True\n            self.transformer.encoder.class_embed = self.class_embed[-num_layers_excluding_the_last:]\n            self.transformer.encoder.bbox_embed = self.bbox_embed[-num_layers_excluding_the_last:] \n            for box_embed in self.transformer.encoder.bbox_embed:\n                nn.init.constant_(box_embed.layers[-1].bias.data[2:], 0.0)\n\n    def forward(self, samples: NestedTensor):\n        \"\"\" The forward expects a NestedTensor, which consists of:\n               - samples.tensor: batched images, of shape [batch_size x 3 x H x W]\n               - samples.mask: a binary mask of shape [batch_size x H x W], 
containing 1 on padded pixels\n\n            It returns a dict with the following elements:\n               - \"pred_logits\": the classification logits (including no-object) for all queries.\n                                Shape= [batch_size x num_queries x (num_classes + 1)]\n               - \"pred_boxes\": The normalized box coordinates for all queries, represented as\n                               (center_x, center_y, width, height). These values are normalized in [0, 1],\n                               relative to the size of each individual image (disregarding possible padding).\n                               See PostProcess for information on how to retrieve the unnormalized bounding box.\n               - \"aux_outputs\": Optional, only returned when auxiliary losses are activated. It is a list of\n                                dictionaries containing the two above keys for each decoder layer.\n        \"\"\"\n        ###########\n        # Backbone\n        if not isinstance(samples, NestedTensor):\n            samples = nested_tensor_from_tensor_list(samples)\n        features, pos = self.backbone(samples)\n\n        srcs = []\n        masks = []\n        \n        # multi-scale features projected from ~C5 with 1x1 conv\n        for l, feat in enumerate(features):\n            src, mask = feat.decompose()\n            srcs.append(self.input_proj[l](src))\n            masks.append(mask)\n            assert mask is not None\n            \n        # multi-scale features smaller than C5 projected with 2 strided 3x3 convs\n        if self.num_feature_levels > len(srcs):\n            _len_srcs = len(srcs)\n            for l in range(_len_srcs, self.num_feature_levels):\n                if l == _len_srcs:\n                    # feature scale 1/32 \n                    src = self.input_proj[l](features[-1].tensors)\n                else:\n                    # feature scale <1/64: recursively downsample the last projection\n                    src = 
self.input_proj[l](srcs[-1])\n                m = samples.mask\n                mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0]\n                pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype)\n                srcs.append(src)\n                masks.append(mask)\n                pos.append(pos_l)\n\n        ###########\n        # Transformer encoder & decoder\n        query_embeds = None\n        if not self.two_stage:\n            query_embeds = self.query_embed.weight\n        (hs, init_reference, inter_references, \n         enc_outputs_class, enc_outputs_coord_unact, \n         backbone_mask_prediction,\n         enc_inter_outputs_class, enc_inter_outputs_coord, \n         sampling_locations_enc, attn_weights_enc, \n         sampling_locations_dec, attn_weights_dec,\n         backbone_topk_proposals, spatial_shapes, level_start_index) = \\\n            self.transformer(srcs, masks, pos, query_embeds)\n\n        ###########\n        # Detection heads\n        outputs_classes = []\n        outputs_coords = []\n        for lvl in range(len(hs)):\n            # lvl: level of decoding layer\n            outputs_class = self.class_embed[lvl](hs[lvl])\n            outputs_coord = self.bbox_embed[lvl](hs[lvl])\n            \n            assert init_reference is not None and inter_references is not None\n            if lvl == 0:\n                reference = init_reference\n            else:\n                reference = inter_references[lvl - 1]\n            reference = inverse_sigmoid(reference)\n            if reference.shape[-1] == 4:\n                outputs_coord += reference\n            else:\n                assert reference.shape[-1] == 2\n                outputs_coord[..., :2] += reference\n            \n            outputs_coord = outputs_coord.sigmoid()\n            outputs_classes.append(outputs_class)\n            outputs_coords.append(outputs_coord)\n            \n        outputs_class = 
torch.stack(outputs_classes)\n        outputs_coord = torch.stack(outputs_coords)\n\n        # the topmost layer output\n        out = {\n            \"pred_logits\": outputs_class[-1],\n            \"pred_boxes\": outputs_coord[-1],\n            \"sampling_locations_enc\": sampling_locations_enc,\n            \"attn_weights_enc\": attn_weights_enc,\n            \"sampling_locations_dec\": sampling_locations_dec,\n            \"attn_weights_dec\": attn_weights_dec,\n            \"spatial_shapes\": spatial_shapes,\n            \"level_start_index\": level_start_index,\n        }\n        if backbone_topk_proposals is not None:\n            out[\"backbone_topk_proposals\"] = backbone_topk_proposals\n        \n        if self.aux_loss:\n            # compute losses from every intermediate layer (excluding the last one)\n            out['aux_outputs'] = self._set_aux_loss(outputs_class[:-1], outputs_coord[:-1])\n\n        if self.two_stage:\n            enc_outputs_coord = enc_outputs_coord_unact.sigmoid()\n            out['enc_outputs'] = {'pred_logits': enc_outputs_class, 'pred_boxes': enc_outputs_coord}\n\n        if self.rho:\n            out[\"backbone_mask_prediction\"] = backbone_mask_prediction\n            \n        if self.use_enc_aux_loss:\n            out['aux_outputs_enc'] = self._set_aux_loss(enc_inter_outputs_class, enc_inter_outputs_coord)\n        \n        if self.rho:\n            out[\"sparse_token_nums\"] = self.transformer.sparse_token_nums\n\n        out['mask_flatten'] = torch.cat([m.flatten(1) for m in masks], 1)\n\n        return out\n\n    @torch.jit.unused\n    def _set_aux_loss(self, outputs_class, outputs_coord):\n        # this is a workaround to make torchscript happy, as torchscript\n        # doesn't support dictionary with non-homogeneous values, such\n        # as a dict having both a Tensor and a list.\n        return [{'pred_logits': a, 'pred_boxes': b}\n                for a, b in zip(outputs_class, outputs_coord)]\n\n\nclass SetCriterion(nn.Module):\n    \"\"\" This class computes the loss for DETR.\n    The process happens in two steps:\n        1) we compute Hungarian assignment between ground truth boxes and the outputs of the model\n        2) we supervise each pair of matched ground-truth / prediction (supervise class and box)\n    \"\"\"\n    def __init__(self, num_classes, matcher, weight_dict, losses, args):\n        \"\"\" Create the criterion.\n        Parameters:\n            num_classes: number of object categories, omitting the special no-object category\n            matcher: module able to compute a matching between targets and proposals\n            weight_dict: dict containing as key the names of the losses and as values their relative weight.\n            losses: list of all the losses to be applied. See get_loss for list of available losses.\n            args: argument namespace providing focal_alpha (alpha in Focal Loss) and eff_specific_head\n        \"\"\"\n        super().__init__()\n        self.num_classes = num_classes\n        self.matcher = matcher\n        self.weight_dict = weight_dict\n        self.losses = losses\n\n        self.focal_alpha = args.focal_alpha\n        self.eff_specific_head = args.eff_specific_head\n\n    def loss_labels(self, outputs, targets, indices, num_boxes, log=True):\n        \"\"\"Classification loss (sigmoid focal loss)\n        targets dicts must contain the key \"labels\" containing a tensor of dim [nb_target_boxes]\n        \"\"\"\n        assert 'pred_logits' in outputs\n        src_logits = outputs['pred_logits']\n\n        idx = self._get_src_permutation_idx(indices)\n        target_classes_o = torch.cat([t[\"labels\"][J] for t, (_, J) in zip(targets, indices)])\n        target_classes = torch.full(src_logits.shape[:2], self.num_classes,\n                                    dtype=torch.int64, device=src_logits.device)\n        target_classes[idx] = target_classes_o\n\n        target_classes_onehot = torch.zeros([src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1],\n               
                             dtype=src_logits.dtype, layout=src_logits.layout, device=src_logits.device)\n        target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1)\n\n        target_classes_onehot = target_classes_onehot[:,:,:-1]\n        loss_ce = sigmoid_focal_loss(src_logits, target_classes_onehot, num_boxes, alpha=self.focal_alpha, gamma=2)\n        loss_ce = loss_ce * src_logits.shape[1]\n        losses = {'loss_ce': loss_ce}\n\n        if log:\n            # TODO this should probably be a separate loss, not hacked in this one here\n            losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]\n        return losses\n\n    @torch.no_grad()\n    def loss_cardinality(self, outputs, targets, indices, num_boxes):\n        \"\"\" Compute the cardinality error, ie the absolute error in the number of predicted non-empty boxes\n        This is not really a loss, it is intended for logging purposes only. It doesn't propagate gradients\n        \"\"\"\n        pred_logits = outputs['pred_logits']\n        device = pred_logits.device\n        tgt_lengths = torch.as_tensor([len(v[\"labels\"]) for v in targets], device=device)\n        # Count the number of predictions that are NOT \"no-object\" (which is the last class)\n        card_pred = (pred_logits.argmax(-1) != pred_logits.shape[-1] - 1).sum(1)\n        card_err = F.l1_loss(card_pred.float(), tgt_lengths.float())\n        losses = {'cardinality_error': card_err}\n        return losses\n\n    def loss_boxes(self, outputs, targets, indices, num_boxes):\n        \"\"\"Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss\n           targets dicts must contain the key \"boxes\" containing a tensor of dim [nb_target_boxes, 4]\n           The target boxes are expected in format (center_x, center_y, h, w), normalized by the image size.\n        \"\"\"\n        assert 'pred_boxes' in outputs\n        idx = 
self._get_src_permutation_idx(indices)\n        src_boxes = outputs['pred_boxes'][idx]\n        target_boxes = torch.cat([t['boxes'][i] for t, (_, i) in zip(targets, indices)], dim=0)\n\n        loss_bbox = F.l1_loss(src_boxes, target_boxes, reduction='none')\n\n        losses = {}\n        losses['loss_bbox'] = loss_bbox.sum() / num_boxes\n\n        loss_giou = 1 - torch.diag(box_ops.generalized_box_iou(\n            box_ops.box_cxcywh_to_xyxy(src_boxes),\n            box_ops.box_cxcywh_to_xyxy(target_boxes)))\n        losses['loss_giou'] = loss_giou.sum() / num_boxes\n        return losses\n\n    def loss_masks(self, outputs, targets, indices, num_boxes):\n        \"\"\"Compute the losses related to the masks: the focal loss and the dice loss.\n           targets dicts must contain the key \"masks\" containing a tensor of dim [nb_target_boxes, h, w]\n        \"\"\"\n        assert \"pred_masks\" in outputs\n\n        src_idx = self._get_src_permutation_idx(indices)\n        tgt_idx = self._get_tgt_permutation_idx(indices)\n\n        src_masks = outputs[\"pred_masks\"]\n\n        # TODO use valid to mask invalid areas due to padding in loss\n        target_masks, valid = nested_tensor_from_tensor_list([t[\"masks\"] for t in targets]).decompose()\n        target_masks = target_masks.to(src_masks)\n\n        src_masks = src_masks[src_idx]\n        # upsample predictions to the target size\n        src_masks = interpolate(src_masks[:, None], size=target_masks.shape[-2:],\n                                mode=\"bilinear\", align_corners=False)\n        src_masks = src_masks[:, 0].flatten(1)\n\n        target_masks = target_masks[tgt_idx].flatten(1)\n\n        losses = {\n            \"loss_mask\": sigmoid_focal_loss(src_masks, target_masks, num_boxes),\n            \"loss_dice\": dice_loss(src_masks, target_masks, num_boxes),\n        }\n        return losses\n    \n    def loss_mask_prediction(self, outputs, targets, indices, num_boxes, layer=None):\n        assert 
\"backbone_mask_prediction\" in outputs\n        assert \"sampling_locations_dec\" in outputs\n        assert \"attn_weights_dec\" in outputs\n        assert \"spatial_shapes\" in outputs\n        assert \"level_start_index\" in outputs\n\n        mask_prediction = outputs[\"backbone_mask_prediction\"] \n        loss_key = \"loss_mask_prediction\"\n\n        sampling_locations_dec = outputs[\"sampling_locations_dec\"]\n        attn_weights_dec = outputs[\"attn_weights_dec\"]\n        spatial_shapes = outputs[\"spatial_shapes\"]\n        level_start_index = outputs[\"level_start_index\"]\n\n        flat_grid_attn_map_dec = attn_map_to_flat_grid(\n            spatial_shapes, level_start_index, sampling_locations_dec, attn_weights_dec).sum(dim=(1,2))\n\n        losses = {}\n\n        if 'mask_flatten' in outputs:\n            flat_grid_attn_map_dec = flat_grid_attn_map_dec.masked_fill(\n                outputs['mask_flatten'], flat_grid_attn_map_dec.min()-1)\n                \n        sparse_token_nums = outputs[\"sparse_token_nums\"]\n        num_topk = sparse_token_nums.max()\n\n        topk_idx_tgt = torch.topk(flat_grid_attn_map_dec, num_topk)[1]\n        target = torch.zeros_like(mask_prediction)\n        for i in range(target.shape[0]):\n            target[i].scatter_(0, topk_idx_tgt[i][:sparse_token_nums[i]], 1)\n\n        losses.update({loss_key: F.multilabel_soft_margin_loss(mask_prediction, target)})\n\n        return losses\n\n    @torch.no_grad()\n    def corr(self, outputs, targets, indices, num_boxes):\n        if \"backbone_topk_proposals\" not in outputs.keys():\n            return {}\n\n        assert \"backbone_topk_proposals\" in outputs\n        assert \"sampling_locations_dec\" in outputs\n        assert \"attn_weights_dec\" in outputs\n        assert \"spatial_shapes\" in outputs\n        assert \"level_start_index\" in outputs\n\n        backbone_topk_proposals = outputs[\"backbone_topk_proposals\"]\n        sampling_locations_dec = 
outputs[\"sampling_locations_dec\"]\n        attn_weights_dec = outputs[\"attn_weights_dec\"]\n        spatial_shapes = outputs[\"spatial_shapes\"]\n        level_start_index = outputs[\"level_start_index\"]\n\n        flat_grid_topk = idx_to_flat_grid(spatial_shapes, backbone_topk_proposals)\n        flat_grid_attn_map_dec = attn_map_to_flat_grid(\n            spatial_shapes, level_start_index, sampling_locations_dec, attn_weights_dec).sum(dim=(1,2))\n        corr = compute_corr(flat_grid_topk, flat_grid_attn_map_dec, spatial_shapes)\n\n        losses = {}\n        losses[\"corr_mask_attn_map_dec_all\"] = corr[0].mean()\n        for i, _corr in enumerate(corr[1:]):\n            losses[f\"corr_mask_attn_map_dec_{i}\"] = _corr.mean()\n        return losses\n\n    def _get_src_permutation_idx(self, indices):\n        # permute predictions following indices\n        batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])\n        src_idx = torch.cat([src for (src, _) in indices])\n        return batch_idx, src_idx\n\n    def _get_tgt_permutation_idx(self, indices):\n        # permute targets following indices\n        batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])\n        tgt_idx = torch.cat([tgt for (_, tgt) in indices])\n        return batch_idx, tgt_idx\n\n    def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs):\n        loss_map = {\n            'labels': self.loss_labels,\n            'cardinality': self.loss_cardinality,\n            'boxes': self.loss_boxes,\n            'masks': self.loss_masks,\n            \"mask_prediction\": self.loss_mask_prediction,\n            \"corr\": self.corr,\n        }\n        assert loss in loss_map, f'do you really want to compute {loss} loss?'\n        return loss_map[loss](outputs, targets, indices, num_boxes, **kwargs)\n\n    def forward(self, outputs, targets):\n        \"\"\" This performs the loss computation.\n        
Parameters:\n             outputs: dict of tensors, see the output specification of the model for the format\n             targets: list of dicts, such that len(targets) == batch_size.\n                      The expected keys in each dict depend on the losses applied, see each loss' doc\n        \"\"\"\n        outputs_without_aux = {k: v for k, v in outputs.items() \n                               if k not in ['aux_outputs', 'enc_outputs', 'backbone_outputs', 'mask_flatten']}\n\n        # Retrieve the matching between the outputs of the last layer and the targets\n        indices = self.matcher(outputs_without_aux, targets)\n\n        # Compute the average number of target boxes across all nodes, for normalization purposes\n        num_boxes = sum(len(t[\"labels\"]) for t in targets)\n        num_boxes = torch.as_tensor([num_boxes], dtype=torch.float, device=next(iter(outputs.values())).device)\n        if is_dist_avail_and_initialized():\n            torch.distributed.all_reduce(num_boxes)\n        num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item()\n\n        # Compute all the requested losses\n        losses = {}\n        for loss in self.losses:\n            kwargs = {}\n            losses.update(self.get_loss(loss, outputs, targets, indices, num_boxes, **kwargs))\n\n        # In case of auxiliary losses, we repeat this process with the output of each intermediate layer.\n        if 'aux_outputs' in outputs:\n            for i, aux_outputs in enumerate(outputs['aux_outputs']):\n                indices = self.matcher(aux_outputs, targets)\n                for loss in self.losses:\n                    if loss in ['masks', \"mask_prediction\", \"corr\"]:\n                        # Intermediate masks losses are too costly to compute, we ignore them.\n                        continue\n                    kwargs = {}\n                    if loss == 'labels':\n                        # Logging is enabled only for the last layer\n                        kwargs['log'] = False\n                    l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_boxes, **kwargs)\n                    l_dict = {k + f'_{i}': v for k, v in l_dict.items()}\n                    losses.update(l_dict)\n\n        if 'enc_outputs' in outputs:\n            enc_outputs = outputs['enc_outputs']\n            bin_targets = copy.deepcopy(targets)\n            if not self.eff_specific_head:\n                for bt in bin_targets:\n                    bt['labels'] = torch.zeros_like(bt['labels'])  # all labels are zero (meaning foreground)\n            indices = self.matcher(enc_outputs, bin_targets)\n            for loss in self.losses:\n                if loss in ['masks', \"mask_prediction\", \"corr\"]:\n                    # Intermediate masks losses are too costly to compute, we ignore them.\n                    continue\n                kwargs = {}\n                if loss == 'labels':\n                    # Logging is enabled only for the last layer\n                    kwargs['log'] = False\n                l_dict = self.get_loss(loss, enc_outputs, bin_targets, indices, num_boxes, **kwargs)\n                l_dict = {k + f'_enc': v for k, v in l_dict.items()}\n                losses.update(l_dict)\n\n        if 'backbone_outputs' in outputs:\n            backbone_outputs = outputs['backbone_outputs']\n            bin_targets = copy.deepcopy(targets)\n            if not self.eff_specific_head:\n                for bt in bin_targets:\n                    bt['labels'] = torch.zeros_like(bt['labels'])  # all labels are zero (meaning foreground)\n            indices = self.matcher(backbone_outputs, bin_targets)\n            for loss in self.losses:\n                if loss in ['masks', \"mask_prediction\", \"corr\"]:\n                    # Intermediate masks losses are too costly to compute, we ignore them.\n                    continue\n                kwargs = {}\n                if loss == 'labels':\n                    # 
Logging is enabled only for the last layer\n                    kwargs['log'] = False\n                l_dict = self.get_loss(loss, backbone_outputs, bin_targets, indices, num_boxes, **kwargs)\n                l_dict = {k + f'_backbone': v for k, v in l_dict.items()}\n                losses.update(l_dict)\n                \n        if 'aux_outputs_enc' in outputs:\n            for i, aux_outputs in enumerate(outputs['aux_outputs_enc']):\n                indices = self.matcher(aux_outputs, targets)\n                for loss in self.losses:\n                    if loss in ['masks', \"mask_prediction\", \"corr\"]:\n                        # Intermediate masks losses are too costly to compute, we ignore them.\n                        continue\n                    kwargs = {}\n                    if loss == 'labels':\n                        # Logging is enabled only for the last layer\n                        kwargs['log'] = False\n                    l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_boxes, **kwargs)\n                    l_dict = {k + f'_enc_{i}': v for k, v in l_dict.items()}\n                    losses.update(l_dict)\n\n        return losses\n\n\nclass PostProcess(nn.Module):\n    \"\"\" This module converts the model's output into the format expected by the coco api\"\"\"\n\n    @torch.no_grad()\n    def forward(self, outputs, target_sizes):\n        \"\"\" Perform the computation\n        Parameters:\n            outputs: raw outputs of the model\n            target_sizes: tensor of dimension [batch_size x 2] containing the size of each images of the batch\n                          For evaluation, this must be the original image size (before any data augmentation)\n                          For visualization, this should be the image size after data augment, but before padding\n        \"\"\"\n        out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']\n\n        assert len(out_logits) == len(target_sizes)\n        
assert target_sizes.shape[1] == 2\n\n        prob = out_logits.sigmoid()\n        topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), 100, dim=1)\n        scores = topk_values\n        topk_boxes = topk_indexes // out_logits.shape[2]\n        labels = topk_indexes % out_logits.shape[2]\n        boxes = box_ops.box_cxcywh_to_xyxy(out_bbox)\n        boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1,1,4))\n\n        # and from relative [0, 1] to absolute [0, height] coordinates\n        img_h, img_w = target_sizes.unbind(1)\n        scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)\n        boxes = boxes * scale_fct[:, None, :]\n\n        results = [{'scores': s, 'labels': l, 'boxes': b} for s, l, b in zip(scores, labels, boxes)]\n\n        return results\n\n\nclass MLP(nn.Module):\n    \"\"\" Very simple multi-layer perceptron (also called FFN)\"\"\"\n\n    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):\n        super().__init__()\n        self.num_layers = num_layers\n        h = [hidden_dim] * (num_layers - 1)\n        self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))\n\n    def forward(self, x):\n        for i, layer in enumerate(self.layers):\n            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)\n        return x\n\n\ndef build(args):\n    num_classes = 20 if args.dataset_file != 'coco' else 91\n    if args.dataset_file == \"coco_panoptic\":\n        num_classes = 250\n    device = torch.device(args.device)\n\n    backbone = build_backbone(args)\n\n    transformer = build_deforamble_transformer(args)\n    model = DeformableDETR(\n        backbone,\n        transformer,\n        num_classes=num_classes,\n        num_queries=args.num_queries,\n        num_feature_levels=args.num_feature_levels,\n        aux_loss=args.aux_loss,\n        with_box_refine=args.with_box_refine,\n        two_stage=args.two_stage,\n        args=args,\n    
)\n    if args.masks:\n        model = DETRsegm(model, freeze_detr=(args.frozen_weights is not None))\n    matcher = build_matcher(args)\n    weight_dict = {'loss_ce': args.cls_loss_coef, 'loss_bbox': args.bbox_loss_coef}\n    weight_dict['loss_giou'] = args.giou_loss_coef\n\n    if args.masks:\n        weight_dict[\"loss_mask\"] = args.mask_loss_coef\n        weight_dict[\"loss_dice\"] = args.dice_loss_coef\n        \n    # TODO this is a hack\n    aux_weight_dict = {}\n    \n    if args.aux_loss:\n        for i in range(args.dec_layers - 1):\n            aux_weight_dict.update({k + f'_{i}': v for k, v in weight_dict.items()})\n            \n    if args.two_stage:\n        aux_weight_dict.update({k + f'_enc': v for k, v in weight_dict.items()})\n        \n    if args.use_enc_aux_loss:\n        for i in range(args.enc_layers - 1):\n            aux_weight_dict.update({k + f'_enc_{i}': v for k, v in weight_dict.items()})\n            \n    if args.rho:\n        aux_weight_dict.update({k + f'_backbone': v for k, v in weight_dict.items()})\n        \n    if aux_weight_dict:\n        weight_dict.update(aux_weight_dict)\n\n    weight_dict['loss_mask_prediction'] = args.mask_prediction_coef\n\n    losses = ['labels', 'boxes', 'cardinality', \"corr\"]\n    if args.masks:\n        losses += [\"masks\"]\n    if args.rho:\n        losses += [\"mask_prediction\"]\n    \n    # num_classes, matcher, weight_dict, losses, focal_alpha=0.25\n    criterion = SetCriterion(num_classes, matcher, weight_dict, losses, args)\n    criterion.to(device)\n    postprocessors = {'bbox': PostProcess()}\n    if args.masks:\n        postprocessors['segm'] = PostProcessSegm()\n        if args.dataset_file == \"coco_panoptic\":\n            is_thing_map = {i: i <= 90 for i in range(201)}\n            postprocessors[\"panoptic\"] = PostProcessPanoptic(is_thing_map, threshold=0.85)\n\n    return model, criterion, postprocessors\n"
  },
  {
    "path": "models/deformable_transformer.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\nimport copy\nfrom typing import Optional, List\nimport math\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn, Tensor\nfrom torch.nn.init import xavier_uniform_, constant_, uniform_, normal_\n\nfrom util.misc import inverse_sigmoid\nfrom models.ops.modules import MSDeformAttn\n\n\nclass DeformableTransformer(nn.Module):\n    def __init__(self, d_model=256, nhead=8,\n                 num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=1024, dropout=0.1,\n                 activation=\"relu\", return_intermediate_dec=False,\n                 num_feature_levels=4, dec_n_points=4,  enc_n_points=4,\n                 two_stage=False, two_stage_num_proposals=300,\n                 args=None):\n        super().__init__()\n\n        self.d_model = d_model\n        self.nhead = nhead\n        self.two_stage = two_stage\n        self.two_stage_num_proposals = two_stage_num_proposals\n        self.eff_query_init = args.eff_query_init\n        self.eff_specific_head = args.eff_specific_head\n        # there's no need to compute reference points if above 2 conditions meet simultaneously\n        self._log_args('eff_query_init', 'eff_specific_head')\n\n        self.rho = 
args.rho\n        self.use_enc_aux_loss = args.use_enc_aux_loss\n        self.sparse_enc_head = 1 if self.two_stage and self.rho else 0\n\n        if self.rho:\n            self.enc_mask_predictor = MaskPredictor(self.d_model, self.d_model)\n        else:\n            self.enc_mask_predictor = None\n\n        encoder_layer = DeformableTransformerEncoderLayer(d_model, dim_feedforward, dropout, activation, \n                                                            num_feature_levels, nhead, enc_n_points)\n        self.encoder = DeformableTransformerEncoder(encoder_layer, num_encoder_layers, self.d_model)\n\n        decoder_layer = DeformableTransformerDecoderLayer(d_model, dim_feedforward,\n                                                          dropout, activation,\n                                                          num_feature_levels, nhead, dec_n_points)\n        self.decoder = DeformableTransformerDecoder(decoder_layer, num_decoder_layers, return_intermediate_dec)\n\n        self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))\n\n        if self.two_stage:\n            self.enc_output = nn.Linear(d_model, d_model)\n            self.enc_output_norm = nn.LayerNorm(d_model)\n            \n        if self.two_stage:\n            self.pos_trans = nn.Linear(d_model * 2, d_model * (1 if self.eff_query_init else 2))\n            self.pos_trans_norm = nn.LayerNorm(d_model * (1 if self.eff_query_init else 2))\n    \n        if not self.two_stage:\n            self.reference_points = nn.Linear(d_model, 2)\n\n        self._reset_parameters()\n        \n    def _log_args(self, *names):\n        print('==============')\n        print(\"\\n\".join([f\"{name}: {getattr(self, name)}\" for name in names]))\n        print('==============')\n\n    def _reset_parameters(self):\n        for p in self.parameters():\n            if p.dim() > 1:\n                nn.init.xavier_uniform_(p)\n        for m in self.modules():\n            if isinstance(m, 
MSDeformAttn):\n                m._reset_parameters()\n        if hasattr(self, 'reference_points'):\n            xavier_uniform_(self.reference_points.weight.data, gain=1.0)\n            constant_(self.reference_points.bias.data, 0.)\n        normal_(self.level_embed)\n\n    def get_proposal_pos_embed(self, proposals):\n        # proposals: N, L(top_k), 4(bbox coords.)\n        num_pos_feats = 128\n        temperature = 10000\n        scale = 2 * math.pi\n\n        dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=proposals.device)  # 128\n        dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats)\n        proposals = proposals.sigmoid() * scale  # N, L, 4\n        pos = proposals[:, :, :, None] / dim_t  # N, L, 4, 128\n        # apply sin/cos alternately\n        pos = torch.stack((pos[:, :, :, 0::2].sin(), pos[:, :, :, 1::2].cos()), dim=4)  # N, L, 4, 64, 2\n        pos = pos.flatten(2)  # N, L, 512 (4 x 128)\n        return pos\n\n    def gen_encoder_output_proposals(self, memory, memory_padding_mask, spatial_shapes, process_output=True):\n        \"\"\"Make region proposals for the multi-scale feature maps, considering their shapes and padding masks, \n        and project & normalize the encoder outputs corresponding to these proposals.\n            - center points: relative grid coordinates in the range of [0.01, 0.99] (additional mask)\n            - width/height:  2^(layer_id) * s (s=0.05) / see the appendix A.4\n        \n        Tensor shape example:\n            Args:\n                memory: torch.Size([2, 15060, 256])\n                memory_padding_mask: torch.Size([2, 15060])\n                spatial_shape: torch.Size([4, 2])\n            Returns:\n                output_memory: torch.Size([2, 15060, 256])\n                    - same shape as memory ( + additional mask + linear layer + layer norm )\n                output_proposals: torch.Size([2, 15060, 4]) \n                    - x, y, w, h\n        \"\"\"\n        N_, S_, C_ = 
memory.shape\n        proposals = []\n        _cur = 0\n        for lvl, (H_, W_) in enumerate(spatial_shapes):\n            # level of encoded feature scale\n            mask_flatten_ = memory_padding_mask[:, _cur:(_cur + H_ * W_)].view(N_, H_, W_, 1)\n            valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1)\n            valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1)\n\n            grid_y, grid_x = torch.meshgrid(torch.linspace(0, H_ - 1, H_, dtype=torch.float32, device=memory.device),\n                                            torch.linspace(0, W_ - 1, W_, dtype=torch.float32, device=memory.device))\n            grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1)\n\n            scale = torch.cat([valid_W.unsqueeze(-1), valid_H.unsqueeze(-1)], 1).view(N_, 1, 1, 2)\n            grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale\n            wh = torch.ones_like(grid) * 0.05 * (2.0 ** lvl)\n            proposal = torch.cat((grid, wh), -1).view(N_, -1, 4)\n            proposals.append(proposal)\n            _cur += (H_ * W_)\n            \n        output_proposals = torch.cat(proposals, 1)\n        output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all(-1, keepdim=True)  \n        output_proposals = torch.log(output_proposals / (1 - output_proposals))  # inverse of sigmoid\n        output_proposals = output_proposals.masked_fill(memory_padding_mask.unsqueeze(-1), float('inf')) \n        output_proposals = output_proposals.masked_fill(~output_proposals_valid, float('inf'))  # sigmoid(inf) = 1\n\n        output_memory = memory\n        if process_output:\n            output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float(0))\n            output_memory = output_memory.masked_fill(~output_proposals_valid, float(0))\n            output_memory = self.enc_output_norm(self.enc_output(output_memory))\n        return output_memory, output_proposals, 
(~memory_padding_mask).sum(axis=-1)\n\n    def get_valid_ratio(self, mask):\n        _, H, W = mask.shape\n        valid_H = torch.sum(~mask[:, :, 0], 1)\n        valid_W = torch.sum(~mask[:, 0, :], 1)\n        valid_ratio_h = valid_H.float() / H\n        valid_ratio_w = valid_W.float() / W\n        valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)\n        return valid_ratio\n\n    def forward(self, srcs, masks, pos_embeds, query_embed=None):\n        assert self.two_stage or query_embed is not None\n\n        ###########\n        # prepare input for encoder\n        src_flatten = []\n        mask_flatten = []\n        lvl_pos_embed_flatten = []\n        spatial_shapes = []\n        for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):\n            bs, c, h, w = src.shape\n            spatial_shape = (h, w)\n            spatial_shapes.append(spatial_shape)\n            src = src.flatten(2).transpose(1, 2)\n            mask = mask.flatten(1)\n            pos_embed = pos_embed.flatten(2).transpose(1, 2)\n            lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)\n            lvl_pos_embed_flatten.append(lvl_pos_embed)\n            src_flatten.append(src)\n            mask_flatten.append(mask)\n        src_flatten = torch.cat(src_flatten, 1)\n        mask_flatten = torch.cat(mask_flatten, 1)\n        lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1)\n        spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)\n        level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1]))\n        # valid ratios across multi-scale features of the same image can vary,\n        # while they are interpolated and binarized on different resolutions.\n        valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)\n\n        ###########\n        # prepare for sparse encoder\n        if self.rho or self.use_enc_aux_loss:\n            backbone_output_memory, backbone_output_proposals, valid_token_nums = self.gen_encoder_output_proposals(\n                src_flatten+lvl_pos_embed_flatten, mask_flatten, spatial_shapes, \n                process_output=bool(self.rho))\n            self.valid_token_nums = valid_token_nums\n\n        if self.rho:\n            sparse_token_nums = (valid_token_nums * self.rho).int() + 1\n            backbone_topk = int(max(sparse_token_nums))\n            self.sparse_token_nums = sparse_token_nums\n\n            backbone_topk = min(backbone_topk, backbone_output_memory.shape[1])\n\n            backbone_mask_prediction = self.enc_mask_predictor(backbone_output_memory).squeeze(-1)\n            # excluding pad area\n            backbone_mask_prediction = backbone_mask_prediction.masked_fill(mask_flatten, backbone_mask_prediction.min())\n            backbone_topk_proposals = torch.topk(backbone_mask_prediction, backbone_topk, dim=1)[1]\n        else:\n            backbone_topk_proposals = None\n            backbone_outputs_class = None\n            backbone_outputs_coord_unact = None\n            sparse_token_nums = None\n\n        ###########\n        # encoder\n        if self.encoder:       \n            output_proposals = backbone_output_proposals if self.use_enc_aux_loss else None    \n            encoder_output = self.encoder(src_flatten, spatial_shapes, level_start_index, valid_ratios, \n                                  pos=lvl_pos_embed_flatten, padding_mask=mask_flatten, \n                                  topk_inds=backbone_topk_proposals, output_proposals=output_proposals,\n                                  sparse_token_nums=sparse_token_nums)\n            \n            memory, sampling_locations_enc, attn_weights_enc = encoder_output[:3]\n\n            if self.use_enc_aux_loss:\n                enc_inter_outputs_class, enc_inter_outputs_coord_unact = encoder_output[3:5]            \n        else:\n            memory = 
src_flatten + lvl_pos_embed_flatten\n\n        ###########\n        # prepare input for decoder\n        bs, _, c = memory.shape  # torch.Size([N, L, 256])\n        topk_proposals = None\n        if self.two_stage:\n            # finalize the first stage output\n            # project & normalize the memory and make proposal bounding boxes on them\n            output_memory, output_proposals, _ = self.gen_encoder_output_proposals(memory, mask_flatten, spatial_shapes)\n\n            # hack implementation for two-stage Deformable DETR (using the last layer registered in class/bbox_embed)\n            # 1) a linear projection for bounding box binary classification (fore/background)\n            enc_outputs_class = self.decoder.class_embed[self.decoder.num_layers](output_memory)\n            # 2) 3-layer FFN for bounding box regression\n            enc_outputs_coord_offset = self.decoder.bbox_embed[self.decoder.num_layers](output_memory)\n            enc_outputs_coord_unact = output_proposals + enc_outputs_coord_offset  # appendix A.4\n\n            # top scoring bounding boxes are picked as the final region proposals. \n            # these proposals are fed into the decoder as initial boxes for the iterative bounding box refinement.\n            topk = self.two_stage_num_proposals\n            # enc_outputs_class: torch.Size([N, L, 91])\n            \n            if self.eff_specific_head:\n                # take the best score for judging objectness with class specific head\n                enc_outputs_fg_class = enc_outputs_class.topk(1, dim=2).values[... , 0]\n            else:\n                # take the score from the binary (fore/background) classifier \n                # though outputs have 91 output dim, the 1st dim. 
alone will be used for the loss computation.\n                enc_outputs_fg_class = enc_outputs_class[..., 0]\n                \n            topk_proposals = torch.topk(enc_outputs_fg_class, topk, dim=1)[1]\n            topk_coords_unact = torch.gather(enc_outputs_coord_unact, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4))\n            topk_coords_unact = topk_coords_unact.detach()\n            reference_points = topk_coords_unact.sigmoid()\n\n            init_reference_out = reference_points\n            # pos_embed -> linear layer -> layer norm\n            pos_trans_out = self.pos_trans_norm(self.pos_trans(self.get_proposal_pos_embed(topk_coords_unact)))\n            \n            if self.eff_query_init:\n                # Efficient-DETR uses top-k memory as the initialization of `tgt` (query vectors)\n                tgt = torch.gather(memory, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, memory.size(-1)))\n                query_embed = pos_trans_out\n            else:\n                query_embed, tgt = torch.split(pos_trans_out, c, dim=2)\n\n        else:\n            query_embed, tgt = torch.split(query_embed, c, dim=1)\n            query_embed = query_embed.unsqueeze(0).expand(bs, -1, -1)\n            tgt = tgt.unsqueeze(0).expand(bs, -1, -1)\n            reference_points = self.reference_points(query_embed).sigmoid()\n            init_reference_out = reference_points\n\n        ###########\n        # decoder\n        hs, inter_references, sampling_locations_dec, attn_weights_dec = self.decoder(tgt, reference_points, src=memory, src_spatial_shapes=spatial_shapes, \n                                            src_level_start_index=level_start_index, src_valid_ratios=valid_ratios, \n                                            query_pos=query_embed, src_padding_mask=mask_flatten,\n                                            topk_inds=topk_proposals)\n\n        inter_references_out = inter_references\n        \n        ret = []\n        ret += [hs, 
init_reference_out, inter_references_out]\n        ret += [enc_outputs_class, enc_outputs_coord_unact] if self.two_stage else [None] * 2        \n        if self.rho:\n            ret += [backbone_mask_prediction]\n        else:\n            ret += [None]\n        ret += [enc_inter_outputs_class, enc_inter_outputs_coord_unact] if self.use_enc_aux_loss else [None] * 2\n        ret += [sampling_locations_enc, attn_weights_enc, sampling_locations_dec, attn_weights_dec]\n        ret += [backbone_topk_proposals, spatial_shapes, level_start_index]\n        return ret\n\n\nclass DeformableTransformerEncoderLayer(nn.Module):\n    def __init__(self,\n                 d_model=256, d_ffn=1024,\n                 dropout=0.1, activation=\"relu\",\n                 n_levels=4, n_heads=8, n_points=4):\n        super().__init__()\n\n        # self attention\n        self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)\n        self.dropout1 = nn.Dropout(dropout)\n        self.norm1 = nn.LayerNorm(d_model)\n\n        # ffn\n        self.linear1 = nn.Linear(d_model, d_ffn)\n        self.activation = _get_activation_fn(activation)\n        self.dropout2 = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(d_ffn, d_model)\n        self.dropout3 = nn.Dropout(dropout)\n        self.norm2 = nn.LayerNorm(d_model)\n\n    @staticmethod\n    def with_pos_embed(tensor, pos):\n        return tensor if pos is None else tensor + pos\n\n    def forward_ffn(self, src):\n        src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))\n        src = src + self.dropout3(src2)\n        src = self.norm2(src)\n        return src\n\n    def forward(self, src, pos, reference_points, spatial_shapes, level_start_index, padding_mask=None, tgt=None):\n        if tgt is None:\n            # self attention\n            src2, sampling_locations, attn_weights = self.self_attn(self.with_pos_embed(src, pos),\n                                reference_points, src, spatial_shapes,\n   
                             level_start_index, padding_mask)\n            src = src + self.dropout1(src2)\n            src = self.norm1(src)\n            # torch.Size([2, 13101, 256])\n\n            # ffn\n            src = self.forward_ffn(src)\n\n            return src, sampling_locations, attn_weights\n        else:\n            # self attention\n            tgt2, sampling_locations, attn_weights = self.self_attn(self.with_pos_embed(tgt, pos),\n                                reference_points, src, spatial_shapes,\n                                level_start_index, padding_mask)\n            tgt = tgt + self.dropout1(tgt2)\n            tgt = self.norm1(tgt)\n\n            # ffn\n            tgt = self.forward_ffn(tgt)\n\n            return tgt, sampling_locations, attn_weights\n\n\n\nclass DeformableTransformerEncoder(nn.Module):\n    def __init__(self, encoder_layer, num_layers, mask_predictor_dim=256):\n        super().__init__()\n        self.layers = _get_clones(encoder_layer, num_layers)\n        self.num_layers = num_layers\n        # hack implementation\n        self.aux_heads = False\n        self.class_embed = None\n        self.bbox_embed = None\n\n    @staticmethod\n    def get_reference_points(spatial_shapes, valid_ratios, device):\n        \"\"\"Make reference points for every single point on the multi-scale feature maps.\n        Each point has K reference points on each of the multi-scale feature levels.\n        \"\"\"\n        reference_points_list = []\n        for lvl, (H_, W_) in enumerate(spatial_shapes):\n\n            ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),\n                                          torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device))\n            ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)\n            ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)\n            # out-of-reference points have relative 
coords. larger than 1\n            ref = torch.stack((ref_x, ref_y), -1)\n            reference_points_list.append(ref)\n        reference_points = torch.cat(reference_points_list, 1)\n        reference_points = reference_points[:, :, None] * valid_ratios[:, None]\n        # >>> reference_points[:, :, None].shape\n        # torch.Size([2, 13101, 1, 2])\n        # >>> valid_ratios[:, None].shape\n        # torch.Size([2, 1, 4, 2])\n        return reference_points\n\n    def forward(self, src, spatial_shapes, level_start_index, valid_ratios, \n                pos=None, padding_mask=None, topk_inds=None, output_proposals=None, sparse_token_nums=None):\n        if self.aux_heads:\n            assert output_proposals is not None\n        else:\n            assert output_proposals is None\n            \n        output = src\n        sparsified_keys = False if topk_inds is None else True\n        reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device)\n        reference_points_orig = reference_points\n        pos_orig = pos\n        output_proposals_orig = output_proposals\n        sampling_locations_all = []\n        attn_weights_all = []\n        if self.aux_heads:\n            enc_inter_outputs_class = []\n            enc_inter_outputs_coords = []\n                    \n        if sparsified_keys:\n            assert topk_inds is not None\n            B_, N_, S_, P_ = reference_points.shape\n            reference_points = torch.gather(reference_points.view(B_, N_, -1), 1, topk_inds.unsqueeze(-1).repeat(1, 1, S_*P_)).view(B_, -1, S_, P_)\n            tgt = torch.gather(output, 1, topk_inds.unsqueeze(-1).repeat(1, 1, output.size(-1)))\n            pos = torch.gather(pos, 1, topk_inds.unsqueeze(-1).repeat(1, 1, pos.size(-1)))\n            if output_proposals is not None:\n                output_proposals = output_proposals.gather(1, topk_inds.unsqueeze(-1).repeat(1, 1, output_proposals.size(-1)))\n        else:\n            tgt = 
None\n\n        for lid, layer in enumerate(self.layers):\n            # if tgt is None: self-attention / if tgt is not None: cross-attention w.r.t. the target queries\n            tgt, sampling_locations, attn_weights = layer(output, pos, reference_points, spatial_shapes, level_start_index, padding_mask, \n                        tgt=tgt if sparsified_keys else None)\n            sampling_locations_all.append(sampling_locations)\n            attn_weights_all.append(attn_weights)\n            if sparsified_keys:                \n                if sparse_token_nums is None:\n                    output = output.scatter(1, topk_inds.unsqueeze(-1).repeat(1, 1, tgt.size(-1)), tgt)\n                else:\n                    outputs = []\n                    for i in range(topk_inds.shape[0]):\n                        outputs.append(output[i].scatter(0, topk_inds[i][:sparse_token_nums[i]].unsqueeze(-1).repeat(1, tgt.size(-1)), tgt[i][:sparse_token_nums[i]]))\n                    output = torch.stack(outputs)\n            else:\n                output = tgt\n            \n            if self.aux_heads and lid < self.num_layers - 1:\n                # feed outputs to aux. heads\n                output_class = self.class_embed[lid](tgt)\n                output_offset = self.bbox_embed[lid](tgt)\n                output_coords_unact = output_proposals + output_offset\n                # values to be used for loss computation\n                enc_inter_outputs_class.append(output_class)\n                enc_inter_outputs_coords.append(output_coords_unact.sigmoid())\n\n        # Change dimension from [num_layer, batch_size, ...] 
to [batch_size, num_layer, ...]\n        sampling_locations_all = torch.stack(sampling_locations_all, dim=1)\n        attn_weights_all = torch.stack(attn_weights_all, dim=1)\n        \n        ret = [output, sampling_locations_all, attn_weights_all]\n\n        if self.aux_heads:\n            ret += [enc_inter_outputs_class, enc_inter_outputs_coords]\n        \n        return ret\n\n\nclass DeformableTransformerDecoderLayer(nn.Module):\n    def __init__(self, d_model=256, d_ffn=1024, dropout=0.1, activation=\"relu\",\n                 n_levels=4, n_heads=8, n_points=4):\n        super().__init__()\n\n        # cross attention\n        self.cross_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)\n        self.dropout1 = nn.Dropout(dropout)\n        self.norm1 = nn.LayerNorm(d_model)\n\n        # self attention\n        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)\n        self.dropout2 = nn.Dropout(dropout)\n        self.norm2 = nn.LayerNorm(d_model)\n\n        # ffn\n        self.linear1 = nn.Linear(d_model, d_ffn)\n        self.activation = _get_activation_fn(activation)\n        self.dropout3 = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(d_ffn, d_model)\n        self.dropout4 = nn.Dropout(dropout)\n        self.norm3 = nn.LayerNorm(d_model)\n\n    @staticmethod\n    def with_pos_embed(tensor, pos):\n        return tensor if pos is None else tensor + pos\n\n    def forward_ffn(self, tgt):\n        tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt))))\n        tgt = tgt + self.dropout4(tgt2)\n        tgt = self.norm3(tgt)\n        return tgt\n\n    def forward(self, tgt, query_pos, reference_points, src, src_spatial_shapes, \n                level_start_index, src_padding_mask=None):\n        # self attention\n        q = k = self.with_pos_embed(tgt, query_pos) \n        tgt2 = self.self_attn(q.transpose(0, 1), k.transpose(0, 1), tgt.transpose(0, 1))[0].transpose(0, 1)\n        tgt = tgt + 
self.dropout2(tgt2)\n        tgt = self.norm2(tgt)\n\n        # cross attention\n        assert reference_points is not None, \"deformable attention needs reference points!\"\n        tgt2, sampling_locations, attn_weights = self.cross_attn(self.with_pos_embed(tgt, query_pos),\n                                reference_points,\n                                src, src_spatial_shapes, level_start_index, src_padding_mask)\n            \n        tgt = tgt + self.dropout1(tgt2)\n        tgt = self.norm1(tgt)\n\n        # ffn\n        tgt = self.forward_ffn(tgt)\n        # torch.Size([2, 300, 256])\n\n        return tgt, sampling_locations, attn_weights\n\n\nclass DeformableTransformerDecoder(nn.Module):\n    def __init__(self, decoder_layer, num_layers, return_intermediate=False):\n        super().__init__()\n        self.layers = _get_clones(decoder_layer, num_layers)\n        self.num_layers = num_layers\n        self.return_intermediate = return_intermediate\n        # hack implementation for iterative bounding box refinement and two-stage Deformable DETR\n        self.bbox_embed = None\n        self.class_embed = None\n\n\n\n    def forward(self, tgt, reference_points, src, src_spatial_shapes, src_level_start_index, \n                src_valid_ratios, query_pos=None, src_padding_mask=None, topk_inds=None):\n        \"\"\"\n        Args:\n            tgt: torch.Size([2, 300, 256]) (query vectors)\n            reference_points: torch.Size([2, 300, 2])\n            src: torch.Size([2, 13101, 256]) (last MS feature map from the encoder)\n            query_pos: torch.Size([2, 300, 256]) (learned positional embedding of query vectors)\n            - `tgt` and `query_pos` originate from the same query embedding. 
\n            - `tgt` changes through the forward pass as the object query vector, \n               while `query_pos` stays fixed and is added as a positional embedding.\n            \n        Returns: (when return_intermediate=True)\n            output: torch.Size([6, 2, 300, 256])\n            reference_points: torch.Size([6, 2, 300, 2])\n        \"\"\"\n        output = tgt\n\n        intermediate = []\n        intermediate_reference_points = []\n        sampling_locations_all = []\n        attn_weights_all = []\n        for lid, layer in enumerate(self.layers):\n            \n            if reference_points is None:\n                reference_points_input = None\n            elif reference_points.shape[-1] == 4:\n                # output from iterative bounding box refinement\n                # reference_points: N, top_k, 4(x/y/w/h)\n                # src_valid_ratios: N, num_feature_levels, 2(w/h)\n                # reference_points_input: N, top_k, num_feature_levels, 4(x/y/w/h)\n                reference_points_input = reference_points[:, :, None] \\\n                                        * torch.cat([src_valid_ratios, src_valid_ratios], -1)[:, None]\n            else:\n                assert reference_points.shape[-1] == 2\n                reference_points_input = reference_points[:, :, None] * src_valid_ratios[:, None]\n                \n            output, sampling_locations, attn_weights = layer(output, query_pos, reference_points_input, src, src_spatial_shapes, \n                           src_level_start_index, src_padding_mask)\n            sampling_locations_all.append(sampling_locations)\n            attn_weights_all.append(attn_weights)\n\n            # hack implementation for iterative bounding box refinement\n            if self.bbox_embed is not None:\n                assert reference_points is not None, \"box refinement needs reference points!\"\n                tmp = self.bbox_embed[lid](output)\n                if reference_points.shape[-1] == 4:\n   
                 new_reference_points = tmp + inverse_sigmoid(reference_points)\n                    new_reference_points = new_reference_points.sigmoid()\n                else:\n                    assert reference_points.shape[-1] == 2\n                    new_reference_points = tmp\n                    new_reference_points[..., :2] = tmp[..., :2] + inverse_sigmoid(reference_points)\n                    new_reference_points = new_reference_points.sigmoid()\n                reference_points = new_reference_points.detach()\n\n            if self.return_intermediate:\n                intermediate.append(output)\n                intermediate_reference_points.append(reference_points)\n\n        # Change dimension from [num_layer, batch_size, ...] to [batch_size, num_layer, ...]\n        sampling_locations_all = torch.stack(sampling_locations_all, dim=1)\n        attn_weights_all = torch.stack(attn_weights_all, dim=1)\n\n        if self.return_intermediate:\n            intermediate_outputs = torch.stack(intermediate)\n            if intermediate_reference_points[0] is None:\n                intermediate_reference_points = None\n            else:\n                intermediate_reference_points = torch.stack(intermediate_reference_points)\n\n            return intermediate_outputs, intermediate_reference_points, sampling_locations_all, attn_weights_all\n\n        return output, reference_points, sampling_locations_all, attn_weights_all\n\n\nclass MaskPredictor(nn.Module):\n    def __init__(self, in_dim, h_dim):\n        super().__init__()\n        self.h_dim = h_dim\n        self.layer1 = nn.Sequential(\n            nn.LayerNorm(in_dim),\n            nn.Linear(in_dim, h_dim),\n            nn.GELU()\n        )\n        self.layer2 = nn.Sequential(\n            nn.Linear(h_dim, h_dim // 2),\n            nn.GELU(),\n            nn.Linear(h_dim // 2, h_dim // 4),\n            nn.GELU(),\n            nn.Linear(h_dim // 4, 1)\n        )\n    \n    def forward(self, x):\n       
 z = self.layer1(x)\n        z_local, z_global = torch.split(z, self.h_dim // 2, dim=-1)\n        z_global = z_global.mean(dim=1, keepdim=True).expand(-1, z_local.shape[1], -1)\n        z = torch.cat([z_local, z_global], dim=-1)\n        out = self.layer2(z)\n        return out\n    \n\ndef _get_clones(module, N):\n    return nn.ModuleList([copy.deepcopy(module) for i in range(N)])\n\n\ndef _get_activation_fn(activation):\n    \"\"\"Return an activation function given a string\"\"\"\n    if activation == \"relu\":\n        return F.relu\n    if activation == \"gelu\":\n        return F.gelu\n    if activation == \"glu\":\n        return F.glu\n    raise RuntimeError(F\"activation should be relu/gelu/glu, not {activation}.\")\n\n\ndef build_deforamble_transformer(args):\n    return DeformableTransformer(\n        d_model=args.hidden_dim,\n        nhead=args.nheads,\n        num_encoder_layers=args.enc_layers,\n        num_decoder_layers=args.dec_layers,\n        dim_feedforward=args.dim_feedforward,\n        dropout=args.dropout,\n        activation=\"relu\",\n        return_intermediate_dec=True,\n        num_feature_levels=args.num_feature_levels,\n        dec_n_points=args.dec_n_points,\n        enc_n_points=args.enc_n_points,\n        two_stage=args.two_stage,\n        two_stage_num_proposals=args.num_queries,\n        args=args)\n"
  },
  {
    "path": "models/matcher.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\n\"\"\"\nModules to compute the matching cost and solve the corresponding LSAP.\n\"\"\"\nimport torch\nfrom scipy.optimize import linear_sum_assignment\nfrom torch import nn\n\nfrom util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou\n\n\nclass HungarianMatcher(nn.Module):\n    \"\"\"This class computes an assignment between the targets and the predictions of the network\n\n    For efficiency reasons, the targets don't include the no_object. Because of this, in general,\n    there are more predictions than targets. 
In this case, we do a 1-to-1 matching of the best predictions,\n    while the others are un-matched (and thus treated as non-objects).\n    \"\"\"\n\n    def __init__(self,\n                 cost_class: float = 1,\n                 cost_bbox: float = 1,\n                 cost_giou: float = 1):\n        \"\"\"Creates the matcher\n\n        Params:\n            cost_class: This is the relative weight of the classification error in the matching cost\n            cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost\n            cost_giou: This is the relative weight of the giou loss of the bounding box in the matching cost\n        \"\"\"\n        super().__init__()\n        self.cost_class = cost_class\n        self.cost_bbox = cost_bbox\n        self.cost_giou = cost_giou\n        assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, \"all costs cant be 0\"\n\n    def forward(self, outputs, targets):\n        \"\"\" Performs the matching\n\n        Params:\n            outputs: This is a dict that contains at least these entries:\n                 \"pred_logits\": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits\n                 \"pred_boxes\": Tensor of dim [batch_size, num_queries, 4] with the predicted box coordinates\n\n            targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:\n                 \"labels\": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth\n                           objects in the target) containing the class labels\n                 \"boxes\": Tensor of dim [num_target_boxes, 4] containing the target box coordinates\n\n        Returns:\n            A list of size batch_size, containing tuples of (index_i, index_j) where:\n                - index_i is the indices of the selected predictions (in order)\n                - index_j is the indices of the 
corresponding selected targets (in order)\n            For each batch element, it holds:\n                len(index_i) = len(index_j) = min(num_queries, num_target_boxes)\n        \"\"\"\n        with torch.no_grad():\n            bs, num_queries = outputs[\"pred_logits\"].shape[:2]\n\n            # We flatten to compute the cost matrices in a batch\n            out_prob = outputs[\"pred_logits\"].flatten(0, 1).sigmoid()\n            out_bbox = outputs[\"pred_boxes\"].flatten(0, 1)  # [batch_size * num_queries, 4]\n\n            # Also concat the target labels and boxes\n            tgt_ids = torch.cat([v[\"labels\"] for v in targets])\n            tgt_bbox = torch.cat([v[\"boxes\"] for v in targets])\n\n            # Compute the classification cost.\n            alpha = 0.25\n            gamma = 2.0\n            neg_cost_class = (1 - alpha) * (out_prob ** gamma) * (-(1 - out_prob + 1e-8).log())\n            pos_cost_class = alpha * ((1 - out_prob) ** gamma) * (-(out_prob + 1e-8).log())\n            cost_class = pos_cost_class[:, tgt_ids] - neg_cost_class[:, tgt_ids]\n\n            # Compute the L1 cost between boxes\n            cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)\n\n            # Compute the giou cost between boxes\n            cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(out_bbox),\n                                             box_cxcywh_to_xyxy(tgt_bbox))\n\n            # Final cost matrix\n            C = self.cost_bbox * cost_bbox + self.cost_class * cost_class + self.cost_giou * cost_giou\n            C = C.view(bs, num_queries, -1).cpu()\n\n            sizes = [len(v[\"boxes\"]) for v in targets]\n            indices = [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))]\n            return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor([_j % size for _j in j], dtype=torch.int64)) \n                    for (i, j), size in zip(indices, sizes)]\n\n\ndef build_matcher(args):\n    return 
HungarianMatcher(cost_class=args.set_cost_class,\n                            cost_bbox=args.set_cost_bbox,\n                            cost_giou=args.set_cost_giou)\n"
  },
  {
    "path": "models/ops/functions/__init__.py",
    "content": "# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\nfrom .ms_deform_attn_func import MSDeformAttnFunction\n\n"
  },
  {
    "path": "models/ops/functions/ms_deform_attn_func.py",
    "content": "# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import division\n\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Function\nfrom torch.autograd.function import once_differentiable\n\nimport MultiScaleDeformableAttention as MSDA\n\n\nclass MSDeformAttnFunction(Function):\n    @staticmethod\n    def forward(ctx, value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, im2col_step):\n        ctx.im2col_step = im2col_step\n        output = MSDA.ms_deform_attn_forward(\n            value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, ctx.im2col_step)\n        ctx.save_for_backward(value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights)\n        return output\n\n    @staticmethod\n    @once_differentiable\n    def backward(ctx, grad_output):\n        value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights = ctx.saved_tensors\n        grad_value, grad_sampling_loc, grad_attn_weight = \\\n            MSDA.ms_deform_attn_backward(\n                value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, grad_output, ctx.im2col_step)\n\n        return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None\n\n\ndef ms_deform_attn_core_pytorch(value, 
value_spatial_shapes, sampling_locations, attention_weights):\n    # for debug and test only,\n    # need to use cuda version instead\n    N_, S_, M_, D_ = value.shape\n    _, Lq_, M_, L_, P_, _ = sampling_locations.shape\n    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)\n    sampling_grids = 2 * sampling_locations - 1\n    sampling_value_list = []\n    for lid_, (H_, W_) in enumerate(value_spatial_shapes):\n        # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_\n        value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, D_, H_, W_)\n        # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2\n        sampling_grid_l_ = sampling_grids[:, :, :, lid_].transpose(1, 2).flatten(0, 1)\n        # N_*M_, D_, Lq_, P_\n        sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_,\n                                          mode='bilinear', padding_mode='zeros', align_corners=False)\n        sampling_value_list.append(sampling_value_l_)\n    # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_, M_, 1, Lq_, L_*P_)\n    attention_weights = attention_weights.transpose(1, 2).reshape(N_*M_, 1, Lq_, L_*P_)\n    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights).sum(-1).view(N_, M_*D_, Lq_)\n    return output.transpose(1, 2).contiguous()\n"
  },
  {
    "path": "models/ops/make.sh",
    "content": "#!/usr/bin/env bash\n# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\npython setup.py build install\n"
  },
  {
    "path": "models/ops/modules/__init__.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\nfrom .ms_deform_attn import MSDeformAttn\n"
  },
  {
    "path": "models/ops/modules/ms_deform_attn.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import division\n\nimport warnings\nimport math\n\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torch.nn.init import xavier_uniform_, constant_\n\nfrom ..functions import MSDeformAttnFunction\n\n\ndef _is_power_of_2(n):\n    if (not isinstance(n, int)) or (n < 0):\n        raise ValueError(\"invalid input for _is_power_of_2: {} (type: {})\".format(n, type(n)))\n    return (n & (n-1) == 0) and n != 0\n\n\nclass MSDeformAttn(nn.Module):\n    def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4):\n        \"\"\"\n        Multi-Scale Deformable Attention Module\n        :param d_model      hidden dimension\n        :param n_levels     number of feature levels\n        :param n_heads      number of attention heads\n        :param n_points     number of sampling points per attention head per feature level\n        \"\"\"\n        super().__init__()\n        if d_model % n_heads != 0:\n            raise ValueError('d_model must be divisible by n_heads, but got {} and {}'.format(d_model, n_heads))\n        _d_per_head = d_model // n_heads\n        # you'd better set _d_per_head to a power of 2 which is more efficient in our 
CUDA implementation\n        if not _is_power_of_2(_d_per_head):\n            warnings.warn(\"You'd better set d_model in MSDeformAttn to make the dimension of each attention head a power of 2 \"\n                          \"which is more efficient in our CUDA implementation.\")\n\n        self.im2col_step = 64\n\n        self.d_model = d_model\n        self.n_levels = n_levels\n        self.n_heads = n_heads\n        self.n_points = n_points\n\n        self.sampling_offsets = nn.Linear(d_model, n_heads * n_levels * n_points * 2)\n        self.attention_weights = nn.Linear(d_model, n_heads * n_levels * n_points)\n        self.value_proj = nn.Linear(d_model, d_model)\n        self.output_proj = nn.Linear(d_model, d_model)\n        self.python_ops_for_test = False\n\n        self._reset_parameters()\n\n    def _reset_parameters(self):\n        constant_(self.sampling_offsets.weight.data, 0.)\n        thetas = torch.arange(self.n_heads, dtype=torch.float32) * (2.0 * math.pi / self.n_heads)\n        grid_init = torch.stack([thetas.cos(), thetas.sin()], -1)\n        grid_init = (grid_init / grid_init.abs().max(-1, keepdim=True)[0]).view(self.n_heads, 1, 1, 2).repeat(1, self.n_levels, self.n_points, 1)\n        for i in range(self.n_points):\n            grid_init[:, :, i, :] *= i + 1\n        with torch.no_grad():\n            self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1))\n        constant_(self.attention_weights.weight.data, 0.)\n        constant_(self.attention_weights.bias.data, 0.)\n        xavier_uniform_(self.value_proj.weight.data)\n        constant_(self.value_proj.bias.data, 0.)\n        xavier_uniform_(self.output_proj.weight.data)\n        constant_(self.output_proj.bias.data, 0.)\n\n    def forward(self, query, reference_points, input_flatten, input_spatial_shapes, input_level_start_index, input_padding_mask=None):\n        \"\"\"\n        :param query                       (N, Length_{query}, C)\n        :param reference_points            
(N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), bottom-right (1, 1), including padding area\n                                        or (N, Length_{query}, n_levels, 4), add additional (w, h) to form reference boxes\n        :param input_flatten               (N, \\sum_{l=0}^{L-1} H_l \\cdot W_l, C)\n        :param input_spatial_shapes        (n_levels, 2), [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})]\n        :param input_level_start_index     (n_levels, ), [0, H_0*W_0, H_0*W_0+H_1*W_1, H_0*W_0+H_1*W_1+H_2*W_2, ..., H_0*W_0+H_1*W_1+...+H_{L-1}*W_{L-1}]\n        :param input_padding_mask          (N, \\sum_{l=0}^{L-1} H_l \\cdot W_l), True for padding elements, False for non-padding elements\n\n        :return output                     (N, Length_{query}, C)\n        \"\"\"\n        N, Len_q, _ = query.shape\n        N, Len_in, _ = input_flatten.shape\n        assert (input_spatial_shapes[:, 0] * input_spatial_shapes[:, 1]).sum() == Len_in\n\n        value = self.value_proj(input_flatten)\n        if input_padding_mask is not None:\n            value = value.masked_fill(input_padding_mask[..., None], float(0))\n        value = value.view(N, Len_in, self.n_heads, self.d_model // self.n_heads)\n        sampling_offsets = self.sampling_offsets(query).view(N, Len_q, self.n_heads, self.n_levels, self.n_points, 2)\n        attention_weights = self.attention_weights(query).view(N, Len_q, self.n_heads, self.n_levels * self.n_points)\n        attention_weights = F.softmax(attention_weights, -1).view(N, Len_q, self.n_heads, self.n_levels, self.n_points)\n        # N, Len_q, n_heads, n_levels, n_points, 2\n        if reference_points.shape[-1] == 2:\n            offset_normalizer = torch.stack([input_spatial_shapes[..., 1], input_spatial_shapes[..., 0]], -1)\n            sampling_locations = reference_points[:, :, None, :, None, :] \\\n                                 + sampling_offsets / offset_normalizer[None, None, None, :, None, :]\n        elif 
reference_points.shape[-1] == 4:\n            sampling_locations = reference_points[:, :, None, :, None, :2] \\\n                                 + sampling_offsets / self.n_points * reference_points[:, :, None, :, None, 2:] * 0.5\n        else:\n            raise ValueError(\n                'Last dim of reference_points must be 2 or 4, but got {} instead.'.format(reference_points.shape[-1]))\n        if not self.python_ops_for_test:\n            output = MSDeformAttnFunction.apply(\n                value, input_spatial_shapes, input_level_start_index, sampling_locations, attention_weights, self.im2col_step)\n        else:\n            output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights)\n        output = self.output_proj(output)\n        return output, sampling_locations, attention_weights\n\n\ndef ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):\n    # for debugging and testing only; use the CUDA version in practice\n    N_, S_, M_, D_ = value.shape\n    _, Lq_, M_, L_, P_, _ = sampling_locations.shape\n    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)\n    sampling_grids = 2 * sampling_locations - 1\n    sampling_value_list = []\n    for lid_, (H_, W_) in enumerate(value_spatial_shapes):\n        # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_\n        value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, D_, H_, W_)\n        # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2\n        sampling_grid_l_ = sampling_grids[:, :, :, lid_].transpose(1, 2).flatten(0, 1)\n        # N_*M_, D_, Lq_, P_\n        sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_,\n                                          mode='bilinear', padding_mode='zeros', align_corners=False)\n        sampling_value_list.append(sampling_value_l_)\n    # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) 
-> (N_, M_, 1, Lq_, L_*P_)\n    attention_weights = attention_weights.transpose(1, 2).reshape(N_*M_, 1, Lq_, L_*P_)\n    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights).sum(-1).view(N_, M_*D_, Lq_)\n    return output.transpose(1, 2).contiguous()\n"
  },
  {
    "path": "models/ops/setup.py",
    "content": "# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\nimport os\nimport glob\n\nimport torch\n\nfrom torch.utils.cpp_extension import CUDA_HOME\nfrom torch.utils.cpp_extension import CppExtension\nfrom torch.utils.cpp_extension import CUDAExtension\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nrequirements = [\"torch\", \"torchvision\"]\n\ndef get_extensions():\n    this_dir = os.path.dirname(os.path.abspath(__file__))\n    extensions_dir = os.path.join(this_dir, \"src\")\n\n    main_file = glob.glob(os.path.join(extensions_dir, \"*.cpp\"))\n    source_cpu = glob.glob(os.path.join(extensions_dir, \"cpu\", \"*.cpp\"))\n    source_cuda = glob.glob(os.path.join(extensions_dir, \"cuda\", \"*.cu\"))\n\n    sources = main_file + source_cpu\n    extension = CppExtension\n    extra_compile_args = {\"cxx\": []}\n    define_macros = []\n\n    if torch.cuda.is_available() and CUDA_HOME is not None:\n        extension = CUDAExtension\n        sources += source_cuda\n        define_macros += [(\"WITH_CUDA\", None)]\n        extra_compile_args[\"nvcc\"] = [\n            \"-DCUDA_HAS_FP16=1\",\n            \"-D__CUDA_NO_HALF_OPERATORS__\",\n            \"-D__CUDA_NO_HALF_CONVERSIONS__\",\n            \"-D__CUDA_NO_HALF2_OPERATORS__\",\n        ]\n    else:\n        raise NotImplementedError('Cuda is not availabel')\n\n    sources = [os.path.join(extensions_dir, s) for s in sources]\n    include_dirs = [extensions_dir]\n    ext_modules = [\n        
extension(\n            \"MultiScaleDeformableAttention\",\n            sources,\n            include_dirs=include_dirs,\n            define_macros=define_macros,\n            extra_compile_args=extra_compile_args,\n        )\n    ]\n    return ext_modules\n\nsetup(\n    name=\"MultiScaleDeformableAttention\",\n    version=\"1.0\",\n    author=\"Weijie Su\",\n    url=\"https://github.com/fundamentalvision/Deformable-DETR\",\n    description=\"PyTorch Wrapper for CUDA Functions of Multi-Scale Deformable Attention\",\n    packages=find_packages(exclude=(\"configs\", \"tests\",)),\n    ext_modules=get_extensions(),\n    cmdclass={\"build_ext\": torch.utils.cpp_extension.BuildExtension},\n)\n"
  },
  {
    "path": "models/ops/src/cpu/ms_deform_attn_cpu.cpp",
    "content": "/*!\n**************************************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************************************\n* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n**************************************************************************************************\n*/\n\n#include <vector>\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n\nat::Tensor\nms_deform_attn_cpu_forward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const int im2col_step)\n{\n    AT_ERROR(\"Not implement on cpu\");\n}\n\nstd::vector<at::Tensor>\nms_deform_attn_cpu_backward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const at::Tensor &grad_output,\n    const int im2col_step)\n{\n    AT_ERROR(\"Not implement on cpu\");\n}\n\n"
  },
  {
    "path": "models/ops/src/cpu/ms_deform_attn_cpu.h",
    "content": "/*!\n**************************************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************************************\n* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n**************************************************************************************************\n*/\n\n#pragma once\n#include <torch/extension.h>\n\nat::Tensor\nms_deform_attn_cpu_forward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const int im2col_step);\n\nstd::vector<at::Tensor>\nms_deform_attn_cpu_backward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const at::Tensor &grad_output,\n    const int im2col_step);\n\n\n"
  },
  {
    "path": "models/ops/src/cuda/ms_deform_attn_cuda.cu",
    "content": "/*!\n**************************************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************************************\n* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n**************************************************************************************************\n*/\n\n#include <vector>\n#include \"cuda/ms_deform_im2col_cuda.cuh\"\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n\nat::Tensor ms_deform_attn_cuda_forward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const int im2col_step)\n{\n    AT_ASSERTM(value.is_contiguous(), \"value tensor has to be contiguous\");\n    AT_ASSERTM(spatial_shapes.is_contiguous(), \"spatial_shapes tensor has to be contiguous\");\n    AT_ASSERTM(level_start_index.is_contiguous(), \"level_start_index tensor has to be contiguous\");\n    AT_ASSERTM(sampling_loc.is_contiguous(), \"sampling_loc tensor has to be contiguous\");\n    AT_ASSERTM(attn_weight.is_contiguous(), \"attn_weight tensor has to be contiguous\");\n\n    AT_ASSERTM(value.type().is_cuda(), \"value must be a CUDA tensor\");\n    AT_ASSERTM(spatial_shapes.type().is_cuda(), \"spatial_shapes must be a CUDA tensor\");\n    AT_ASSERTM(level_start_index.type().is_cuda(), \"level_start_index must be a CUDA tensor\");\n    AT_ASSERTM(sampling_loc.type().is_cuda(), \"sampling_loc must be a CUDA tensor\");\n    AT_ASSERTM(attn_weight.type().is_cuda(), \"attn_weight must be a CUDA tensor\");\n\n    const int batch = value.size(0);\n    const int spatial_size = value.size(1);\n   
 const int num_heads = value.size(2);\n    const int channels = value.size(3);\n\n    const int num_levels = spatial_shapes.size(0);\n\n    const int num_query = sampling_loc.size(1);\n    const int num_point = sampling_loc.size(4);\n\n    const int im2col_step_ = std::min(batch, im2col_step);\n\n    AT_ASSERTM(batch % im2col_step_ == 0, \"batch(%d) must be divisible by im2col_step(%d)\", batch, im2col_step_);\n    \n    auto output = at::zeros({batch, num_query, num_heads, channels}, value.options());\n\n    const int batch_n = im2col_step_;\n    auto output_n = output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});\n    auto per_value_size = spatial_size * num_heads * channels;\n    auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;\n    auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;\n    for (int n = 0; n < batch/im2col_step_; ++n)\n    {\n        auto columns = output_n.select(0, n);\n        AT_DISPATCH_FLOATING_TYPES(value.type(), \"ms_deform_attn_forward_cuda\", ([&] {\n            ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(),\n                value.data<scalar_t>() + n * im2col_step_ * per_value_size,\n                spatial_shapes.data<int64_t>(),\n                level_start_index.data<int64_t>(),\n                sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\n                attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,\n                batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,\n                columns.data<scalar_t>());\n\n        }));\n    }\n\n    output = output.view({batch, num_query, num_heads*channels});\n\n    return output;\n}\n\n\n
std::vector<at::Tensor> ms_deform_attn_cuda_backward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const at::Tensor &grad_output,\n    const int im2col_step)\n{\n\n    AT_ASSERTM(value.is_contiguous(), \"value tensor has to be contiguous\");\n    AT_ASSERTM(spatial_shapes.is_contiguous(), \"spatial_shapes tensor has to be contiguous\");\n    AT_ASSERTM(level_start_index.is_contiguous(), \"level_start_index tensor has to be contiguous\");\n    AT_ASSERTM(sampling_loc.is_contiguous(), \"sampling_loc tensor has to be contiguous\");\n    AT_ASSERTM(attn_weight.is_contiguous(), \"attn_weight tensor has to be contiguous\");\n    AT_ASSERTM(grad_output.is_contiguous(), \"grad_output tensor has to be contiguous\");\n\n    AT_ASSERTM(value.type().is_cuda(), \"value must be a CUDA tensor\");\n    AT_ASSERTM(spatial_shapes.type().is_cuda(), \"spatial_shapes must be a CUDA tensor\");\n    AT_ASSERTM(level_start_index.type().is_cuda(), \"level_start_index must be a CUDA tensor\");\n    AT_ASSERTM(sampling_loc.type().is_cuda(), \"sampling_loc must be a CUDA tensor\");\n    AT_ASSERTM(attn_weight.type().is_cuda(), \"attn_weight must be a CUDA tensor\");\n    AT_ASSERTM(grad_output.type().is_cuda(), \"grad_output must be a CUDA tensor\");\n\n    const int batch = value.size(0);\n    const int spatial_size = value.size(1);\n    const int num_heads = value.size(2);\n    const int channels = value.size(3);\n\n    const int num_levels = spatial_shapes.size(0);\n\n    const int num_query = sampling_loc.size(1);\n    const int num_point = sampling_loc.size(4);\n\n    const int im2col_step_ = std::min(batch, im2col_step);\n\n    AT_ASSERTM(batch % im2col_step_ == 0, \"batch(%d) must be divisible by im2col_step(%d)\", batch, im2col_step_);\n\n    auto grad_value = at::zeros_like(value);\n    auto grad_sampling_loc = at::zeros_like(sampling_loc);\n    auto grad_attn_weight = at::zeros_like(attn_weight);\n\n    const int batch_n = im2col_step_;\n    auto per_value_size = spatial_size * num_heads * channels;\n    auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;\n
    auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;\n    auto grad_output_n = grad_output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});\n    \n    for (int n = 0; n < batch/im2col_step_; ++n)\n    {\n        auto grad_output_g = grad_output_n.select(0, n);\n        AT_DISPATCH_FLOATING_TYPES(value.type(), \"ms_deform_attn_backward_cuda\", ([&] {\n            ms_deformable_col2im_cuda(at::cuda::getCurrentCUDAStream(),\n                                    grad_output_g.data<scalar_t>(),\n                                    value.data<scalar_t>() + n * im2col_step_ * per_value_size,\n                                    spatial_shapes.data<int64_t>(),\n                                    level_start_index.data<int64_t>(),\n                                    sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\n                                    attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,\n                                    batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,\n                                    grad_value.data<scalar_t>() +  n * im2col_step_ * per_value_size,\n                                    grad_sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\n                                    grad_attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size);\n\n        }));\n    }\n\n    return {\n        grad_value, grad_sampling_loc, grad_attn_weight\n    };\n}\n"
  },
  {
    "path": "models/ops/src/cuda/ms_deform_attn_cuda.h",
    "content": "/*!\n**************************************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************************************\n* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n**************************************************************************************************\n*/\n\n#pragma once\n#include <torch/extension.h>\n\nat::Tensor ms_deform_attn_cuda_forward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const int im2col_step);\n\nstd::vector<at::Tensor> ms_deform_attn_cuda_backward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const at::Tensor &grad_output,\n    const int im2col_step);\n\n"
  },
  {
    "path": "models/ops/src/cuda/ms_deform_im2col_cuda.cuh",
    "content": "/*!\n**************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************\n* Modified from DCN (https://github.com/msracver/Deformable-ConvNets)\n* Copyright (c) 2018 Microsoft\n**************************************************************************\n*/\n\n#include <cstdio>\n#include <algorithm>\n#include <cstring>\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <THC/THCAtomics.cuh>\n\n#define CUDA_KERNEL_LOOP(i, n)                          \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x;   \\\n      i < (n);                                          \\\n      i += blockDim.x * gridDim.x)\n\nconst int CUDA_NUM_THREADS = 1024;\ninline int GET_BLOCKS(const int N, const int num_threads)\n{\n  return (N + num_threads - 1) / num_threads;\n}\n\n\ntemplate <typename scalar_t>\n__device__ scalar_t ms_deform_attn_im2col_bilinear(const scalar_t* &bottom_data, \n                                                   const int &height, const int &width, const int &nheads, const int &channels,\n                                                   const scalar_t &h, const scalar_t &w, const int &m, const int &c)\n{\n  const int h_low = floor(h);\n  const int w_low = floor(w);\n  const int h_high = h_low + 1;\n  const int w_high = w_low + 1;\n\n  const scalar_t lh = h - h_low;\n  const scalar_t lw = w - w_low;\n  const scalar_t hh = 1 - lh, hw = 1 - lw;\n\n  const int w_stride = nheads * channels;\n  const int h_stride = width * w_stride;\n  const int h_low_ptr_offset = h_low * h_stride;\n  const int h_high_ptr_offset = h_low_ptr_offset + h_stride;\n  const int w_low_ptr_offset = w_low * w_stride;\n  const int w_high_ptr_offset = w_low_ptr_offset + w_stride;\n  const int base_ptr = m * channels + c;\n\n  
scalar_t v1 = 0;\n  if (h_low >= 0 && w_low >= 0)\n  {\n    const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;\n    v1 = bottom_data[ptr1];\n  }\n  scalar_t v2 = 0;\n  if (h_low >= 0 && w_high <= width - 1)\n  {\n    const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;\n    v2 = bottom_data[ptr2];\n  }\n  scalar_t v3 = 0;\n  if (h_high <= height - 1 && w_low >= 0)\n  {\n    const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;\n    v3 = bottom_data[ptr3];\n  }\n  scalar_t v4 = 0;\n  if (h_high <= height - 1 && w_high <= width - 1)\n  {\n    const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;\n    v4 = bottom_data[ptr4];\n  }\n\n  const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\n\n  const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n  return val;\n}\n\n\ntemplate <typename scalar_t>\n__device__ void ms_deform_attn_col2im_bilinear(const scalar_t* &bottom_data, \n                                                   const int &height, const int &width, const int &nheads, const int &channels,\n                                                   const scalar_t &h, const scalar_t &w, const int &m, const int &c,\n                                                   const scalar_t &top_grad,\n                                                   const scalar_t &attn_weight,\n                                                   scalar_t* &grad_value, \n                                                   scalar_t* grad_sampling_loc,\n                                                   scalar_t* grad_attn_weight)\n{\n  const int h_low = floor(h);\n  const int w_low = floor(w);\n  const int h_high = h_low + 1;\n  const int w_high = w_low + 1;\n\n  const scalar_t lh = h - h_low;\n  const scalar_t lw = w - w_low;\n  const scalar_t hh = 1 - lh, hw = 1 - lw;\n\n  const int w_stride = nheads * channels;\n  const int h_stride = width * w_stride;\n  const int h_low_ptr_offset = h_low * h_stride;\n  const int 
h_high_ptr_offset = h_low_ptr_offset + h_stride;\n  const int w_low_ptr_offset = w_low * w_stride;\n  const int w_high_ptr_offset = w_low_ptr_offset + w_stride;\n  const int base_ptr = m * channels + c;\n\n  const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\n  const scalar_t top_grad_value = top_grad * attn_weight;\n  scalar_t grad_h_weight = 0, grad_w_weight = 0;\n\n  scalar_t v1 = 0;\n  if (h_low >= 0 && w_low >= 0)\n  {\n    const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;\n    v1 = bottom_data[ptr1];\n    grad_h_weight -= hw * v1;\n    grad_w_weight -= hh * v1;\n    atomicAdd(grad_value+ptr1, w1*top_grad_value);\n  }\n  scalar_t v2 = 0;\n  if (h_low >= 0 && w_high <= width - 1)\n  {\n    const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;\n    v2 = bottom_data[ptr2];\n    grad_h_weight -= lw * v2;\n    grad_w_weight += hh * v2;\n    atomicAdd(grad_value+ptr2, w2*top_grad_value);\n  }\n  scalar_t v3 = 0;\n  if (h_high <= height - 1 && w_low >= 0)\n  {\n    const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;\n    v3 = bottom_data[ptr3];\n    grad_h_weight += hw * v3;\n    grad_w_weight -= lh * v3;\n    atomicAdd(grad_value+ptr3, w3*top_grad_value); \n  }\n  scalar_t v4 = 0;\n  if (h_high <= height - 1 && w_high <= width - 1)\n  {\n    const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;\n    v4 = bottom_data[ptr4];\n    grad_h_weight += lw * v4;\n    grad_w_weight += lh * v4;\n    atomicAdd(grad_value+ptr4, w4*top_grad_value);\n  }\n\n  const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n  *grad_attn_weight = top_grad * val;\n  *grad_sampling_loc = width * grad_w_weight * top_grad_value;\n  *(grad_sampling_loc + 1) = height * grad_h_weight * top_grad_value;\n}\n\n\ntemplate <typename scalar_t>\n__device__ void ms_deform_attn_col2im_bilinear_gm(const scalar_t* &bottom_data, \n                                                   const int &height, const int &width, const int 
&nheads, const int &channels,\n                                                   const scalar_t &h, const scalar_t &w, const int &m, const int &c,\n                                                   const scalar_t &top_grad,\n                                                   const scalar_t &attn_weight,\n                                                   scalar_t* &grad_value, \n                                                   scalar_t* grad_sampling_loc,\n                                                   scalar_t* grad_attn_weight)\n{\n  const int h_low = floor(h);\n  const int w_low = floor(w);\n  const int h_high = h_low + 1;\n  const int w_high = w_low + 1;\n\n  const scalar_t lh = h - h_low;\n  const scalar_t lw = w - w_low;\n  const scalar_t hh = 1 - lh, hw = 1 - lw;\n\n  const int w_stride = nheads * channels;\n  const int h_stride = width * w_stride;\n  const int h_low_ptr_offset = h_low * h_stride;\n  const int h_high_ptr_offset = h_low_ptr_offset + h_stride;\n  const int w_low_ptr_offset = w_low * w_stride;\n  const int w_high_ptr_offset = w_low_ptr_offset + w_stride;\n  const int base_ptr = m * channels + c;\n\n  const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\n  const scalar_t top_grad_value = top_grad * attn_weight;\n  scalar_t grad_h_weight = 0, grad_w_weight = 0;\n\n  scalar_t v1 = 0;\n  if (h_low >= 0 && w_low >= 0)\n  {\n    const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;\n    v1 = bottom_data[ptr1];\n    grad_h_weight -= hw * v1;\n    grad_w_weight -= hh * v1;\n    atomicAdd(grad_value+ptr1, w1*top_grad_value);\n  }\n  scalar_t v2 = 0;\n  if (h_low >= 0 && w_high <= width - 1)\n  {\n    const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;\n    v2 = bottom_data[ptr2];\n    grad_h_weight -= lw * v2;\n    grad_w_weight += hh * v2;\n    atomicAdd(grad_value+ptr2, w2*top_grad_value);\n  }\n  scalar_t v3 = 0;\n  if (h_high <= height - 1 && w_low >= 0)\n  {\n    const int ptr3 = h_high_ptr_offset 
+ w_low_ptr_offset + base_ptr;\n    v3 = bottom_data[ptr3];\n    grad_h_weight += hw * v3;\n    grad_w_weight -= lh * v3;\n    atomicAdd(grad_value+ptr3, w3*top_grad_value); \n  }\n  scalar_t v4 = 0;\n  if (h_high <= height - 1 && w_high <= width - 1)\n  {\n    const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;\n    v4 = bottom_data[ptr4];\n    grad_h_weight += lw * v4;\n    grad_w_weight += lh * v4;\n    atomicAdd(grad_value+ptr4, w4*top_grad_value);\n  }\n\n  const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n  atomicAdd(grad_attn_weight, top_grad * val); \n  atomicAdd(grad_sampling_loc, width * grad_w_weight * top_grad_value);\n  atomicAdd(grad_sampling_loc + 1, height * grad_h_weight * top_grad_value);\n}\n\n\ntemplate <typename scalar_t>\n__global__ void ms_deformable_im2col_gpu_kernel(const int n,\n                                                const scalar_t *data_value, \n                                                const int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                                const int num_heads,\n                                                const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *data_col)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    int _temp = index;\n    const int c_col = _temp % channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    
const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= num_query;\n    const int b_col = _temp;\n\n    scalar_t *data_col_ptr = data_col + index;\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;\n    scalar_t col = 0;\n    \n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const scalar_t *data_value_ptr = data_value + (data_value_ptr_init_offset + level_start_id * qid_stride);\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          col += ms_deform_attn_im2col_bilinear(data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col) * weight;\n        }\n\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n      }\n    }\n    *data_col_ptr = col;\n  }\n}\n\ntemplate <typename scalar_t, unsigned int blockSize>\n__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1(const int n,\n                                                const scalar_t *grad_col,\n                                                const scalar_t *data_value,\n                                                const 
int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                                const int num_heads,\n                                                const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *grad_value,\n                                                scalar_t *grad_sampling_loc,\n                                                scalar_t *grad_attn_weight)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2];\n    __shared__ scalar_t cache_grad_attn_weight[blockSize];\n    unsigned int tid = threadIdx.x;\n    int _temp = index;\n    const int c_col = _temp % channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= num_query;\n    const int b_col = _temp;\n\n    const scalar_t top_grad = grad_col[index];\n\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int grad_sampling_ptr = data_weight_ptr;\n    grad_sampling_loc += grad_sampling_ptr << 1;\n    grad_attn_weight += grad_sampling_ptr;\n    const int grad_weight_stride = 1;\n    const int grad_loc_stride = 2;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * 
spatial_size * qid_stride;\n\n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;\n      const scalar_t *data_value_ptr = data_value + value_ptr_offset;\n      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;\n\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n        *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;\n        *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;\n        *(cache_grad_attn_weight+threadIdx.x)=0;\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          ms_deform_attn_col2im_bilinear(\n            data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,\n            top_grad, weight, grad_value_ptr, \n            cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);\n        }\n        \n        __syncthreads();\n        if (tid == 0)\n        {\n          scalar_t _grad_w=cache_grad_sampling_loc[0], _grad_h=cache_grad_sampling_loc[1], _grad_a=cache_grad_attn_weight[0];\n          int sid=2;\n          // thread 0 serially sums the per-thread partial gradients from shared memory;\n          // the counter is named _tid so it does not shadow the outer tid\n          for (unsigned int _tid = 1; _tid < blockSize; ++_tid)\n          {\n            _grad_w += cache_grad_sampling_loc[sid];\n            _grad_h += cache_grad_sampling_loc[sid + 1];\n            _grad_a += cache_grad_attn_weight[_tid];\n            sid += 2;\n          
}\n          \n          \n          *grad_sampling_loc = _grad_w;\n          *(grad_sampling_loc + 1) = _grad_h;\n          *grad_attn_weight = _grad_a;\n        }\n        __syncthreads();\n\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n        grad_attn_weight += grad_weight_stride;\n        grad_sampling_loc += grad_loc_stride;\n      }\n    }\n  }\n}\n\n\ntemplate <typename scalar_t, unsigned int blockSize>\n__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2(const int n,\n                                                const scalar_t *grad_col,\n                                                const scalar_t *data_value,\n                                                const int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                                const int num_heads,\n                                                const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *grad_value,\n                                                scalar_t *grad_sampling_loc,\n                                                scalar_t *grad_attn_weight)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2];\n    __shared__ scalar_t cache_grad_attn_weight[blockSize];\n    unsigned int tid = threadIdx.x;\n    int _temp = index;\n    const int c_col = _temp % 
channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= num_query;\n    const int b_col = _temp;\n\n    const scalar_t top_grad = grad_col[index];\n\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int grad_sampling_ptr = data_weight_ptr;\n    grad_sampling_loc += grad_sampling_ptr << 1;\n    grad_attn_weight += grad_sampling_ptr;\n    const int grad_weight_stride = 1;\n    const int grad_loc_stride = 2;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;\n\n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;\n      const scalar_t *data_value_ptr = data_value + value_ptr_offset;\n      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;\n\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n        *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;\n        *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;\n        *(cache_grad_attn_weight+threadIdx.x)=0;\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          ms_deform_attn_col2im_bilinear(\n            
data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,\n            top_grad, weight, grad_value_ptr, \n            cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);\n        }\n        \n        __syncthreads();\n\n        for (unsigned int s=blockSize/2; s>0; s>>=1)\n        {\n          if (tid < s) {\n            const unsigned int xid1 = tid << 1;\n            const unsigned int xid2 = (tid + s) << 1;\n            cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];\n            cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];\n            cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];\n          }\n          __syncthreads();\n        }\n\n        if (tid == 0)\n        { \n          *grad_sampling_loc = cache_grad_sampling_loc[0];\n          *(grad_sampling_loc + 1) = cache_grad_sampling_loc[1];\n          *grad_attn_weight = cache_grad_attn_weight[0];\n        }\n        __syncthreads();\n\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n        grad_attn_weight += grad_weight_stride;\n        grad_sampling_loc += grad_loc_stride;\n      }\n    }\n  }\n}\n\n\ntemplate <typename scalar_t>\n__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v1(const int n,\n                                                const scalar_t *grad_col,\n                                                const scalar_t *data_value,\n                                                const int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                               
 const int num_heads,\n                                                const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *grad_value,\n                                                scalar_t *grad_sampling_loc,\n                                                scalar_t *grad_attn_weight)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    extern __shared__ int _s[];\n    scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;\n    scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;\n    unsigned int tid = threadIdx.x;\n    int _temp = index;\n    const int c_col = _temp % channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= num_query;\n    const int b_col = _temp;\n\n    const scalar_t top_grad = grad_col[index];\n\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int grad_sampling_ptr = data_weight_ptr;\n    grad_sampling_loc += grad_sampling_ptr << 1;\n    grad_attn_weight += grad_sampling_ptr;\n    const int grad_weight_stride = 1;\n    const int grad_loc_stride = 2;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;\n\n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;\n      const 
scalar_t *data_value_ptr = data_value + value_ptr_offset;\n      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;\n\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n        *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;\n        *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;\n        *(cache_grad_attn_weight+threadIdx.x)=0;\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          ms_deform_attn_col2im_bilinear(\n            data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,\n            top_grad, weight, grad_value_ptr, \n            cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);\n        }\n        \n        __syncthreads();\n        if (tid == 0)\n        {\n          scalar_t _grad_w=cache_grad_sampling_loc[0], _grad_h=cache_grad_sampling_loc[1], _grad_a=cache_grad_attn_weight[0];\n          int sid=2;\n          // thread 0 serially sums the per-thread partial gradients from shared memory;\n          // the counter is named _tid so it does not shadow the outer tid\n          for (unsigned int _tid = 1; _tid < blockDim.x; ++_tid)\n          {\n            _grad_w += cache_grad_sampling_loc[sid];\n            _grad_h += cache_grad_sampling_loc[sid + 1];\n            _grad_a += cache_grad_attn_weight[_tid];\n            sid += 2;\n          }\n\n          *grad_sampling_loc = _grad_w;\n          *(grad_sampling_loc + 1) = _grad_h;\n          *grad_attn_weight = _grad_a;\n        }\n        __syncthreads();\n\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n        grad_attn_weight += grad_weight_stride;\n        grad_sampling_loc += grad_loc_stride;\n      }\n    }\n  }\n}\n\n// variant using a tree-style (log-step) shared-memory reduction across the block\ntemplate <typename scalar_t>\n__global__ void 
ms_deformable_col2im_gpu_kernel_shm_reduce_v2(const int n,\n                                                const scalar_t *grad_col,\n                                                const scalar_t *data_value,\n                                                const int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                                const int num_heads,\n                                                const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *grad_value,\n                                                scalar_t *grad_sampling_loc,\n                                                scalar_t *grad_attn_weight)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    extern __shared__ int _s[];\n    scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;\n    scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;\n    unsigned int tid = threadIdx.x;\n    int _temp = index;\n    const int c_col = _temp % channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= num_query;\n    const int b_col = _temp;\n\n    const scalar_t top_grad = grad_col[index];\n\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int 
grad_sampling_ptr = data_weight_ptr;\n    grad_sampling_loc += grad_sampling_ptr << 1;\n    grad_attn_weight += grad_sampling_ptr;\n    const int grad_weight_stride = 1;\n    const int grad_loc_stride = 2;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;\n\n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;\n      const scalar_t *data_value_ptr = data_value + value_ptr_offset;\n      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;\n\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n        *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;\n        *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;\n        *(cache_grad_attn_weight+threadIdx.x)=0;\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          ms_deform_attn_col2im_bilinear(\n            data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,\n            top_grad, weight, grad_value_ptr, \n            cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);\n        }\n        \n        __syncthreads();\n\n        for (unsigned int s=blockDim.x/2, spre=blockDim.x; s>0; s>>=1, spre>>=1)\n        {\n          if (tid < s) {\n            const unsigned 
int xid1 = tid << 1;\n            const unsigned int xid2 = (tid + s) << 1;\n            cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];\n            cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];\n            cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];\n            if (tid + (s << 1) < spre)\n            {\n              cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + (s << 1)];\n              cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2 + (s << 1)];\n              cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1 + (s << 1)];\n            } \n          }\n          __syncthreads();\n        }\n\n        if (tid == 0)\n        {\n          *grad_sampling_loc = cache_grad_sampling_loc[0];\n          *(grad_sampling_loc + 1) = cache_grad_sampling_loc[1];\n          *grad_attn_weight = cache_grad_attn_weight[0];\n        }\n        __syncthreads();\n\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n        grad_attn_weight += grad_weight_stride;\n        grad_sampling_loc += grad_loc_stride;\n      }\n    }\n  }\n}\n\ntemplate <typename scalar_t>\n__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks(const int n,\n                                                const scalar_t *grad_col,\n                                                const scalar_t *data_value,\n                                                const int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                                const int num_heads,\n                 
                               const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *grad_value,\n                                                scalar_t *grad_sampling_loc,\n                                                scalar_t *grad_attn_weight)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    extern __shared__ int _s[];\n    scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;\n    scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;\n    unsigned int tid = threadIdx.x;\n    int _temp = index;\n    const int c_col = _temp % channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= num_query;\n    const int b_col = _temp;\n\n    const scalar_t top_grad = grad_col[index];\n\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int grad_sampling_ptr = data_weight_ptr;\n    grad_sampling_loc += grad_sampling_ptr << 1;\n    grad_attn_weight += grad_sampling_ptr;\n    const int grad_weight_stride = 1;\n    const int grad_loc_stride = 2;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;\n\n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;\n      const scalar_t *data_value_ptr = data_value + 
value_ptr_offset;\n      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;\n\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n        *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;\n        *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;\n        *(cache_grad_attn_weight+threadIdx.x)=0;\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          ms_deform_attn_col2im_bilinear(\n            data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,\n            top_grad, weight, grad_value_ptr, \n            cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);\n        }\n        \n        __syncthreads();\n\n        for (unsigned int s=blockDim.x/2, spre=blockDim.x; s>0; s>>=1, spre>>=1)\n        {\n          if (tid < s) {\n            const unsigned int xid1 = tid << 1;\n            const unsigned int xid2 = (tid + s) << 1;\n            cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];\n            cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];\n            cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];\n            if (tid + (s << 1) < spre)\n            {\n              cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + (s << 1)];\n              cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2 + (s << 1)];\n              cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1 + (s << 1)];\n            }\n          }\n          __syncthreads();\n        }\n\n        if (tid == 0)\n        {\n          atomicAdd(grad_sampling_loc, 
cache_grad_sampling_loc[0]);\n          atomicAdd(grad_sampling_loc + 1, cache_grad_sampling_loc[1]);\n          atomicAdd(grad_attn_weight, cache_grad_attn_weight[0]);\n        }\n        __syncthreads();\n\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n        grad_attn_weight += grad_weight_stride;\n        grad_sampling_loc += grad_loc_stride;\n      }\n    }\n  }\n}\n\n\ntemplate <typename scalar_t>\n__global__ void ms_deformable_col2im_gpu_kernel_gm(const int n,\n                                                const scalar_t *grad_col,\n                                                const scalar_t *data_value,\n                                                const int64_t *data_spatial_shapes,\n                                                const int64_t *data_level_start_index, \n                                                const scalar_t *data_sampling_loc,\n                                                const scalar_t *data_attn_weight,\n                                                const int batch_size, \n                                                const int spatial_size, \n                                                const int num_heads,\n                                                const int channels, \n                                                const int num_levels,\n                                                const int num_query,\n                                                const int num_point,\n                                                scalar_t *grad_value,\n                                                scalar_t *grad_sampling_loc,\n                                                scalar_t *grad_attn_weight)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    int _temp = index;\n    const int c_col = _temp % channels;\n    _temp /= channels;\n    const int sampling_index = _temp; \n    const int m_col = _temp % num_heads;\n    _temp /= num_heads;\n    const int q_col = _temp % num_query;\n    _temp /= 
num_query;\n    const int b_col = _temp;\n\n    const scalar_t top_grad = grad_col[index];\n\n    int data_weight_ptr = sampling_index * num_levels * num_point;\n    int data_loc_w_ptr = data_weight_ptr << 1;\n    const int grad_sampling_ptr = data_weight_ptr;\n    grad_sampling_loc += grad_sampling_ptr << 1;\n    grad_attn_weight += grad_sampling_ptr;\n    const int grad_weight_stride = 1;\n    const int grad_loc_stride = 2;\n    const int qid_stride = num_heads * channels;\n    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;\n\n    for (int l_col=0; l_col < num_levels; ++l_col)\n    {\n      const int level_start_id = data_level_start_index[l_col];\n      const int spatial_h_ptr = l_col << 1;\n      const int spatial_h = data_spatial_shapes[spatial_h_ptr];\n      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];\n      const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;\n      const scalar_t *data_value_ptr = data_value + value_ptr_offset;\n      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;\n\n      for (int p_col=0; p_col < num_point; ++p_col)\n      {\n        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];\n        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];\n        const scalar_t weight = data_attn_weight[data_weight_ptr];\n\n        const scalar_t h_im = loc_h * spatial_h - 0.5;\n        const scalar_t w_im = loc_w * spatial_w - 0.5;\n        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\n        {\n          ms_deform_attn_col2im_bilinear_gm(\n            data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,\n            top_grad, weight, grad_value_ptr, \n            grad_sampling_loc, grad_attn_weight);\n        }\n        data_weight_ptr += 1;\n        data_loc_w_ptr += 2;\n        grad_attn_weight += grad_weight_stride;\n        grad_sampling_loc += grad_loc_stride;\n      }\n    }\n  
}\n}\n\n\ntemplate <typename scalar_t>\nvoid ms_deformable_im2col_cuda(cudaStream_t stream,\n                              const scalar_t* data_value,\n                              const int64_t* data_spatial_shapes, \n                              const int64_t* data_level_start_index, \n                              const scalar_t* data_sampling_loc,\n                              const scalar_t* data_attn_weight,\n                              const int batch_size,\n                              const int spatial_size, \n                              const int num_heads, \n                              const int channels, \n                              const int num_levels, \n                              const int num_query,\n                              const int num_point,\n                              scalar_t* data_col)\n{\n  const int num_kernels = batch_size * num_query * num_heads * channels;\n  const int num_actual_kernels = batch_size * num_query * num_heads * channels;\n  const int num_threads = CUDA_NUM_THREADS;\n  ms_deformable_im2col_gpu_kernel<scalar_t>\n      <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n          0, stream>>>(\n      num_kernels, data_value, data_spatial_shapes, data_level_start_index, data_sampling_loc, data_attn_weight, \n      batch_size, spatial_size, num_heads, channels, num_levels, num_query, num_point, data_col);\n  \n  cudaError_t err = cudaGetLastError();\n  if (err != cudaSuccess)\n  {\n    printf(\"error in ms_deformable_im2col_cuda: %s\\n\", cudaGetErrorString(err));\n  }\n\n}\n\ntemplate <typename scalar_t>\nvoid ms_deformable_col2im_cuda(cudaStream_t stream,\n                              const scalar_t* grad_col,\n                              const scalar_t* data_value,\n                              const int64_t * data_spatial_shapes,\n                              const int64_t * data_level_start_index,\n                              const scalar_t * data_sampling_loc,\n                     
         const scalar_t * data_attn_weight,\n                              const int batch_size, \n                              const int spatial_size, \n                              const int num_heads,\n                              const int channels, \n                              const int num_levels,\n                              const int num_query,\n                              const int num_point, \n                              scalar_t* grad_value,\n                              scalar_t* grad_sampling_loc,\n                              scalar_t* grad_attn_weight)\n{\n  const int num_threads = (channels > CUDA_NUM_THREADS)?CUDA_NUM_THREADS:channels;\n  const int num_kernels = batch_size * num_query * num_heads * channels;\n  const int num_actual_kernels = batch_size * num_query * num_heads * channels;\n  if (channels > 1024)\n  {\n    if ((channels & 1023) == 0)\n    {\n      ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks<scalar_t>\n          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n              num_threads*3*sizeof(scalar_t), stream>>>(\n                        num_kernels, \n                        grad_col,\n                        data_value,\n                        data_spatial_shapes,\n                        data_level_start_index, \n                        data_sampling_loc,\n                        data_attn_weight,\n                        batch_size, \n                        spatial_size, \n                        num_heads,\n                        channels, \n                        num_levels,\n                        num_query,\n                        num_point,\n                        grad_value,\n                        grad_sampling_loc,\n                        grad_attn_weight);\n    }\n    else\n    {\n      ms_deformable_col2im_gpu_kernel_gm<scalar_t>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n      
                grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n    }\n  }\n  else{\n    switch(channels)\n    {\n      case 1:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 1>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 2:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 2>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                 
     batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 4:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 4>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 8:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 8>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      
grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 16:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 16>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 32:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 32>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 64:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 64>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n        
              num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 128:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 128>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 256:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 256>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n        
              batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 512:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 512>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n                      grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      case 1024:\n        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 1024>\n        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n            0, stream>>>(\n                      num_kernels, \n                      grad_col,\n                      data_value,\n                      data_spatial_shapes,\n                      data_level_start_index, \n                      data_sampling_loc,\n                      data_attn_weight,\n                      batch_size, \n                      spatial_size, \n                      num_heads,\n                      channels, \n                      num_levels,\n                      num_query,\n                      num_point,\n                      grad_value,\n      
                grad_sampling_loc,\n                      grad_attn_weight);\n        break;\n      default:\n        if (channels < 64)\n        {\n          ms_deformable_col2im_gpu_kernel_shm_reduce_v1<scalar_t>\n          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n              num_threads*3*sizeof(scalar_t), stream>>>(\n                        num_kernels, \n                        grad_col,\n                        data_value,\n                        data_spatial_shapes,\n                        data_level_start_index, \n                        data_sampling_loc,\n                        data_attn_weight,\n                        batch_size, \n                        spatial_size, \n                        num_heads,\n                        channels, \n                        num_levels,\n                        num_query,\n                        num_point,\n                        grad_value,\n                        grad_sampling_loc,\n                        grad_attn_weight);\n        }\n        else\n        {\n          ms_deformable_col2im_gpu_kernel_shm_reduce_v2<scalar_t>\n          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,\n              num_threads*3*sizeof(scalar_t), stream>>>(\n                        num_kernels, \n                        grad_col,\n                        data_value,\n                        data_spatial_shapes,\n                        data_level_start_index, \n                        data_sampling_loc,\n                        data_attn_weight,\n                        batch_size, \n                        spatial_size, \n                        num_heads,\n                        channels, \n                        num_levels,\n                        num_query,\n                        num_point,\n                        grad_value,\n                        grad_sampling_loc,\n                        grad_attn_weight);\n        }\n    }\n  }\n  cudaError_t err = cudaGetLastError();\n 
 if (err != cudaSuccess)\n  {\n    printf(\"error in ms_deformable_col2im_cuda: %s\\n\", cudaGetErrorString(err));\n  }\n\n}"
  },
  {
    "path": "models/ops/src/ms_deform_attn.h",
    "content": "/*!\n**************************************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************************************\n* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n**************************************************************************************************\n*/\n\n#pragma once\n\n#include \"cpu/ms_deform_attn_cpu.h\"\n\n#ifdef WITH_CUDA\n#include \"cuda/ms_deform_attn_cuda.h\"\n#endif\n\n\nat::Tensor\nms_deform_attn_forward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const int im2col_step)\n{\n    if (value.type().is_cuda())\n    {\n#ifdef WITH_CUDA\n        return ms_deform_attn_cuda_forward(\n            value, spatial_shapes, level_start_index, sampling_loc, attn_weight, im2col_step);\n#else\n        AT_ERROR(\"Not compiled with GPU support\");\n#endif\n    }\n    AT_ERROR(\"Not implemented on the CPU\");\n}\n\nstd::vector<at::Tensor>\nms_deform_attn_backward(\n    const at::Tensor &value, \n    const at::Tensor &spatial_shapes,\n    const at::Tensor &level_start_index,\n    const at::Tensor &sampling_loc,\n    const at::Tensor &attn_weight,\n    const at::Tensor &grad_output,\n    const int im2col_step)\n{\n    if (value.type().is_cuda())\n    {\n#ifdef WITH_CUDA\n        return ms_deform_attn_cuda_backward(\n            value, spatial_shapes, level_start_index, sampling_loc, attn_weight, grad_output, im2col_step);\n#else\n        AT_ERROR(\"Not compiled with GPU support\");\n#endif\n    }\n    AT_ERROR(\"Not implemented on the CPU\");\n}\n\n"
  },
  {
    "path": "models/ops/src/vision.cpp",
    "content": "/*!\n**************************************************************************************************\n* Deformable DETR\n* Copyright (c) 2020 SenseTime. All Rights Reserved.\n* Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n**************************************************************************************************\n* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n**************************************************************************************************\n*/\n\n#include \"ms_deform_attn.h\"\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"ms_deform_attn_forward\", &ms_deform_attn_forward, \"ms_deform_attn_forward\");\n  m.def(\"ms_deform_attn_backward\", &ms_deform_attn_backward, \"ms_deform_attn_backward\");\n}\n"
  },
  {
    "path": "models/ops/test.py",
    "content": "# ------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------------------\n# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0\n# ------------------------------------------------------------------------------------------------\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import division\n\nimport time\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import gradcheck\n\nfrom functions.ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch\n\n\nN, M, D = 1, 2, 2\nLq, L, P = 2, 2, 2\nshapes = torch.as_tensor([(6, 4), (3, 2)], dtype=torch.long).cuda()\nlevel_start_index = torch.cat((shapes.new_zeros((1, )), shapes.prod(1).cumsum(0)[:-1]))\nS = sum([(H*W).item() for H, W in shapes])\n\n\ntorch.manual_seed(3)\n\n\n@torch.no_grad()\ndef check_forward_equal_with_pytorch_double():\n    value = torch.rand(N, S, M, D).cuda() * 0.01\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\n    im2col_step = 2\n    output_pytorch = ms_deform_attn_core_pytorch(value.double(), shapes, sampling_locations.double(), attention_weights.double()).detach().cpu()\n    output_cuda = MSDeformAttnFunction.apply(value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step).detach().cpu()\n    fwdok = torch.allclose(output_cuda, output_pytorch)\n    max_abs_err = (output_cuda - output_pytorch).abs().max()\n    max_rel_err = ((output_cuda - output_pytorch).abs() / 
output_pytorch.abs()).max()\n\n    print(f'* {fwdok} check_forward_equal_with_pytorch_double: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')\n\n\n@torch.no_grad()\ndef check_forward_equal_with_pytorch_float():\n    value = torch.rand(N, S, M, D).cuda() * 0.01\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\n    im2col_step = 2\n    output_pytorch = ms_deform_attn_core_pytorch(value, shapes, sampling_locations, attention_weights).detach().cpu()\n    output_cuda = MSDeformAttnFunction.apply(value, shapes, level_start_index, sampling_locations, attention_weights, im2col_step).detach().cpu()\n    fwdok = torch.allclose(output_cuda, output_pytorch, rtol=1e-2, atol=1e-3)\n    max_abs_err = (output_cuda - output_pytorch).abs().max()\n    max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()\n\n    print(f'* {fwdok} check_forward_equal_with_pytorch_float: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')\n\n\ndef check_gradient_numerical(channels=4, grad_value=True, grad_sampling_loc=True, grad_attn_weight=True):\n\n    value = torch.rand(N, S, M, channels).cuda() * 0.01\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\n    im2col_step = 2\n    func = MSDeformAttnFunction.apply\n\n    value.requires_grad = grad_value\n    sampling_locations.requires_grad = grad_sampling_loc\n    attention_weights.requires_grad = grad_attn_weight\n\n    gradok = gradcheck(func, (value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step))\n\n    print(f'* {gradok} check_gradient_numerical(D={channels})')\n\n\nif __name__ == '__main__':\n    
check_forward_equal_with_pytorch_double()\n    check_forward_equal_with_pytorch_float()\n\n    for channels in [30, 32, 64, 71, 1025, 2048, 3096]:\n        check_gradient_numerical(channels, True, True, True)\n"
  },
  {
    "path": "models/position_encoding.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\n\"\"\"\nVarious positional encodings for the transformer.\n\"\"\"\nimport math\nimport torch\nfrom torch import nn\n\nfrom util.misc import NestedTensor\n\n\nclass PositionEmbeddingSine(nn.Module):\n    \"\"\"\n    This is a more standard version of the position embedding, very similar to the one\n    used by the Attention is all you need paper, generalized to work on images.\n    \"\"\"\n    def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):\n        super().__init__()\n        self.num_pos_feats = num_pos_feats\n        self.temperature = temperature\n        self.normalize = normalize\n        if scale is not None and normalize is False:\n            raise ValueError(\"normalize should be True if scale is passed\")\n        if scale is None:\n            scale = 2 * math.pi\n        self.scale = scale\n\n    def forward(self, tensor_list: NestedTensor):\n        x = tensor_list.tensors\n        mask = tensor_list.mask\n        assert mask is not None\n        not_mask = ~mask\n        y_embed = not_mask.cumsum(1, dtype=torch.float32)\n        x_embed = not_mask.cumsum(2, dtype=torch.float32)\n        if self.normalize:\n            eps = 1e-6\n   
         y_embed = (y_embed - 0.5) / (y_embed[:, -1:, :] + eps) * self.scale\n            x_embed = (x_embed - 0.5) / (x_embed[:, :, -1:] + eps) * self.scale\n\n        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)\n        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)\n\n        pos_x = x_embed[:, :, :, None] / dim_t\n        pos_y = y_embed[:, :, :, None] / dim_t\n        pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)\n        pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)\n        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)\n        return pos\n\n\nclass PositionEmbeddingLearned(nn.Module):\n    \"\"\"\n    Absolute pos embedding, learned.\n    \"\"\"\n    def __init__(self, num_pos_feats=256):\n        super().__init__()\n        self.row_embed = nn.Embedding(50, num_pos_feats)\n        self.col_embed = nn.Embedding(50, num_pos_feats)\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        nn.init.uniform_(self.row_embed.weight)\n        nn.init.uniform_(self.col_embed.weight)\n\n    def forward(self, tensor_list: NestedTensor):\n        x = tensor_list.tensors\n        h, w = x.shape[-2:]\n        i = torch.arange(w, device=x.device)\n        j = torch.arange(h, device=x.device)\n        x_emb = self.col_embed(i)\n        y_emb = self.row_embed(j)\n        pos = torch.cat([\n            x_emb.unsqueeze(0).repeat(h, 1, 1),\n            y_emb.unsqueeze(1).repeat(1, w, 1),\n        ], dim=-1).permute(2, 0, 1).unsqueeze(0).repeat(x.shape[0], 1, 1, 1)\n        return pos\n\n\ndef build_position_encoding(args):\n    N_steps = args.hidden_dim // 2\n    if args.position_embedding in ('v2', 'sine'):\n        # TODO find a better way of exposing other arguments\n        position_embedding = PositionEmbeddingSine(N_steps, normalize=True)\n    elif args.position_embedding in ('v3', 
'learned'):\n        position_embedding = PositionEmbeddingLearned(N_steps)\n    else:\n        raise ValueError(f\"not supported {args.position_embedding}\")\n\n    return position_embedding\n"
  },
  {
    "path": "models/segmentation.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------------------\n\n\n\"\"\"\nThis file provides the definition of the convolutional heads used to predict masks, as well as the losses\n\"\"\"\nimport io\nfrom collections import defaultdict\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom PIL import Image\n\nimport util.box_ops as box_ops\nfrom util.misc import NestedTensor, interpolate, nested_tensor_from_tensor_list\n\ntry:\n    from panopticapi.utils import id2rgb, rgb2id\nexcept ImportError:\n    pass\n\n\nclass DETRsegm(nn.Module):\n    def __init__(self, detr, freeze_detr=False):\n        super().__init__()\n        self.detr = detr\n\n        if freeze_detr:\n            for p in self.parameters():\n                p.requires_grad_(False)\n\n        hidden_dim, nheads = detr.transformer.d_model, detr.transformer.nhead\n        self.bbox_attention = MHAttentionMap(hidden_dim, hidden_dim, nheads, dropout=0)\n        self.mask_head = MaskHeadSmallConv(hidden_dim + nheads, [1024, 512, 256], hidden_dim)\n\n    def forward(self, samples: NestedTensor):\n        if not isinstance(samples, NestedTensor):\n            samples = nested_tensor_from_tensor_list(samples)\n        features, pos = self.detr.backbone(samples)\n\n     
   bs = features[-1].tensors.shape[0]\n\n        src, mask = features[-1].decompose()\n        src_proj = self.detr.input_proj(src)\n        hs, memory = self.detr.transformer(src_proj, mask, self.detr.query_embed.weight, pos[-1])\n\n        outputs_class = self.detr.class_embed(hs)\n        outputs_coord = self.detr.bbox_embed(hs).sigmoid()\n        out = {\"pred_logits\": outputs_class[-1], \"pred_boxes\": outputs_coord[-1]}\n        if self.detr.aux_loss:\n            out[\"aux_outputs\"] = [\n                {\"pred_logits\": a, \"pred_boxes\": b} for a, b in zip(outputs_class[:-1], outputs_coord[:-1])\n            ]\n\n        # FIXME h_boxes takes the last one computed, keep this in mind\n        bbox_mask = self.bbox_attention(hs[-1], memory, mask=mask)\n\n        seg_masks = self.mask_head(src_proj, bbox_mask, [features[2].tensors, features[1].tensors, features[0].tensors])\n        outputs_seg_masks = seg_masks.view(bs, self.detr.num_queries, seg_masks.shape[-2], seg_masks.shape[-1])\n\n        out[\"pred_masks\"] = outputs_seg_masks\n        return out\n\n\nclass MaskHeadSmallConv(nn.Module):\n    \"\"\"\n    Simple convolutional head, using group norm.\n    Upsampling is done using a FPN approach\n    \"\"\"\n\n    def __init__(self, dim, fpn_dims, context_dim):\n        super().__init__()\n\n        inter_dims = [dim, context_dim // 2, context_dim // 4, context_dim // 8, context_dim // 16, context_dim // 64]\n        self.lay1 = torch.nn.Conv2d(dim, dim, 3, padding=1)\n        self.gn1 = torch.nn.GroupNorm(8, dim)\n        self.lay2 = torch.nn.Conv2d(dim, inter_dims[1], 3, padding=1)\n        self.gn2 = torch.nn.GroupNorm(8, inter_dims[1])\n        self.lay3 = torch.nn.Conv2d(inter_dims[1], inter_dims[2], 3, padding=1)\n        self.gn3 = torch.nn.GroupNorm(8, inter_dims[2])\n        self.lay4 = torch.nn.Conv2d(inter_dims[2], inter_dims[3], 3, padding=1)\n        self.gn4 = torch.nn.GroupNorm(8, inter_dims[3])\n        self.lay5 = 
torch.nn.Conv2d(inter_dims[3], inter_dims[4], 3, padding=1)\n        self.gn5 = torch.nn.GroupNorm(8, inter_dims[4])\n        self.out_lay = torch.nn.Conv2d(inter_dims[4], 1, 3, padding=1)\n\n        self.dim = dim\n\n        self.adapter1 = torch.nn.Conv2d(fpn_dims[0], inter_dims[1], 1)\n        self.adapter2 = torch.nn.Conv2d(fpn_dims[1], inter_dims[2], 1)\n        self.adapter3 = torch.nn.Conv2d(fpn_dims[2], inter_dims[3], 1)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_uniform_(m.weight, a=1)\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, x, bbox_mask, fpns):\n        def expand(tensor, length):\n            return tensor.unsqueeze(1).repeat(1, int(length), 1, 1, 1).flatten(0, 1)\n\n        x = torch.cat([expand(x, bbox_mask.shape[1]), bbox_mask.flatten(0, 1)], 1)\n\n        x = self.lay1(x)\n        x = self.gn1(x)\n        x = F.relu(x)\n        x = self.lay2(x)\n        x = self.gn2(x)\n        x = F.relu(x)\n\n        cur_fpn = self.adapter1(fpns[0])\n        if cur_fpn.size(0) != x.size(0):\n            cur_fpn = expand(cur_fpn, x.size(0) / cur_fpn.size(0))\n        x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode=\"nearest\")\n        x = self.lay3(x)\n        x = self.gn3(x)\n        x = F.relu(x)\n\n        cur_fpn = self.adapter2(fpns[1])\n        if cur_fpn.size(0) != x.size(0):\n            cur_fpn = expand(cur_fpn, x.size(0) / cur_fpn.size(0))\n        x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode=\"nearest\")\n        x = self.lay4(x)\n        x = self.gn4(x)\n        x = F.relu(x)\n\n        cur_fpn = self.adapter3(fpns[2])\n        if cur_fpn.size(0) != x.size(0):\n            cur_fpn = expand(cur_fpn, x.size(0) / cur_fpn.size(0))\n        x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode=\"nearest\")\n        x = self.lay5(x)\n        x = self.gn5(x)\n        x = F.relu(x)\n\n        x = self.out_lay(x)\n        return 
x\n\n\nclass MHAttentionMap(nn.Module):\n    \"\"\"This is a 2D attention module, which only returns the attention softmax (no multiplication by value)\"\"\"\n\n    def __init__(self, query_dim, hidden_dim, num_heads, dropout=0, bias=True):\n        super().__init__()\n        self.num_heads = num_heads\n        self.hidden_dim = hidden_dim\n        self.dropout = nn.Dropout(dropout)\n\n        self.q_linear = nn.Linear(query_dim, hidden_dim, bias=bias)\n        self.k_linear = nn.Linear(query_dim, hidden_dim, bias=bias)\n\n        nn.init.zeros_(self.k_linear.bias)\n        nn.init.zeros_(self.q_linear.bias)\n        nn.init.xavier_uniform_(self.k_linear.weight)\n        nn.init.xavier_uniform_(self.q_linear.weight)\n        self.normalize_fact = float(hidden_dim / self.num_heads) ** -0.5\n\n    def forward(self, q, k, mask=None):\n        q = self.q_linear(q)\n        k = F.conv2d(k, self.k_linear.weight.unsqueeze(-1).unsqueeze(-1), self.k_linear.bias)\n        qh = q.view(q.shape[0], q.shape[1], self.num_heads, self.hidden_dim // self.num_heads)\n        kh = k.view(k.shape[0], self.num_heads, self.hidden_dim // self.num_heads, k.shape[-2], k.shape[-1])\n        weights = torch.einsum(\"bqnc,bnchw->bqnhw\", qh * self.normalize_fact, kh)\n\n        if mask is not None:\n            weights.masked_fill_(mask.unsqueeze(1).unsqueeze(1), float(\"-inf\"))\n        weights = F.softmax(weights.flatten(2), dim=-1).view_as(weights)\n        weights = self.dropout(weights)\n        return weights\n\n\ndef dice_loss(inputs, targets, num_boxes):\n    \"\"\"\n    Compute the DICE loss, similar to generalized IOU for masks\n    Args:\n        inputs: A float tensor of arbitrary shape.\n                The predictions for each example.\n        targets: A float tensor with the same shape as inputs. 
Stores the binary\n                 classification label for each element in inputs\n                (0 for the negative class and 1 for the positive class).\n    \"\"\"\n    inputs = inputs.sigmoid()\n    inputs = inputs.flatten(1)\n    numerator = 2 * (inputs * targets).sum(1)\n    denominator = inputs.sum(-1) + targets.sum(-1)\n    loss = 1 - (numerator + 1) / (denominator + 1)\n    return loss.sum() / num_boxes\n\n\ndef sigmoid_focal_loss(inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2, idx=None):\n    \"\"\"\n    Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.\n    Args:\n        inputs: A float tensor of arbitrary shape.\n                The predictions for each example.\n        targets: A float tensor with the same shape as inputs. Stores the binary\n                 classification label for each element in inputs\n                (0 for the negative class and 1 for the positive class).\n        alpha: (optional) Weighting factor in range (0,1) to balance\n                positive vs negative examples. 
Default: 0.25 (set to -1 for no weighting).\n        gamma: Exponent of the modulating factor (1 - p_t) to\n               balance easy vs hard examples.\n    Returns:\n        Loss tensor\n    \"\"\"\n    prob = inputs.sigmoid()\n    ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction=\"none\")\n    p_t = prob * targets + (1 - prob) * (1 - targets)\n    loss = ce_loss * ((1 - p_t) ** gamma)\n\n    if alpha >= 0:\n        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)\n        loss = alpha_t * loss\n    if idx is not None:\n        return loss[idx].mean(1).sum() / num_boxes\n    return loss.mean(1).sum() / num_boxes\n\n\nclass PostProcessSegm(nn.Module):\n    def __init__(self, threshold=0.5):\n        super().__init__()\n        self.threshold = threshold\n\n    @torch.no_grad()\n    def forward(self, results, outputs, orig_target_sizes, max_target_sizes):\n        assert len(orig_target_sizes) == len(max_target_sizes)\n        max_h, max_w = max_target_sizes.max(0)[0].tolist()\n        outputs_masks = outputs[\"pred_masks\"].squeeze(2)\n        outputs_masks = F.interpolate(outputs_masks, size=(max_h, max_w), mode=\"bilinear\", align_corners=False)\n        outputs_masks = (outputs_masks.sigmoid() > self.threshold).cpu()\n\n        for i, (cur_mask, t, tt) in enumerate(zip(outputs_masks, max_target_sizes, orig_target_sizes)):\n            img_h, img_w = t[0], t[1]\n            results[i][\"masks\"] = cur_mask[:, :img_h, :img_w].unsqueeze(1)\n            results[i][\"masks\"] = F.interpolate(\n                results[i][\"masks\"].float(), size=tuple(tt.tolist()), mode=\"nearest\"\n            ).byte()\n\n        return results\n\n\nclass PostProcessPanoptic(nn.Module):\n    \"\"\"This class converts the output of the model to the final panoptic result, in the format expected by the\n    coco panoptic API \"\"\"\n\n    def __init__(self, is_thing_map, threshold=0.85):\n        \"\"\"\n        Parameters:\n           is_thing_map: This is a dict whose keys 
are the class ids, and the values a boolean indicating whether\n                          the class is a thing (True) or a stuff (False) class\n           threshold: confidence threshold: segments with confidence lower than this will be deleted\n        \"\"\"\n        super().__init__()\n        self.threshold = threshold\n        self.is_thing_map = is_thing_map\n\n    def forward(self, outputs, processed_sizes, target_sizes=None):\n        \"\"\" This function computes the panoptic prediction from the model's predictions.\n        Parameters:\n            outputs: This is a dict coming directly from the model. See the model doc for the content.\n            processed_sizes: This is a list of tuples (or torch tensors) of sizes of the images that were passed to the\n                             model, i.e. the size after data augmentation but before batching.\n            target_sizes: This is a list of tuples (or torch tensors) corresponding to the requested final size\n                          of each prediction. 
If left to None, it will default to the processed_sizes\n            \"\"\"\n        if target_sizes is None:\n            target_sizes = processed_sizes\n        assert len(processed_sizes) == len(target_sizes)\n        out_logits, raw_masks, raw_boxes = outputs[\"pred_logits\"], outputs[\"pred_masks\"], outputs[\"pred_boxes\"]\n        assert len(out_logits) == len(raw_masks) == len(target_sizes)\n        preds = []\n\n        def to_tuple(tup):\n            if isinstance(tup, tuple):\n                return tup\n            return tuple(tup.cpu().tolist())\n\n        for cur_logits, cur_masks, cur_boxes, size, target_size in zip(\n            out_logits, raw_masks, raw_boxes, processed_sizes, target_sizes\n        ):\n            # we filter empty queries and detections below threshold\n            scores, labels = cur_logits.softmax(-1).max(-1)\n            keep = labels.ne(outputs[\"pred_logits\"].shape[-1] - 1) & (scores > self.threshold)\n            cur_scores = scores[keep]\n            cur_classes = labels[keep]\n            cur_masks = cur_masks[keep]\n            cur_masks = interpolate(cur_masks[None], to_tuple(size), mode=\"bilinear\").squeeze(0)\n            cur_boxes = box_ops.box_cxcywh_to_xyxy(cur_boxes[keep])\n\n            h, w = cur_masks.shape[-2:]\n            assert len(cur_boxes) == len(cur_classes)\n\n            # It may be that we have several predicted masks for the same stuff class.\n            # In the following, we track the list of mask ids for each stuff class (they are merged later on)\n            cur_masks = cur_masks.flatten(1)\n            stuff_equiv_classes = defaultdict(lambda: [])\n            for k, label in enumerate(cur_classes):\n                if not self.is_thing_map[label.item()]:\n                    stuff_equiv_classes[label.item()].append(k)\n\n            def get_ids_area(masks, scores, dedup=False):\n                # This helper 
function creates the final panoptic segmentation image\n                # It also returns the area of the masks that appear on the image\n\n                m_id = masks.transpose(0, 1).softmax(-1)\n\n                if m_id.shape[-1] == 0:\n                    # We didn't detect any mask :(\n                    m_id = torch.zeros((h, w), dtype=torch.long, device=m_id.device)\n                else:\n                    m_id = m_id.argmax(-1).view(h, w)\n\n                if dedup:\n                    # Merge the masks corresponding to the same stuff class\n                    for equiv in stuff_equiv_classes.values():\n                        if len(equiv) > 1:\n                            for eq_id in equiv:\n                                m_id.masked_fill_(m_id.eq(eq_id), equiv[0])\n\n                final_h, final_w = to_tuple(target_size)\n\n                seg_img = Image.fromarray(id2rgb(m_id.view(h, w).cpu().numpy()))\n                seg_img = seg_img.resize(size=(final_w, final_h), resample=Image.NEAREST)\n\n                np_seg_img = (\n                    torch.ByteTensor(torch.ByteStorage.from_buffer(seg_img.tobytes())).view(final_h, final_w, 3).numpy()\n                )\n                m_id = torch.from_numpy(rgb2id(np_seg_img))\n\n                area = []\n                for i in range(len(scores)):\n                    area.append(m_id.eq(i).sum().item())\n                return area, seg_img\n\n            area, seg_img = get_ids_area(cur_masks, cur_scores, dedup=True)\n            if cur_classes.numel() > 0:\n                # We now filter empty masks as long as we find some\n                while True:\n                    filtered_small = torch.as_tensor(\n                        [area[i] <= 4 for i, c in enumerate(cur_classes)], dtype=torch.bool, device=keep.device\n                    )\n                    if filtered_small.any().item():\n                        cur_scores = cur_scores[~filtered_small]\n                        cur_classes = cur_classes[~filtered_small]\n                        cur_masks = cur_masks[~filtered_small]\n                        area, seg_img = get_ids_area(cur_masks, cur_scores)\n                    else:\n                        break\n\n            else:\n                cur_classes = torch.ones(1, dtype=torch.long, device=cur_classes.device)\n\n            segments_info = []\n            for i, a in enumerate(area):\n                cat = cur_classes[i].item()\n                segments_info.append({\"id\": i, \"isthing\": self.is_thing_map[cat], \"category_id\": cat, \"area\": a})\n            del cur_classes\n\n            with io.BytesIO() as out:\n                seg_img.save(out, format=\"PNG\")\n                predictions = {\"png_string\": out.getvalue(), \"segments_info\": segments_info}\n            preds.append(predictions)\n        return preds\n"
  },
  {
    "path": "models/swin_transformer/__init__.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n\n\nfrom .build import build_model\n"
  },
  {
    "path": "models/swin_transformer/build.py",
    "content": "# ------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------\n\n\nfrom collections import abc, OrderedDict\nimport os\nimport yaml\n\nfrom .swin_transformer import SwinTransformer\nfrom .config import Config\n\nimport torch\n\n\nCONFIG_MAP = {\n    \"swin-t\": \"models/swin_transformer/configs/swin_tiny_patch4_window7_224.yaml\",\n    \"swin-s\": \"models/swin_transformer/configs/swin_small_patch4_window7_224.yaml\",\n    \"swin-b\": \"models/swin_transformer/configs/swin_base_patch4_window7_224.yaml\",\n    \"swin-l\": \"models/swin_transformer/configs/swin_large_patch4_window7_224.yaml\",\n}\n\n\nCHECKPOINT_MAP = {\n    \"swin-t\": \"/data/public/rw/team-autolearn/pretrainedmodels/swin/swin_tiny_patch4_window7_224.pth\",\n}\n\n\ndef build_model(name, out_indices, frozen_stages, pretrained):\n    config_file = CONFIG_MAP[name]\n    config = load_config_yaml(config_file)\n    config = Config(config)\n    config.freeze()\n    \n    model_type = config.MODEL.TYPE\n    if model_type == 'swin':\n        model = SwinTransformer(pretrain_img_size=config.DATA.IMG_SIZE,\n                                patch_size=config.MODEL.SWIN.PATCH_SIZE,\n                                in_chans=config.MODEL.SWIN.IN_CHANS,\n                                embed_dim=config.MODEL.SWIN.EMBED_DIM,\n                                depths=config.MODEL.SWIN.DEPTHS,\n                                num_heads=config.MODEL.SWIN.NUM_HEADS,\n                                window_size=config.MODEL.SWIN.WINDOW_SIZE,\n                                mlp_ratio=config.MODEL.SWIN.MLP_RATIO,\n                                qkv_bias=config.MODEL.SWIN.QKV_BIAS,\n                                qk_scale=config.MODEL.SWIN.QK_SCALE,\n              
                  drop_rate=config.MODEL.DROP_RATE,\n                                drop_path_rate=config.MODEL.DROP_PATH_RATE,\n                                ape=config.MODEL.SWIN.APE,\n                                patch_norm=config.MODEL.SWIN.PATCH_NORM,\n                                use_checkpoint=config.TRAIN.USE_CHECKPOINT,\n                                out_indices=out_indices,\n                                frozen_stages=frozen_stages)\n    else:\n        raise NotImplementedError(f\"Unknown model: {model_type}\")\n    \n    if pretrained:\n        ckpt_path = CHECKPOINT_MAP[name]\n        state_dict = torch.load(ckpt_path)\n        model.load_state_dict(state_dict['model'], strict=False)\n        \n    return model\n\n\ndef _update_dict(tar, src):\n    \"\"\"recursive dict update.\"\"\"\n    for k, v in src.items():\n        if isinstance(v, abc.Mapping):\n            tar[k] = _update_dict(tar.get(k, {}), v)\n        else:\n            tar[k] = v\n    return tar\n\n\ndef load_config_yaml(cfg_file, config=None):\n    if config is None:\n        config = OrderedDict()\n    \n    with open(cfg_file, 'r') as f:\n        config_src = yaml.load(f, Loader=yaml.FullLoader)\n\n    for cfg in config_src.setdefault('BASE', ['']):\n        if cfg:\n            load_config_yaml(\n                os.path.join(os.path.dirname(cfg_file), cfg), config\n            )\n    print('=> merge config from {}'.format(cfg_file))\n    _update_dict(config, config_src)\n    return config\n"
  },
  {
    "path": "models/swin_transformer/config.py",
    "content": "# ------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------\n\n\nimport collections\nfrom collections import OrderedDict\nfrom copy import deepcopy\nimport logging\nfrom os.path import basename, splitext\nfrom pprint import pformat\nfrom types import SimpleNamespace\nimport yaml\n\n\nclass Config(SimpleNamespace):\n    \"\"\"Dictionary-based but also dot-accessible configuration object, which will \n    rescue you from the messy brackets and quotation marks while accessing \n    nested dictionaries.\n        \n    As the usage example below, a value can be easily assigned to a new field \n    with hierarchies by using Python's usual assignment syntax. Due to the side \n    effects of this feature, it is safe that the user call '.freeze()' before \n    using the Config instance as a fixed configuration. 
Otherwise, even when \n    an attribute is accessed with an incorrect name, the AttributeError will be \n    silently swallowed and an empty Config returned, which could result in \n    unwanted consequences.\n    \n    Usage:\n        >>> cfg = Config()\n        >>> cfg.foo = 1\n        >>> cfg.bar.baz = 2\n        >>> cfg['bar']['baz'] == cfg.bar.baz\n        True\n        >>> cfg.pprint()\n        ---\n        foo: 1\n        bar:\n            baz: 2\n        ...\n        >>> cfg.freeze()\n        >>> cfg.new = 3\n        RuntimeError: Can't set new attribute after being freezed!\n            \n    \"\"\"\n    def __init__(self, _dict=None, **kwargs):\n        super().__init__(**kwargs)\n        self._freezed = False\n        self._order = list()\n        if _dict is not None:\n            self._set_with_nested_dict(_dict)\n\n    def _set_with_nested_dict(self, _dict):\n        for key, value in _dict.items():\n            if isinstance(value, dict):\n                self.__setattr__(key, Config(value))\n            else:\n                self.__setattr__(key, value)\n                self._order.append(key)\n                \n    @property\n    def freezed(self):\n        return self._freezed\n                \n    @classmethod\n    def from_yaml(cls, yaml_file):\n        \"\"\"Initialize configuration with a YAML file.\"\"\"\n        return cls(OrderedDict(yaml.load(open(yaml_file, \"r\"), \n                                         Loader=yaml.FullLoader)))\n\n    def __repr__(self):\n        return 'Config' + self.to_dict().__repr__()\n\n    def __getitem__(self, item):\n        return self.__getattr__(item)\n\n    def __getattr__(self, item):\n        try:\n            return self.__getattribute__(item)\n        except AttributeError as e:\n            if self._freezed:\n                raise AttributeError(f\"Can't find the field: {item}\") from e\n            else:\n                # if there's no attribute with the given name, \n                # make a 
new one and assign an empty config. \n                self.__setattr__(item, Config())\n                return self.__getattribute__(item)\n        \n    def __setattr__(self, item, value):\n        if item != '_freezed' and self.__dict__['_freezed']:\n            raise RuntimeError(\"Can't set new attribute after being freezed!\")\n        super().__setattr__(item, value)\n\n    def __bool__(self):\n        return len([k for k in self.to_dict().keys() \n                    if not k.startswith('_')]) > 0\n\n    def __len__(self):\n        return len(self.to_dict())\n\n    def __getstate__(self):\n        return self.to_dict()\n\n    def __setstate__(self, state):\n        self._set_with_nested_dict(state)\n\n    def __contains__(self, item):\n        return self.to_dict().__contains__(item)\n\n    def __deepcopy__(self, memodict={}):\n        return Config(_dict=deepcopy(self.to_dict()))\n\n    def __iter__(self):\n        # for iterable unpacking\n        return self.to_dict().__iter__()\n    \n    def pformat(self):\n        return yaml.dump(self.to_dict(), indent=4, sort_keys=False,\n                         explicit_start=True, explicit_end=True)\n                                        \n    def pprint(self):\n        return print(self.pformat())\n    \n    def freeze(self):\n        self._freezed = True\n        for value in self.__dict__.values():\n            if isinstance(value, Config):\n                value.freeze()\n        \n        return self\n        \n    def defrost(self):\n        self._freezed = False\n        for value in self.__dict__.values():\n            if isinstance(value, Config):\n                value.defrost()\n        return self\n\n    def get(self, *args, **kwargs):\n        return self.to_dict().get(*args, **kwargs)\n\n    def keys(self):\n        return self.to_dict().keys()\n\n    def values(self):\n        return self.to_dict().values()\n\n    def items(self):\n        return self.to_dict().items()\n\n    def clone(self):\n    
    return self.__deepcopy__()\n\n    def update(self, dict_, delimiter='/'):\n        for k, v in dict_.items():\n            self._update(k, v, delimiter)\n\n    def _update(self, key, value, delimiter='/'):\n        obj = self\n        keys = key.split(delimiter)\n        for k in keys[:-1]:\n            obj = obj.__getattr__(k)\n        obj.__setattr__(keys[-1], value)\n\n    def to_dict(self):\n        out_dict = OrderedDict()\n        for key, value in self.__dict__.items():\n            if isinstance(value, Config):\n                out_dict[key] = value.to_dict()\n            else:\n                if not key.startswith('_'):\n                    out_dict[key] = value\n        return dict(out_dict)\n"
  },
  {
    "path": "models/swin_transformer/configs/default.yaml",
    "content": "DATA:\n  IMG_SIZE: 224\nTRAIN:\n  USE_CHECKPOINT: false\nMODEL:\n  SWIN:\n    APE: false\n    DEPTHS: [2, 2, 6, 2]\n    EMBED_DIM: 96\n    IN_CHANS: 3\n    MLP_RATIO: 4.0\n    NUM_HEADS: [3, 6, 12, 24]\n    PATCH_NORM: true\n    PATCH_SIZE: 4\n    QKV_BIAS: true\n    QK_SCALE: null\n    WINDOW_SIZE: 7\n  DROP_RATE: 0.0\n  DROP_PATH_RATE: 0.1\n  NUM_CLASSES: 1000\n"
  },
  {
    "path": "models/swin_transformer/configs/swin_base_patch4_window7_224.yaml",
    "content": "BASE: ['default.yaml']\nMODEL:\n  TYPE: swin\n  NAME: swin_base_patch4_window7_224\n  DROP_PATH_RATE: 0.5\n  SWIN:\n    EMBED_DIM: 128\n    DEPTHS: [ 2, 2, 18, 2 ]\n    NUM_HEADS: [ 4, 8, 16, 32 ]\n    WINDOW_SIZE: 7\n"
  },
  {
    "path": "models/swin_transformer/configs/swin_large_patch4_window7_224.yaml",
    "content": "BASE: ['default.yaml']\nMODEL:\n  TYPE: swin\n  NAME: swin_large_patch4_window7_224\n  SWIN:\n    EMBED_DIM: 192\n    DEPTHS: [ 2, 2, 18, 2 ]\n    NUM_HEADS: [ 6, 12, 24, 48 ]\n    WINDOW_SIZE: 7\n"
  },
  {
    "path": "models/swin_transformer/configs/swin_small_patch4_window7_224.yaml",
    "content": "BASE: ['default.yaml']\nMODEL:\n  TYPE: swin\n  NAME: swin_small_patch4_window7_224\n  DROP_PATH_RATE: 0.3\n  SWIN:\n    EMBED_DIM: 96\n    DEPTHS: [ 2, 2, 18, 2 ]\n    NUM_HEADS: [ 3, 6, 12, 24 ]\n    WINDOW_SIZE: 7\n"
  },
  {
    "path": "models/swin_transformer/configs/swin_tiny_patch4_window7_224.yaml",
    "content": "BASE: ['default.yaml']\nMODEL:\n  TYPE: swin\n  NAME: swin_tiny_patch4_window7_224\n  DROP_PATH_RATE: 0.2\n  SWIN:\n    EMBED_DIM: 96\n    DEPTHS: [ 2, 2, 6, 2 ]\n    NUM_HEADS: [ 3, 6, 12, 24 ]\n    WINDOW_SIZE: 7\n"
  },
  {
    "path": "models/swin_transformer/swin_transformer.py",
    "content": "# ------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------\n# Modified from Swin Transformer (https://github.com/microsoft/Swin-Transformer)\n# Copyright (c) 2021 Microsoft. All Rights Reserved.\n# Written by Ze Liu\n# ------------------------------------------------------------------------------\n\n\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.utils.checkpoint as checkpoint\nimport torch.nn.functional as F\nfrom timm.models.layers import DropPath, to_2tuple, trunc_normal_\n\n\nclass Mlp(nn.Module):\n    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):\n        super().__init__()\n        out_features = out_features or in_features\n        hidden_features = hidden_features or in_features\n        self.fc1 = nn.Linear(in_features, hidden_features)\n        self.act = act_layer()\n        self.fc2 = nn.Linear(hidden_features, out_features)\n        self.drop = nn.Dropout(drop)\n\n    def forward(self, x):\n        x = self.fc1(x)\n        x = self.act(x)\n        x = self.drop(x)\n        x = self.fc2(x)\n        x = self.drop(x)\n        return x\n\n\ndef window_partition(x, window_size):\n    \"\"\"\n    Args:\n        x: (B, H, W, C)\n        window_size (int): window size\n\n    Returns:\n        windows: (num_windows*B, window_size, window_size, C)\n    \"\"\"\n    B, H, W, C = x.shape\n    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)\n    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)\n    return windows\n\n\ndef window_reverse(windows, window_size, H, W):\n    \"\"\"\n    Args:\n        windows: (num_windows*B, window_size, window_size, C)\n     
   window_size (int): Window size\n        H (int): Height of image\n        W (int): Width of image\n\n    Returns:\n        x: (B, H, W, C)\n    \"\"\"\n    B = int(windows.shape[0] / (float(H) * float(W) / window_size / window_size))\n    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)\n    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)\n    return x\n\n\nclass WindowAttention(nn.Module):\n    \"\"\" Window based multi-head self attention (W-MSA) module with relative position bias.\n    It supports both of shifted and non-shifted window.\n    Args:\n        dim (int): Number of input channels.\n        window_size (tuple[int]): The height and width of the window.\n        num_heads (int): Number of attention heads.\n        qkv_bias (bool, optional):  If True, add a learnable bias to query, key, value. Default: True\n        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set\n        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0\n        proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0\n    \"\"\"\n\n    def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):\n\n        super().__init__()\n        self.dim = dim\n        self.window_size = window_size  # Wh, Ww\n        self.num_heads = num_heads\n        head_dim = dim // num_heads\n        self.scale = qk_scale or head_dim ** -0.5\n\n        # define a parameter table of relative position bias\n        self.relative_position_bias_table = nn.Parameter(\n            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # 2*Wh-1 * 2*Ww-1, nH\n\n        # get pair-wise relative position index for each token inside the window\n        coords_h = torch.arange(self.window_size[0])\n        coords_w = torch.arange(self.window_size[1])\n        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww\n        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww\n        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww\n        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2\n        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0\n        relative_coords[:, :, 1] += self.window_size[1] - 1\n        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1\n        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww\n        self.register_buffer(\"relative_position_index\", relative_position_index)\n\n        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)\n        self.attn_drop = nn.Dropout(attn_drop)\n        self.proj = nn.Linear(dim, dim)\n        self.proj_drop = nn.Dropout(proj_drop)\n\n        trunc_normal_(self.relative_position_bias_table, std=.02)\n        self.softmax = nn.Softmax(dim=-1)\n\n    def forward(self, x, mask=None):\n        \"\"\" Forward function.\n        Args:\n            x: input features with shape of (num_windows*B, N, C)\n            mask: (0/-inf) 
mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None\n        \"\"\"\n        B_, N, C = x.shape\n        qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)\n        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)\n\n        q = q * self.scale\n        attn = (q @ k.transpose(-2, -1))\n\n        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(\n            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)  # Wh*Ww,Wh*Ww,nH\n        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, Wh*Ww, Wh*Ww\n        attn = attn + relative_position_bias.unsqueeze(0)\n\n        if mask is not None:\n            nW = mask.shape[0]\n            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)\n            attn = attn.view(-1, self.num_heads, N, N)\n            attn = self.softmax(attn)\n        else:\n            attn = self.softmax(attn)\n\n        attn = self.attn_drop(attn)\n\n        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)\n        x = self.proj(x)\n        x = self.proj_drop(x)\n        return x\n\n\nclass SwinTransformerBlock(nn.Module):\n    \"\"\" Swin Transformer Block.\n    Args:\n        dim (int): Number of input channels.\n        num_heads (int): Number of attention heads.\n        window_size (int): Window size.\n        shift_size (int): Shift size for SW-MSA.\n        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.\n        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True\n        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.\n        drop (float, optional): Dropout rate. Default: 0.0\n        attn_drop (float, optional): Attention dropout rate. 
Default: 0.0\n        drop_path (float, optional): Stochastic depth rate. Default: 0.0\n        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU\n        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm\n    \"\"\"\n\n    def __init__(self, dim, num_heads, window_size=7, shift_size=0,\n                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,\n                 act_layer=nn.GELU, norm_layer=nn.LayerNorm):\n        super().__init__()\n        self.dim = dim\n        self.num_heads = num_heads\n        self.window_size = window_size\n        self.shift_size = shift_size\n        self.mlp_ratio = mlp_ratio\n        assert 0 <= self.shift_size < self.window_size, \"shift_size must be in [0, window_size)\"\n\n        self.norm1 = norm_layer(dim)\n        self.attn = WindowAttention(\n            dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,\n            qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)\n\n        self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity()\n        self.norm2 = norm_layer(dim)\n        mlp_hidden_dim = int(dim * mlp_ratio)\n        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)\n\n        self.H = None\n        self.W = None\n\n    def forward(self, x, mask_matrix):\n        \"\"\" Forward function.\n        Args:\n            x: Input feature, tensor size (B, H*W, C).\n            H, W: Spatial resolution of the input feature.\n            mask_matrix: Attention mask for cyclic shift.\n        \"\"\"\n        B, L, C = x.shape\n        H, W = self.H, self.W\n        assert L == H * W, \"input feature has wrong size\"\n\n        shortcut = x\n        x = self.norm1(x)\n        x = x.view(B, H, W, C)\n\n        # pad feature maps to multiples of window size\n        pad_l = pad_t = 0\n        pad_r = (self.window_size - W % self.window_size) % self.window_size\n        pad_b = (self.window_size - H % self.window_size) % self.window_size\n        x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))\n        _, Hp, Wp, _ = x.shape\n\n        # cyclic shift\n        if self.shift_size > 0:\n            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))\n            attn_mask = mask_matrix\n        else:\n            shifted_x = x\n            attn_mask = None\n\n        # partition windows\n        x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C\n        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C\n\n        # W-MSA/SW-MSA\n        attn_windows = self.attn(x_windows, mask=attn_mask)  # nW*B, window_size*window_size, C\n\n        # merge windows\n        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)\n        shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp)  # B H' W' C\n\n        # reverse cyclic shift\n        if self.shift_size > 0:\n            x = 
torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))\n        else:\n            x = shifted_x\n\n        if pad_r > 0 or pad_b > 0:\n            x = x[:, :H, :W, :].contiguous()\n\n        x = x.view(B, H * W, C)\n\n        # FFN\n        x = shortcut + self.drop_path(x)\n        x = x + self.drop_path(self.mlp(self.norm2(x)))\n\n        return x\n\n\nclass PatchMerging(nn.Module):\n    \"\"\" Patch Merging Layer\n    Args:\n        dim (int): Number of input channels.\n        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm\n    \"\"\"\n    def __init__(self, dim, norm_layer=nn.LayerNorm):\n        super().__init__()\n        self.dim = dim\n        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)\n        self.norm = norm_layer(4 * dim)\n\n    def forward(self, x, H, W):\n        \"\"\" Forward function.\n        Args:\n            x: Input feature, tensor size (B, H*W, C).\n            H, W: Spatial resolution of the input feature.\n        \"\"\"\n        B, L, C = x.shape\n        assert L == H * W, \"input feature has wrong size\"\n\n        x = x.view(B, H, W, C)\n\n        # padding\n        pad_input = (H % 2 == 1) or (W % 2 == 1)\n        if pad_input:\n            x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))\n\n        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C\n        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C\n        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C\n        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C\n        x = torch.cat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C\n        x = x.view(B, -1, 4 * C)  # B H/2*W/2 4*C\n\n        x = self.norm(x)\n        x = self.reduction(x)\n\n        return x\n\n\nclass BasicLayer(nn.Module):\n    \"\"\" A basic Swin Transformer layer for one stage.\n    Args:\n        dim (int): Number of feature channels\n        depth (int): Depths of this stage.\n        num_heads (int): Number of attention head.\n        window_size (int): Local window size. 
Default: 7.\n        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.\n        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True\n        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.\n        drop (float, optional): Dropout rate. Default: 0.0\n        attn_drop (float, optional): Attention dropout rate. Default: 0.0\n        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0\n        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm\n        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None\n        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.\n    \"\"\"\n\n    def __init__(self,\n                 dim,\n                 depth,\n                 num_heads,\n                 window_size=7,\n                 mlp_ratio=4.,\n                 qkv_bias=True,\n                 qk_scale=None,\n                 drop=0.,\n                 attn_drop=0.,\n                 drop_path=0.,\n                 norm_layer=nn.LayerNorm,\n                 downsample=None,\n                 use_checkpoint=False):\n        super().__init__()\n        self.window_size = window_size\n        self.shift_size = window_size // 2\n        self.depth = depth\n        self.use_checkpoint = use_checkpoint\n\n        # build blocks\n        self.blocks = nn.ModuleList([\n            SwinTransformerBlock(\n                dim=dim,\n                num_heads=num_heads,\n                window_size=window_size,\n                shift_size=0 if (i % 2 == 0) else window_size // 2,\n                mlp_ratio=mlp_ratio,\n                qkv_bias=qkv_bias,\n                qk_scale=qk_scale,\n                drop=drop,\n                attn_drop=attn_drop,\n                drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,\n                
norm_layer=norm_layer)\n            for i in range(depth)])\n\n        # patch merging layer\n        if downsample is not None:\n            self.downsample = downsample(dim=dim, norm_layer=norm_layer)\n        else:\n            self.downsample = None\n\n    def forward(self, x, H, W):\n        \"\"\" Forward function.\n        Args:\n            x: Input feature, tensor size (B, H*W, C).\n            H, W: Spatial resolution of the input feature.\n        \"\"\"\n\n        # calculate attention mask for SW-MSA\n        Hp = int(np.ceil(float(H) / self.window_size)) * self.window_size\n        Wp = int(np.ceil(float(W) / self.window_size)) * self.window_size\n        img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device)  # 1 Hp Wp 1\n        h_slices = (slice(0, -self.window_size),\n                    slice(-self.window_size, -self.shift_size),\n                    slice(-self.shift_size, None))\n        w_slices = (slice(0, -self.window_size),\n                    slice(-self.window_size, -self.shift_size),\n                    slice(-self.shift_size, None))\n        cnt = 0\n        for h in h_slices:\n            for w in w_slices:\n                img_mask[:, h, w, :] = cnt\n                cnt += 1\n\n        mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1\n        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)\n        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)\n        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))\n\n        for blk in self.blocks:\n            blk.H, blk.W = H, W\n            if self.use_checkpoint:\n                x = checkpoint.checkpoint(blk, x, attn_mask)\n            else:\n                x = blk(x, attn_mask)\n        if self.downsample is not None:\n            x_down = self.downsample(x, H, W)\n            Wh, Ww = (H + 1) // 2, (W + 1) // 2\n            return x, H, W, 
x_down, Wh, Ww\n        else:\n            return x, H, W, x, H, W\n\n\nclass PatchEmbed(nn.Module):\n    \"\"\" Image to Patch Embedding\n    Args:\n        patch_size (int): Patch token size. Default: 4.\n        in_chans (int): Number of input image channels. Default: 3.\n        embed_dim (int): Number of linear projection output channels. Default: 96.\n        norm_layer (nn.Module, optional): Normalization layer. Default: None\n    \"\"\"\n\n    def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):\n        super().__init__()\n        patch_size = to_2tuple(patch_size)\n        self.patch_size = patch_size\n\n        self.in_chans = in_chans\n        self.embed_dim = embed_dim\n\n        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)\n        if norm_layer is not None:\n            self.norm = norm_layer(embed_dim)\n        else:\n            self.norm = None\n\n    def forward(self, x):\n        \"\"\"Forward function.\"\"\"\n        # padding\n        _, _, H, W = x.size()\n        if W % self.patch_size[1] != 0:\n            x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))\n        if H % self.patch_size[0] != 0:\n            x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))\n\n        x = self.proj(x)  # B C Wh Ww\n        if self.norm is not None:\n            Wh, Ww = x.size(2), x.size(3)\n            x = x.flatten(2).transpose(1, 2)\n            x = self.norm(x)\n            x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)\n\n        return x\n\n\nclass SwinTransformer(nn.Module):\n    \"\"\" Swin Transformer backbone.\n        A PyTorch impl of: `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`  -\n          https://arxiv.org/pdf/2103.14030\n    Args:\n        pretrain_img_size (int): Input image size for training the pretrained model,\n            used in absolute position embedding. 
Default: 224.\n        patch_size (int | tuple(int)): Patch size. Default: 4.\n        in_chans (int): Number of input image channels. Default: 3.\n        embed_dim (int): Number of linear projection output channels. Default: 96.\n        depths (tuple[int]): Depths of each Swin Transformer stage.\n        num_heads (tuple[int]): Number of attention heads for each stage.\n        window_size (int): Window size. Default: 7.\n        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.\n        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True\n        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.\n        drop_rate (float): Dropout rate.\n        attn_drop_rate (float): Attention dropout rate. Default: 0.\n        drop_path_rate (float): Stochastic depth rate. Default: 0.2.\n        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.\n        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.\n        patch_norm (bool): If True, add normalization after patch embedding. Default: True.\n        out_indices (Sequence[int]): Output from which stages.\n        frozen_stages (int): Stages to be frozen (stop grad and set eval mode).\n            -1 means not freezing any parameters.\n        use_checkpoint (bool): Whether to use checkpointing to save memory. 
Default: False.\n    \"\"\"\n\n    def __init__(self,\n                 pretrain_img_size=224,\n                 patch_size=4,\n                 in_chans=3,\n                 embed_dim=96,\n                 depths=[2, 2, 6, 2],\n                 num_heads=[3, 6, 12, 24],\n                 window_size=7,\n                 mlp_ratio=4.,\n                 qkv_bias=True,\n                 qk_scale=None,\n                 drop_rate=0.,\n                 attn_drop_rate=0.,\n                 drop_path_rate=0.2,\n                 norm_layer=nn.LayerNorm,\n                 ape=False,\n                 patch_norm=True,\n                 out_indices=(0, 1, 2, 3),\n                 frozen_stages=-1,\n                 use_checkpoint=False):\n        super().__init__()\n\n        self.pretrain_img_size = pretrain_img_size\n        self.num_layers = len(depths)\n        self.embed_dim = embed_dim\n        self.ape = ape\n        self.patch_norm = patch_norm\n        self.out_indices = out_indices\n        self.frozen_stages = frozen_stages\n\n        # split image into non-overlapping patches\n        self.patch_embed = PatchEmbed(\n            patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,\n            norm_layer=norm_layer if self.patch_norm else None)\n\n        # absolute position embedding\n        if self.ape:\n            pretrain_img_size = to_2tuple(pretrain_img_size)\n            patch_size = to_2tuple(patch_size)\n            patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]\n\n            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))\n            trunc_normal_(self.absolute_pos_embed, std=.02)\n\n        self.pos_drop = nn.Dropout(p=drop_rate)\n\n        # stochastic depth\n        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule\n\n        # build layers\n        self.layers = 
nn.ModuleList()\n        for i_layer in range(self.num_layers):\n            layer = BasicLayer(\n                dim=int(embed_dim * 2 ** i_layer),\n                depth=depths[i_layer],\n                num_heads=num_heads[i_layer],\n                window_size=window_size,\n                mlp_ratio=mlp_ratio,\n                qkv_bias=qkv_bias,\n                qk_scale=qk_scale,\n                drop=drop_rate,\n                attn_drop=attn_drop_rate,\n                drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],\n                norm_layer=norm_layer,\n                downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,\n                use_checkpoint=use_checkpoint)\n            self.layers.append(layer)\n\n        num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]\n        self.num_features = num_features\n\n        # add a norm layer for each output\n        for i_layer in out_indices:\n            layer = norm_layer(num_features[i_layer])\n            layer_name = f'norm{i_layer}'\n            self.add_module(layer_name, layer)\n\n        self.apply(self._init_weights)\n        self._freeze_stages()\n\n    def _freeze_stages(self):\n        if self.frozen_stages >= 0:\n            self.patch_embed.eval()\n            for param in self.patch_embed.parameters():\n                param.requires_grad = False\n\n        if self.frozen_stages >= 1 and self.ape:\n            self.absolute_pos_embed.requires_grad = False\n\n        if self.frozen_stages >= 2:\n            self.pos_drop.eval()\n            for i in range(0, self.frozen_stages - 1):\n                m = self.layers[i]\n                m.eval()\n                for param in m.parameters():\n                    param.requires_grad = False\n\n    def _init_weights(self, m):\n        if isinstance(m, nn.Linear):\n            trunc_normal_(m.weight, std=.02)\n            if isinstance(m, nn.Linear) and m.bias is not None:\n                
nn.init.constant_(m.bias, 0)\n        elif isinstance(m, nn.LayerNorm):\n            nn.init.constant_(m.bias, 0)\n            nn.init.constant_(m.weight, 1.0)\n\n    def forward(self, x):\n        \"\"\"Forward function.\"\"\"\n        x = self.patch_embed(x)\n\n        Wh, Ww = x.size(2), x.size(3)\n        if self.ape:\n            # interpolate the position embedding to the corresponding size\n            absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')\n            x = (x + absolute_pos_embed).flatten(2).transpose(1, 2)  # B Wh*Ww C\n        else:\n            x = x.flatten(2).transpose(1, 2)\n        x = self.pos_drop(x)\n\n        outs = {}\n        for i in range(self.num_layers):\n            layer = self.layers[i]\n            x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)\n\n            if i in self.out_indices:\n                norm_layer = getattr(self, f'norm{i}')\n                x_out = norm_layer(x_out)\n\n                out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()\n                outs[str(len(outs))] = out\n\n        return outs\n"
  },
  {
    "path": "requirements.txt",
    "content": "pycocotools\ntqdm\nscipy\ntimm\nfvcore\ntensorboard\n"
  },
  {
    "path": "tools/launch.py",
    "content": "# --------------------------------------------------------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# --------------------------------------------------------------------------------------------------------------------------\n# Modified from https://github.com/pytorch/pytorch/blob/173f224570017b4b1a3a1a13d0bff280a54d9cd9/torch/distributed/launch.py\n# --------------------------------------------------------------------------------------------------------------------------\n\nr\"\"\"\n`torch.distributed.launch` is a module that spawns up multiple distributed\ntraining processes on each of the training nodes.\nThe utility can be used for single-node distributed training, in which one or\nmore processes per node will be spawned. The utility can be used for either\nCPU training or GPU training. If the utility is used for GPU training,\neach distributed process will be operating on a single GPU. This can achieve\nwell-improved single-node training performance. It can also be used in\nmulti-node distributed training, by spawning up multiple processes on each node\nfor well-improved multi-node distributed training performance as well.\nThis will especially be beneficial for systems with multiple Infiniband\ninterfaces that have direct-GPU support, since all of them can be utilized for\naggregated communication bandwidth.\nIn both cases of single-node distributed training or multi-node distributed\ntraining, this utility will launch the given number of processes per node\n(``--nproc_per_node``). If used for GPU training, this number needs to be less\nthan or equal to the number of GPUs on the current system (``nproc_per_node``),\nand each process will be operating on a single GPU from *GPU 0 to\nGPU (nproc_per_node - 1)*.\n**How to use this module:**\n1. 
Single-Node multi-process distributed training\n::\n    >>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE\n               YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other\n               arguments of your training script)\n2. Multi-Node multi-process distributed training: (e.g. two nodes)\nNode 1: *(IP: 192.168.1.1, and has a free port: 1234)*\n::\n    >>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE\n               --nnodes=2 --node_rank=0 --master_addr=\"192.168.1.1\"\n               --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3\n               and all other arguments of your training script)\nNode 2:\n::\n    >>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE\n               --nnodes=2 --node_rank=1 --master_addr=\"192.168.1.1\"\n               --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3\n               and all other arguments of your training script)\n3. To look up what optional arguments this module offers:\n::\n    >>> python -m torch.distributed.launch --help\n**Important Notices:**\n1. This utility and multi-process distributed (single-node or\nmulti-node) GPU training currently only achieves the best performance using\nthe NCCL distributed backend. Thus NCCL backend is the recommended backend to\nuse for GPU training.\n2. In your training program, you must parse the command-line argument:\n``--local_rank=LOCAL_PROCESS_RANK``, which will be provided by this module.\nIf your training program uses GPUs, you should ensure that your code only\nruns on the GPU device of LOCAL_PROCESS_RANK. 
This can be done by:\nParsing the local_rank argument\n::\n    >>> import argparse\n    >>> parser = argparse.ArgumentParser()\n    >>> parser.add_argument(\"--local_rank\", type=int)\n    >>> args = parser.parse_args()\nSet your device to the local rank using either\n::\n    >>> torch.cuda.set_device(args.local_rank)  # before your code runs\nor\n::\n    >>> with torch.cuda.device(args.local_rank):\n    >>>    # your code to run\n3. In your training program, you are supposed to call the following function\nat the beginning to start the distributed backend. You need to make sure that\nthe init_method uses ``env://``, which is the only supported ``init_method``\nby this module.\n::\n    torch.distributed.init_process_group(backend='YOUR BACKEND',\n                                         init_method='env://')\n4. In your training program, you can either use regular distributed functions\nor use :func:`torch.nn.parallel.DistributedDataParallel` module. If your\ntraining program uses GPUs for training and you would like to use\n:func:`torch.nn.parallel.DistributedDataParallel` module,\nhere is how to configure it.\n::\n    model = torch.nn.parallel.DistributedDataParallel(model,\n                                                      device_ids=[args.local_rank],\n                                                      output_device=args.local_rank)\nPlease ensure that ``device_ids`` argument is set to be the only GPU device id\nthat your code will be operating on. This is generally the local rank of the\nprocess. In other words, the ``device_ids`` needs to be ``[args.local_rank]``,\nand ``output_device`` needs to be ``args.local_rank`` in order to use this\nutility.\n5. Another way is to pass ``local_rank`` to the subprocesses via the environment variable\n``LOCAL_RANK``. This behavior is enabled when you launch the script with\n``--use_env=True``. 
You must adjust the subprocess example above to replace\n``args.local_rank`` with ``os.environ['LOCAL_RANK']``; the launcher\nwill not pass ``--local_rank`` when you specify this flag.\n.. warning::\n    ``local_rank`` is NOT globally unique: it is only unique per process\n    on a machine.  Thus, don't use it to decide if you should, e.g.,\n    write to a networked filesystem.  See\n    https://github.com/pytorch/pytorch/issues/12042 for an example of\n    how things can go wrong if you don't do this correctly.\n\"\"\"\n\n\nimport sys\nimport subprocess\nimport os\nimport socket\nfrom argparse import ArgumentParser, REMAINDER\n\nimport torch\n\n\ndef parse_args():\n    \"\"\"\n    Helper function parsing the command line options\n    @retval ArgumentParser\n    \"\"\"\n    parser = ArgumentParser(description=\"PyTorch distributed training launch \"\n                                        \"helper utility that will spawn up \"\n                                        \"multiple distributed processes\")\n\n    # Optional arguments for the launch helper\n    parser.add_argument(\"--nnodes\", type=int, default=1,\n                        help=\"The number of nodes to use for distributed \"\n                             \"training\")\n    parser.add_argument(\"--node_rank\", type=int, default=0,\n                        help=\"The rank of the node for multi-node distributed \"\n                             \"training\")\n    parser.add_argument(\"--nproc_per_node\", type=int, default=1,\n                        help=\"The number of processes to launch on each node, \"\n                             \"for GPU training, this is recommended to be set \"\n                             \"to the number of GPUs in your system so that \"\n                             \"each process can be bound to a single GPU.\")\n    parser.add_argument(\"--master_addr\", default=\"127.0.0.1\", type=str,\n                        help=\"Master node (rank 0)'s address, should be either \"\n      
                       \"the IP address or the hostname of node 0, for \"\n                             \"single node multi-proc training, the \"\n                             \"--master_addr can simply be 127.0.0.1\")\n    parser.add_argument(\"--master_port\", default=29500, type=int,\n                        help=\"Master node (rank 0)'s free port that needs to \"\n                             \"be used for communication during distributed \"\n                             \"training\")\n\n    # positional\n    parser.add_argument(\"training_script\", type=str,\n                        help=\"The full path to the single GPU training \"\n                             \"program/script to be launched in parallel, \"\n                             \"followed by all the arguments for the \"\n                             \"training script\")\n\n    # rest from the training program\n    parser.add_argument('training_script_args', nargs=REMAINDER)\n    return parser.parse_args()\n\n\ndef main():\n    args = parse_args()\n\n    # world size in terms of number of processes\n    dist_world_size = args.nproc_per_node * args.nnodes\n\n    # set PyTorch distributed related environmental variables\n    current_env = os.environ.copy()\n    current_env[\"MASTER_ADDR\"] = args.master_addr\n    current_env[\"MASTER_PORT\"] = str(args.master_port)\n    current_env[\"WORLD_SIZE\"] = str(dist_world_size)\n\n    processes = []\n\n    for local_rank in range(0, args.nproc_per_node):\n        # each process's rank\n        dist_rank = args.nproc_per_node * args.node_rank + local_rank\n        current_env[\"RANK\"] = str(dist_rank)\n        current_env[\"LOCAL_RANK\"] = str(local_rank)\n\n        cmd = [args.training_script] + args.training_script_args\n\n        process = subprocess.Popen(cmd, env=current_env)\n        processes.append(process)\n\n    for process in processes:\n        process.wait()\n        if process.returncode != 0:\n            raise 
subprocess.CalledProcessError(returncode=process.returncode,\n                                                cmd=process.args)\n\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "tools/run_dist_launch.sh",
    "content": "#!/usr/bin/env bash\n# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n\nset -x\n\nGPUS=$1\nRUN_COMMAND=${@:2}\nif [ $GPUS -lt 8 ]; then\n    GPUS_PER_NODE=${GPUS_PER_NODE:-$GPUS}\nelse\n    GPUS_PER_NODE=${GPUS_PER_NODE:-8}\nfi\nMASTER_ADDR=${MASTER_ADDR:-\"127.0.0.1\"}\nMASTER_PORT=${MASTER_PORT:-\"29500\"}\nNODE_RANK=${NODE_RANK:-0}\n\nlet \"NNODES=GPUS/GPUS_PER_NODE\"\n\npython ./tools/launch.py \\\n    --nnodes ${NNODES} \\\n    --node_rank ${NODE_RANK} \\\n    --master_addr ${MASTER_ADDR} \\\n    --master_port ${MASTER_PORT} \\\n    --nproc_per_node ${GPUS_PER_NODE} \\\n    ${RUN_COMMAND}"
  },
  {
    "path": "util/__init__.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n"
  },
  {
    "path": "util/benchmark.py",
    "content": "from collections import defaultdict\nimport time\nfrom typing import Any, Counter, DefaultDict, Tuple, Dict, Optional\nimport warnings\n\nimport numpy as np\nimport torch\nfrom torch import nn\nimport tqdm\n\nfrom util.misc import nested_tensor_from_tensor_list\nfrom fvcore.nn import FlopCountAnalysis\nfrom fvcore.nn.jit_handles import Handle\n\n\n@torch.no_grad()\ndef measure_average_inference_time(model, inputs, num_iters=100, warm_iters=5):\n    ts = []\n    # note that warm-up iters. are excluded from the total iters.\n    for iter_ in tqdm.tqdm(range(warm_iters + num_iters)):\n        torch.cuda.synchronize()\n        t_ = time.perf_counter()\n        model(inputs)\n        torch.cuda.synchronize()\n        t = time.perf_counter() - t_\n        if iter_ >= warm_iters:\n            ts.append(t)\n    return sum(ts) / len(ts)\n\n\ndef python_ops_mode_for_deform_attn(model, ops_mode):\n    def change_ops_mode(module):\n        if hasattr(module, \"python_ops_for_test\"):\n            module.python_ops_for_test = ops_mode\n    model.apply(change_ops_mode)\n\n\n@torch.no_grad()\ndef compute_fps(model, dataset, num_iters=300, warm_iters=5, batch_size=4):\n    print(f\"computing fps.. (num_iters={num_iters}, \"\n          f\"warm_iters={warm_iters}, batch_size={batch_size})\")\n    assert num_iters > 0 and warm_iters >= 0 and batch_size > 0\n    model.cuda()\n    model.eval()\n    inputs = nested_tensor_from_tensor_list(\n        [dataset.__getitem__(0)[0].cuda() for _ in range(batch_size)])\n    t = measure_average_inference_time(model, inputs, num_iters, warm_iters)\n    model.train()\n    print(f\"FPS: {1.0 / t * batch_size}\")\n    return 1.0 / t * batch_size\n\n\n@torch.no_grad()\ndef compute_gflops(model, dataset, approximated=True):\n    print(f\"computing flops.. 
(approximated={approximated})\")\n    model.eval()\n    python_ops_mode_for_deform_attn(model, True)\n    if approximated:\n        # use just a single image to approximate the full computation\n        # the size of the image was found heuristically\n        images = [torch.randn((3, 850, 1040))]\n    else:\n        # full computation: get the first 100 images of COCO val2017\n        images = []\n        for idx in range(100):\n            img, _ = dataset[idx]\n            images.append(img)\n\n    gflops_list = []\n    imsize_list = []\n\n    for img in tqdm.tqdm(images):\n        inputs = [img.cuda()]\n        with warnings.catch_warnings():\n            warnings.filterwarnings(\"ignore\", category=RuntimeWarning)\n            res = flop_count_without_warnings(model, (inputs,))[0]\n        gflops = sum(res.values())\n        gflops_list.append(gflops)\n        imsize_list.append(list(img.shape))\n\n    if approximated:\n        print(\"The image size used for approximation: [3, 850, 1040]\")\n    else:\n        print(\"Average image size of first 100 images of COCO val2017: \"\n              f\"{np.array(imsize_list).mean(0)}\")\n\n    print(f\"GFLOPs: {np.array(gflops_list).mean()}\")\n    model.train()\n    python_ops_mode_for_deform_attn(model, False)\n    # return the mean over all measured images, matching the printed value\n    return np.array(gflops_list).mean()\n\n\ndef flop_count_without_warnings(\n    model: nn.Module,\n    inputs: Tuple[Any, ...],\n    supported_ops: Optional[Dict[str, Handle]] = None,\n) -> Tuple[DefaultDict[str, float], Counter[str]]:\n    \"\"\"copied and modified from fvcore.nn.flop_count.py\n\n    Given a model and an input to the model, compute the per-operator Gflops\n    of the given model.\n    Args:\n        model (nn.Module): The model to compute flop counts.\n        inputs (tuple): Inputs that are passed to `model` to count flops.\n            Inputs need to be in a tuple.\n        supported_ops (dict(str, Callable) or None): provide additional\n            handlers for extra ops, or 
overwrite the existing handlers for\n            convolution and matmul and einsum. The key is operator name and the value\n            is a function that takes (inputs, outputs) of the op. We count\n            one Multiply-Add as one FLOP.\n    Returns:\n        tuple[defaultdict, Counter]: A dictionary that records the number of\n            gflops for each operation and a Counter that records the number of\n            unsupported operations.\n    \"\"\"\n    if supported_ops is None:\n        supported_ops = {}\n    flop_counter = FlopCountAnalysis(model, inputs).set_op_handle(**supported_ops)\n    flop_counter.unsupported_ops_warnings(False)\n    flop_counter.uncalled_modules_warnings(False)\n    flop_counter.tracer_warnings(\"no_tracer_warning\")\n    giga_flops = defaultdict(float)\n    for op, flop in flop_counter.by_operator().items():\n        giga_flops[op] = flop / 1e9\n    return giga_flops, flop_counter.unsupported_ops() \n"
  },
  {
    "path": "util/box_ops.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nUtilities for bounding box manipulation and GIoU.\n\"\"\"\nimport torch\nfrom torchvision.ops.boxes import box_area\n\n\ndef box_cxcywh_to_xyxy(x):\n    x_c, y_c, w, h = x.unbind(-1)\n    b = [(x_c - 0.5 * w), (y_c - 0.5 * h),\n         (x_c + 0.5 * w), (y_c + 0.5 * h)]\n    return torch.stack(b, dim=-1)\n\n\ndef box_xyxy_to_cxcywh(x):\n    x0, y0, x1, y1 = x.unbind(-1)\n    b = [(x0 + x1) / 2, (y0 + y1) / 2,\n         (x1 - x0), (y1 - y0)]\n    return torch.stack(b, dim=-1)\n\n\n# modified from torchvision to also return the union\ndef box_iou(boxes1, boxes2):\n    area1 = box_area(boxes1)\n    area2 = box_area(boxes2)\n\n    lt = torch.max(boxes1[:, None, :2], boxes2[:, :2])  # [N,M,2]\n    rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])  # [N,M,2]\n\n    wh = (rb - lt).clamp(min=0)  # [N,M,2]\n    inter = wh[:, :, 0] * wh[:, :, 1]  # [N,M]\n\n    union = area1[:, None] + area2 - inter\n\n    iou = inter / union\n    return iou, union\n\n\ndef generalized_box_iou(boxes1, boxes2):\n    \"\"\"\n    Generalized IoU from https://giou.stanford.edu/\n\n    The boxes should be in [x0, y0, x1, y1] format\n\n    Returns a [N, M] pairwise matrix, where N = len(boxes1)\n    and M = len(boxes2)\n    \"\"\"\n    # degenerate boxes gives inf / nan results\n    # so do an early check\n    assert (boxes1[:, 2:] >= boxes1[:, :2]).all()\n    assert (boxes2[:, 2:] >= boxes2[:, :2]).all()\n    iou, union = box_iou(boxes1, boxes2)\n\n    lt = 
torch.min(boxes1[:, None, :2], boxes2[:, :2])\n    rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])\n\n    wh = (rb - lt).clamp(min=0)  # [N,M,2]\n    area = wh[:, :, 0] * wh[:, :, 1]\n\n    return iou - (area - union) / area\n\n\ndef masks_to_boxes(masks):\n    \"\"\"Compute the bounding boxes around the provided masks\n\n    The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions.\n\n    Returns a [N, 4] tensors, with the boxes in xyxy format\n    \"\"\"\n    if masks.numel() == 0:\n        return torch.zeros((0, 4), device=masks.device)\n\n    h, w = masks.shape[-2:]\n\n    y = torch.arange(0, h, dtype=torch.float)\n    x = torch.arange(0, w, dtype=torch.float)\n    y, x = torch.meshgrid(y, x)\n\n    x_mask = (masks * x.unsqueeze(0))\n    x_max = x_mask.flatten(1).max(-1)[0]\n    x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]\n\n    y_mask = (masks * y.unsqueeze(0))\n    y_max = y_mask.flatten(1).max(-1)[0]\n    y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]\n\n    return torch.stack([x_min, y_min, x_max, y_max], 1)\n"
  },
  {
    "path": "util/dam.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n\n\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\n\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\n\nfrom util.box_ops import box_cxcywh_to_xyxy\nfrom util.misc import unwrap\n\n\ndef idx_to_flat_grid(spatial_shapes, idx):\n    flat_grid_shape = (idx.shape[0], int(torch.sum(spatial_shapes[..., 0] * spatial_shapes[..., 1])))\n    flat_grid = torch.zeros(flat_grid_shape, device=idx.device, dtype=torch.float32)\n    flat_grid.scatter_(1, idx.to(torch.int64), 1)\n\n    return flat_grid\n\n\ndef attn_map_to_flat_grid(spatial_shapes, level_start_index, sampling_locations, attention_weights):\n    # sampling_locations: [N, n_layers, Len_q, n_heads, n_levels, n_points, 2]\n    # attention_weights: [N, n_layers, Len_q, n_heads, n_levels, n_points]\n    N, n_layers, _, n_heads, *_ = sampling_locations.shape\n    sampling_locations = sampling_locations.permute(0, 1, 3, 2, 5, 4, 6).flatten(0, 2).flatten(1, 2)\n    # [N * n_layers * n_heads, Len_q * n_points, n_levels, 2]\n    attention_weights = attention_weights.permute(0, 1, 3, 2, 5, 4).flatten(0, 2).flatten(1, 2)\n    # [N * n_layers * n_heads, Len_q * n_points, n_levels]\n\n    rev_spatial_shapes = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], dim=-1) # hw -> wh (xy)\n    col_row_float = sampling_locations * rev_spatial_shapes\n\n    col_row_ll = col_row_float.floor().to(torch.int64)\n    zero = torch.zeros(*col_row_ll.shape[:-1], dtype=torch.int64, device=col_row_ll.device)\n    one = torch.ones(*col_row_ll.shape[:-1], dtype=torch.int64, device=col_row_ll.device)\n    col_row_lh = col_row_ll + 
torch.stack([zero, one], dim=-1)\n    col_row_hl = col_row_ll + torch.stack([one, zero], dim=-1)\n    col_row_hh = col_row_ll + 1\n\n    margin_ll = (col_row_float - col_row_ll).prod(dim=-1)\n    margin_lh = -(col_row_float - col_row_lh).prod(dim=-1)\n    margin_hl = -(col_row_float - col_row_hl).prod(dim=-1)\n    margin_hh = (col_row_float - col_row_hh).prod(dim=-1)\n\n    flat_grid_shape = (attention_weights.shape[0], int(torch.sum(spatial_shapes[..., 0] * spatial_shapes[..., 1])))\n    flat_grid = torch.zeros(flat_grid_shape, dtype=torch.float32, device=attention_weights.device)\n\n    zipped = [(col_row_ll, margin_hh), (col_row_lh, margin_hl), (col_row_hl, margin_lh), (col_row_hh, margin_ll)]\n    for col_row, margin in zipped:\n        valid_mask = torch.logical_and(\n            torch.logical_and(col_row[..., 0] >= 0, col_row[..., 0] < rev_spatial_shapes[..., 0]),\n            torch.logical_and(col_row[..., 1] >= 0, col_row[..., 1] < rev_spatial_shapes[..., 1]),\n        )\n        idx = col_row[..., 1] * spatial_shapes[..., 1] + col_row[..., 0] + level_start_index\n        idx = (idx * valid_mask).flatten(1, 2)\n        weights = (attention_weights * valid_mask * margin).flatten(1)\n        flat_grid.scatter_add_(1, idx, weights)\n\n    return flat_grid.reshape(N, n_layers, n_heads, -1)\n\n\ndef compute_corr(flat_grid_topk, flat_grid_attn_map, spatial_shapes):\n    if len(flat_grid_topk.shape) == 1:\n        flat_grid_topk = flat_grid_topk.unsqueeze(0)\n        flat_grid_attn_map = flat_grid_attn_map.unsqueeze(0)\n        \n    tot = flat_grid_attn_map.sum(-1)\n    hit = (flat_grid_topk * flat_grid_attn_map).sum(-1)\n\n    corr = [hit / tot]\n    flat_grid_idx = 0\n\n    for shape in spatial_shapes:\n        level_range = np.arange(int(flat_grid_idx), int(flat_grid_idx + shape[0] * shape[1]))\n        tot = (flat_grid_attn_map[:, level_range]).sum(-1)\n        hit = (flat_grid_topk[:, level_range] * flat_grid_attn_map[:, level_range]).sum(-1)\n        
flat_grid_idx += shape[0] * shape[1]\n        corr.append(hit / tot)\n    return corr\n\n"
  },
  {
    "path": "util/misc.py",
    "content": "# ------------------------------------------------------------------------------------\n# Sparse DETR\n# Copyright (c) 2021 KakaoBrain. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------------------\n# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nMisc functions, including distributed helpers.\n\nMostly copy-paste from torchvision references.\n\"\"\"\nimport os\nimport subprocess\nimport time\nfrom collections import defaultdict, deque\nimport datetime\nimport pickle\nimport socket\nfrom typing import Optional, List\n\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\nfrom torch import Tensor\nfrom torch.nn.parallel import DistributedDataParallel\n\n\n# needed due to empty tensor bug in pytorch and torchvision 0.5\nimport torchvision\nif float(torchvision.__version__[:3]) < 0.5:\n    import math\n    from torchvision.ops.misc import _NewEmptyTensorOp\n    def _check_size_scale_factor(dim, size, scale_factor):\n        # type: (int, Optional[List[int]], Optional[float]) -> None\n        if size is None and scale_factor is None:\n            raise ValueError(\"either size or scale_factor should be defined\")\n        if size is not None and scale_factor is not None:\n            raise ValueError(\"only one of size or scale_factor should be defined\")\n        # raise only when a per-dimension scale_factor does not match the input dimensionality\n        if scale_factor is not None and isinstance(scale_factor, tuple) and len(scale_factor) != dim:\n            raise ValueError(\n                \"scale_factor shape must match input shape. 
\"\n                \"Input is {}D, scale_factor size is {}\".format(dim, len(scale_factor))\n            )\n    def _output_size(dim, input, size, scale_factor):\n        # type: (int, Tensor, Optional[List[int]], Optional[float]) -> List[int]\n        assert dim == 2\n        _check_size_scale_factor(dim, size, scale_factor)\n        if size is not None:\n            return size\n        # if dim is not 2 or scale_factor is iterable use _ntuple instead of concat\n        assert scale_factor is not None and isinstance(scale_factor, (int, float))\n        scale_factors = [scale_factor, scale_factor]\n        # math.floor might return float in py2.7\n        return [\n            int(math.floor(input.size(i + 2) * scale_factors[i])) for i in range(dim)\n        ]\nelif float(torchvision.__version__[:3]) < 0.7:\n    from torchvision.ops import _new_empty_tensor\n    from torchvision.ops.misc import _output_size\n\n\nclass SmoothedValue(object):\n    \"\"\"Track a series of values and provide access to smoothed values over a\n    window or the global series average.\n    \"\"\"\n\n    def __init__(self, window_size=20, fmt=None):\n        if fmt is None:\n            fmt = \"{median:.4f} ({global_avg:.4f})\"\n        self.deque = deque(maxlen=window_size)\n        self.total = 0.0\n        self.count = 0\n        self.fmt = fmt\n\n    def update(self, value, n=1):\n        self.deque.append(value)\n        self.count += n\n        self.total += value * n\n\n    def synchronize_between_processes(self):\n        \"\"\"\n        Warning: does not synchronize the deque!\n        \"\"\"\n        if not is_dist_avail_and_initialized():\n            return\n        t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')\n        dist.barrier()\n        dist.all_reduce(t)\n        t = t.tolist()\n        self.count = int(t[0])\n        self.total = t[1]\n\n    @property\n    def median(self):\n        d = torch.tensor(list(self.deque))\n        return 
d.median().item()\n\n    @property\n    def avg(self):\n        d = torch.tensor(list(self.deque), dtype=torch.float32)\n        return d.mean().item()\n\n    @property\n    def global_avg(self):\n        return self.total / self.count\n\n    @property\n    def max(self):\n        return max(self.deque)\n\n    @property\n    def value(self):\n        return self.deque[-1]\n\n    def __str__(self):\n        return self.fmt.format(\n            median=self.median,\n            avg=self.avg,\n            global_avg=self.global_avg,\n            max=self.max,\n            value=self.value)\n        \n        \ndef unwrap(wrapped_module):\n    if isinstance(wrapped_module, DistributedDataParallel):\n        module = wrapped_module.module\n    else:\n        module = wrapped_module\n    return module\n\n\ndef check_unused_parameters(model, loss_dict, weight_dict):\n    print(\"=== Check unused parameters ===\")\n    # print unused parameters\n    print(f\"set(loss_dict) - set(weight_dict) = {set(loss_dict.keys()) - set(weight_dict.keys())}\")\n    print(f\"set(weight_dict) - set(loss_dict) = {set(weight_dict.keys()) - set(loss_dict.keys())}\")\n    \n    unused_params = [name for name, param in unwrap(model).named_parameters() \n                        if param.grad is None and not name.startswith('backbone')]\n    if unused_params:\n        raise RuntimeError(f\"Unused parameters: {unused_params}\")\n    else:\n        print(\"All the parameters are used.\")\n\n\ndef all_gather(data):\n    \"\"\"\n    Run all_gather on arbitrary picklable data (not necessarily tensors)\n    Args:\n        data: any picklable object\n    Returns:\n        list[data]: list of data gathered from each rank\n    \"\"\"\n    world_size = get_world_size()\n    if world_size == 1:\n        return [data]\n\n    # serialized to a Tensor\n    buffer = pickle.dumps(data)\n    storage = torch.ByteStorage.from_buffer(buffer)\n    tensor = torch.ByteTensor(storage).to(\"cuda\")\n\n    # obtain Tensor 
size of each rank\n    local_size = torch.tensor([tensor.numel()], device=\"cuda\")\n    size_list = [torch.tensor([0], device=\"cuda\") for _ in range(world_size)]\n    dist.all_gather(size_list, local_size)\n    size_list = [int(size.item()) for size in size_list]\n    max_size = max(size_list)\n\n    # receiving Tensor from all ranks\n    # we pad the tensor because torch all_gather does not support\n    # gathering tensors of different shapes\n    tensor_list = []\n    for _ in size_list:\n        tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=\"cuda\"))\n    if local_size != max_size:\n        padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=\"cuda\")\n        tensor = torch.cat((tensor, padding), dim=0)\n    dist.all_gather(tensor_list, tensor)\n\n    data_list = []\n    for size, tensor in zip(size_list, tensor_list):\n        buffer = tensor.cpu().numpy().tobytes()[:size]\n        data_list.append(pickle.loads(buffer))\n\n    return data_list\n\n\ndef reduce_dict(input_dict, average=True):\n    \"\"\"\n    Args:\n        input_dict (dict): all the values will be reduced\n        average (bool): whether to do average or sum\n    Reduce the values in the dictionary from all processes so that all processes\n    have the averaged results. 
Returns a dict with the same fields as\n    input_dict, after reduction.\n    \"\"\"\n    world_size = get_world_size()\n    if world_size < 2:\n        return input_dict\n    with torch.no_grad():\n        names = []\n        values = []\n        # sort the keys so that they are consistent across processes\n        for k in sorted(input_dict.keys()):\n            names.append(k)\n            values.append(input_dict[k])\n        values = torch.stack(values, dim=0)\n        dist.all_reduce(values)\n        if average:\n            values /= world_size\n        reduced_dict = {k: v for k, v in zip(names, values)}\n    return reduced_dict\n\n\nclass MetricLogger(object):\n    def __init__(self, delimiter=\"\\t\"):\n        self.meters = defaultdict(SmoothedValue)\n        self.delimiter = delimiter\n\n    def update(self, **kwargs):\n        for k, v in kwargs.items():\n            if isinstance(v, torch.Tensor):\n                v = v.item()\n            assert isinstance(v, (float, int))\n            self.meters[k].update(v)\n\n    def __getattr__(self, attr):\n        if attr in self.meters:\n            return self.meters[attr]\n        if attr in self.__dict__:\n            return self.__dict__[attr]\n        raise AttributeError(\"'{}' object has no attribute '{}'\".format(\n            type(self).__name__, attr))\n\n    def __str__(self):\n        loss_str = []\n        for name, meter in self.meters.items():\n            loss_str.append(\n                \"{}: {}\".format(name, str(meter))\n            )\n        return self.delimiter.join(loss_str)\n\n    def synchronize_between_processes(self):\n        for meter in self.meters.values():\n            meter.synchronize_between_processes()\n\n    def add_meter(self, name, meter):\n        self.meters[name] = meter\n\n    def log_every(self, iterable, print_freq, header=None):\n        i = 0\n        if not header:\n            header = ''\n        start_time = time.time()\n        end = time.time()\n        
iter_time = SmoothedValue(fmt='{avg:.4f}')\n        data_time = SmoothedValue(fmt='{avg:.4f}')\n        space_fmt = ':' + str(len(str(len(iterable)))) + 'd'\n        if torch.cuda.is_available():\n            log_msg = self.delimiter.join([\n                header,\n                '[{0' + space_fmt + '}/{1}]',\n                'eta: {eta}',\n                '{meters}',\n                'time: {time}',\n                'data: {data}',\n                'max mem: {memory:.0f}'\n            ])\n        else:\n            log_msg = self.delimiter.join([\n                header,\n                '[{0' + space_fmt + '}/{1}]',\n                'eta: {eta}',\n                '{meters}',\n                'time: {time}',\n                'data: {data}'\n            ])\n        MB = 1024.0 * 1024.0\n        for obj in iterable:\n            data_time.update(time.time() - end)\n            yield obj\n            iter_time.update(time.time() - end)\n            if i % print_freq == 0 or i == len(iterable) - 1:\n                eta_seconds = iter_time.global_avg * (len(iterable) - i)\n                eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))\n                if torch.cuda.is_available():\n                    print(log_msg.format(\n                        i, len(iterable), eta=eta_string,\n                        meters=str(self),\n                        time=str(iter_time), data=str(data_time),\n                        memory=torch.cuda.max_memory_allocated() / MB))\n                else:\n                    print(log_msg.format(\n                        i, len(iterable), eta=eta_string,\n                        meters=str(self),\n                        time=str(iter_time), data=str(data_time)))\n            i += 1\n            end = time.time()\n        total_time = time.time() - start_time\n        total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n        print('{} Total time: {} ({:.4f} s / it)'.format(\n            header, 
total_time_str, total_time / len(iterable)))\n\n\ndef get_sha():\n    cwd = os.path.dirname(os.path.abspath(__file__))\n\n    def _run(command):\n        return subprocess.check_output(command, cwd=cwd).decode('ascii').strip()\n    sha = 'N/A'\n    diff = \"clean\"\n    branch = 'N/A'\n    try:\n        sha = _run(['git', 'rev-parse', 'HEAD'])\n        subprocess.check_output(['git', 'diff'], cwd=cwd)\n        diff = _run(['git', 'diff-index', 'HEAD'])\n        diff = \"has uncommitted changes\" if diff else \"clean\"\n        branch = _run(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])\n    except Exception:\n        pass\n    message = f\"sha: {sha}, status: {diff}, branch: {branch}\"\n    return message\n\n\ndef collate_fn(batch):\n    batch = list(zip(*batch))\n    batch[0] = nested_tensor_from_tensor_list(batch[0])\n    return tuple(batch)\n\n\ndef _max_by_axis(the_list):\n    # type: (List[List[int]]) -> List[int]\n    maxes = the_list[0]\n    for sublist in the_list[1:]:\n        for index, item in enumerate(sublist):\n            maxes[index] = max(maxes[index], item)\n    return maxes\n\n\ndef nested_tensor_from_tensor_list(tensor_list: List[Tensor]):\n    # TODO make this more general\n    if tensor_list[0].ndim == 3:\n        # TODO make it support different-sized images\n        max_size = _max_by_axis([list(img.shape) for img in tensor_list])\n        # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))\n        batch_shape = [len(tensor_list)] + max_size\n        b, c, h, w = batch_shape\n        dtype = tensor_list[0].dtype\n        device = tensor_list[0].device\n        tensor = torch.zeros(batch_shape, dtype=dtype, device=device)\n        mask = torch.ones((b, h, w), dtype=torch.bool, device=device)\n        for img, pad_img, m in zip(tensor_list, tensor, mask):\n            pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n            m[: img.shape[1], :img.shape[2]] = False\n    else:\n        raise 
ValueError('not supported')\n    return NestedTensor(tensor, mask)\n\n\nclass NestedTensor(object):\n    def __init__(self, tensors, mask: Optional[Tensor]):\n        self.tensors = tensors\n        self.mask = mask\n\n    def to(self, device, non_blocking=False):\n        # type: (Device) -> NestedTensor # noqa\n        cast_tensor = self.tensors.to(device, non_blocking=non_blocking)\n        mask = self.mask\n        if mask is not None:\n            assert mask is not None\n            cast_mask = mask.to(device, non_blocking=non_blocking)\n        else:\n            cast_mask = None\n        return NestedTensor(cast_tensor, cast_mask)\n\n    def record_stream(self, *args, **kwargs):\n        self.tensors.record_stream(*args, **kwargs)\n        if self.mask is not None:\n            self.mask.record_stream(*args, **kwargs)\n\n    def decompose(self):\n        return self.tensors, self.mask\n\n    def __repr__(self):\n        return str(self.tensors)\n\n\ndef setup_for_distributed(is_master):\n    \"\"\"\n    This function disables printing when not in master process\n    \"\"\"\n    import builtins as __builtin__\n    builtin_print = __builtin__.print\n\n    def print(*args, **kwargs):\n        force = kwargs.pop('force', False)\n        if is_master or force:\n            builtin_print(*args, **kwargs)\n\n    __builtin__.print = print\n\n\ndef is_dist_avail_and_initialized():\n    if not dist.is_available():\n        return False\n    if not dist.is_initialized():\n        return False\n    return True\n\n\ndef get_world_size():\n    if not is_dist_avail_and_initialized():\n        return 1\n    return dist.get_world_size()\n\n\ndef get_rank():\n    if not is_dist_avail_and_initialized():\n        return 0\n    return dist.get_rank()\n\n\ndef get_local_size():\n    if not is_dist_avail_and_initialized():\n        return 1\n    return int(os.environ['LOCAL_SIZE'])\n\n\ndef get_local_rank():\n    if not is_dist_avail_and_initialized():\n        return 0\n    
return int(os.environ['LOCAL_RANK'])\n\n\ndef is_main_process():\n    return get_rank() == 0\n\n\ndef save_on_master(*args, **kwargs):\n    if is_main_process():\n        torch.save(*args, **kwargs)\n\n\ndef _check_if_valid_ip(ip):\n    try:\n        socket.inet_aton(ip)\n    except socket.error:\n        # not a valid dotted-quad IPv4 address\n        return False\n    return True\n\n\ndef _maybe_gethostbyname(addr):\n    \"\"\"To be compatible with Braincloud, on which nodes can be reached by their task names.\n    Each node has to wait until all the tasks in the group are up on the cloud.\"\"\"\n    if _check_if_valid_ip(addr):\n        # If an IP address is given, do nothing\n        return addr\n\n    # Otherwise, find the IP address by hostname\n    done = False\n    retry = 0\n    print(f\"Resolving the hostname '{addr}' to an IP address in Braincloud..\")\n    while not done:\n        try:\n            addr = socket.gethostbyname(addr)\n            done = True\n        except socket.gaierror:\n            # the task may not be up yet; retry until the name resolves\n            retry += 1\n            print(f\"Retry count: {retry}\")\n            time.sleep(3)\n    print(f\"Resolved the host to IP address: {addr}\")\n    return addr\n\n\ndef init_distributed_mode(args):\n    if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:\n        os.environ[\"MASTER_ADDR\"] = _maybe_gethostbyname(os.environ[\"MASTER_ADDR\"])\n        args.rank = int(os.environ[\"RANK\"])\n        args.world_size = int(os.environ['WORLD_SIZE'])\n        args.gpu = int(os.environ['LOCAL_RANK'])\n        args.dist_url = 'env://'\n        os.environ['LOCAL_SIZE'] = str(torch.cuda.device_count())\n    elif 'SLURM_PROCID' in os.environ:\n        proc_id = int(os.environ['SLURM_PROCID'])\n        ntasks = int(os.environ['SLURM_NTASKS'])\n        node_list = os.environ['SLURM_NODELIST']\n        num_gpus = torch.cuda.device_count()\n        addr = subprocess.getoutput(\n            'scontrol show hostname {} | head -n1'.format(node_list))\n        os.environ['MASTER_PORT'] = 
os.environ.get('MASTER_PORT', '29500')\n        os.environ['MASTER_ADDR'] = addr\n        os.environ['WORLD_SIZE'] = str(ntasks)\n        os.environ['RANK'] = str(proc_id)\n        os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)\n        os.environ['LOCAL_SIZE'] = str(num_gpus)\n        args.dist_url = 'env://'\n        args.world_size = ntasks\n        args.rank = proc_id\n        args.gpu = proc_id % num_gpus\n    else:\n        print('Not using distributed mode')\n        args.distributed = False\n        return\n\n    args.distributed = True\n\n    torch.cuda.set_device(args.gpu)\n    args.dist_backend = 'nccl'\n    print('| distributed init (rank {}): {}'.format(\n        args.rank, args.dist_url), flush=True)\n    torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,\n                                         world_size=args.world_size, rank=args.rank)\n    torch.distributed.barrier()\n    setup_for_distributed(args.rank == 0)\n\n\n@torch.no_grad()\ndef accuracy(output, target, topk=(1,)):\n    \"\"\"Computes the top-k accuracy for the specified values of k\"\"\"\n    if target.numel() == 0:\n        return [torch.zeros([], device=output.device)]\n    maxk = max(topk)\n    batch_size = target.size(0)\n\n    _, pred = output.topk(maxk, 1, True, True)\n    pred = pred.t()\n    correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n    res = []\n    for k in topk:\n        # reshape (rather than view) is robust to non-contiguous slices in newer PyTorch\n        correct_k = correct[:k].reshape(-1).float().sum(0)\n        res.append(correct_k.mul_(100.0 / batch_size))\n    return res\n\n\ndef interpolate(input, size=None, scale_factor=None, mode=\"nearest\", align_corners=None):\n    # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor\n    \"\"\"\n    Equivalent to nn.functional.interpolate, but with support for empty batch sizes.\n    This will eventually be supported natively by PyTorch, and this\n    class can go away.\n    \"\"\"\n    if float(torchvision.__version__[:3]) < 0.7:\n 
       if input.numel() > 0:\n            return torch.nn.functional.interpolate(\n                input, size, scale_factor, mode, align_corners\n            )\n\n        output_shape = _output_size(2, input, size, scale_factor)\n        output_shape = list(input.shape[:-2]) + list(output_shape)\n        if float(torchvision.__version__[:3]) < 0.5:\n            return _NewEmptyTensorOp.apply(input, output_shape)\n        return _new_empty_tensor(input, output_shape)\n    else:\n        return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners)\n\n\ndef get_total_grad_norm(parameters, norm_type=2):\n    parameters = list(filter(lambda p: p.grad is not None, parameters))\n    norm_type = float(norm_type)\n    device = parameters[0].grad.device\n    total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]),\n                            norm_type)\n    return total_norm\n\n\ndef inverse_sigmoid(x, eps=1e-5):\n    x = x.clamp(min=0, max=1)\n    x1 = x.clamp(min=eps)\n    x2 = (1 - x).clamp(min=eps)\n    return torch.log(x1/x2)\n\n\ndef scale_learning_rate(args):\n    print(\"==============\")\n    if 'WORLD_SIZE' in os.environ:\n        world_size = int(os.environ['WORLD_SIZE'])\n    else:\n        world_size = 1\n    batch_size = args.batch_size * world_size\n    scale = (batch_size / 16) ** 0.5\n    print(f'Global_batch({batch_size}) = local_batch({args.batch_size}) x world_size({world_size})')\n    print(f'Scaling factor(x{scale:.3f}) = sqrt( global_batch({batch_size}) / 16 )')\n    for name in ['lr', 'lr_backbone']:\n        lr_origin = getattr(args, name)\n        lr_new = lr_origin * scale\n        setattr(args, name, lr_new)\n        print(f'LR scaled ({name}) : {lr_origin:.4e} -> {lr_new:.4e}')\n    print(\"==============\")\n    return args\n"
  },
  {
    "path": "util/plot_utils.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nPlotting utilities to visualize training logs.\n\"\"\"\nimport torch\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom pathlib import Path, PurePath\n\n\ndef plot_logs(logs, fields=('class_error', 'loss_bbox_unscaled', 'mAP'), ewm_col=0, log_name='log.txt'):\n    '''\n    Function to plot specific fields from training log(s). Plots both training and test results.\n\n    :: Inputs - logs = list containing Path objects, each pointing to individual dir with a log file\n              - fields = which results to plot from each log file - plots both training and test for each field.\n              - ewm_col = optional, which column to use as the exponential weighted smoothing of the plots\n              - log_name = optional, name of log file if different than default 'log.txt'.\n\n    :: Outputs - matplotlib plots of results in fields, color coded for each log file.\n               - solid lines are training results, dashed lines are test results.\n\n    '''\n    func_name = \"plot_utils.py::plot_logs\"\n\n    # verify logs is a list of Paths (list[Paths]) or single Pathlib object Path,\n    # convert single Path to list to avoid 'not iterable' error\n\n    if not isinstance(logs, list):\n        if isinstance(logs, PurePath):\n            logs = [logs]\n            print(f\"{func_name} info: logs param expects a list argument, converted to list[Path].\")\n        else:\n            raise 
ValueError(f\"{func_name} - invalid argument for logs parameter. \"\n                             f\"Expected list[Path] or a single Path object, received {type(logs)}\")\n\n    # verify valid dir(s) and that every item in list is Path object\n    for log_dir in logs:\n        if not isinstance(log_dir, PurePath):\n            raise ValueError(f\"{func_name} - non-Path object in logs argument of {type(log_dir)}: \\n{log_dir}\")\n        if log_dir.exists():\n            continue\n        raise ValueError(f\"{func_name} - invalid directory in logs argument:\\n{log_dir}\")\n\n    # load log file(s) and plot\n    dfs = [pd.read_json(Path(p) / log_name, lines=True) for p in logs]\n\n    fig, axs = plt.subplots(ncols=len(fields), figsize=(16, 5))\n\n    for df, color in zip(dfs, sns.color_palette(n_colors=len(logs))):\n        for j, field in enumerate(fields):\n            if field == 'mAP':\n                # index 1 of the COCO eval stats is AP at IoU=0.5; avoid pd.np, which was removed in pandas 1.0\n                coco_eval = pd.DataFrame([v[1] for v in df.test_coco_eval.dropna().values]).ewm(com=ewm_col).mean()\n                axs[j].plot(coco_eval, c=color)\n            else:\n                df.interpolate().ewm(com=ewm_col).mean().plot(\n                    y=[f'train_{field}', f'test_{field}'],\n                    ax=axs[j],\n                    color=[color] * 2,\n                    style=['-', '--']\n                )\n    for ax, field in zip(axs, fields):\n        ax.legend([Path(p).name for p in logs])\n        ax.set_title(field)\n\n\ndef plot_precision_recall(files, naming_scheme='iter'):\n    if naming_scheme == 'exp_id':\n        # name becomes exp_id\n        names = [f.parts[-3] for f in files]\n    elif naming_scheme == 'iter':\n        names = [f.stem for f in files]\n    else:\n        raise ValueError(f'not supported {naming_scheme}')\n    fig, axs = plt.subplots(ncols=2, figsize=(16, 5))\n    for f, color, name in zip(files, sns.color_palette(\"Blues\", n_colors=len(files)), names):\n        data = torch.load(f)\n        # precision is n_iou, n_points, n_cat, n_area, max_det\n        precision = data['precision']\n        recall = data['params'].recThrs\n        scores = data['scores']\n        # take precision for all classes, all areas and 100 detections\n        precision = precision[0, :, :, 0, -1].mean(1)\n        scores = scores[0, :, :, 0, -1].mean(1)\n        prec = precision.mean()\n        rec = data['recall'][0, :, 0, -1].mean()\n        print(f'{naming_scheme} {name}: mAP@50={prec * 100: 05.1f}, ' +\n              f'score={scores.mean():0.3f}, ' +\n              f'f1={2 * prec * rec / (prec + rec + 1e-8):0.3f}'\n              )\n        axs[0].plot(recall, precision, c=color)\n        axs[1].plot(recall, scores, c=color)\n\n    axs[0].set_title('Precision / Recall')\n    axs[0].legend(names)\n    axs[1].set_title('Scores / Recall')\n    axs[1].legend(names)\n    return fig, axs\n"
  }
]