[
  {
    "path": ".circleci/config.yml",
    "content": "version: 2.1\n\njobs:\n  python_lint:\n    docker:\n      - image: circleci/python:3.7\n    steps:\n      - checkout\n      - run:\n          command: |\n            pip install --user --progress-bar off flake8 typing\n            flake8 .\n\n  test:\n    docker:\n      - image: circleci/python:3.7\n    steps:\n      - checkout\n      - run:\n          command: |\n            pip install --user --progress-bar off scipy pytest\n            pip install --user --progress-bar off --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html\n            pytest .\n\nworkflows:\n  build:\n    jobs:\n      - python_lint\n      - test\n"
  },
  {
    "path": ".github/CODE_OF_CONDUCT.md",
    "content": "# Code of Conduct\n\nFacebook has adopted a Code of Conduct that we expect project participants to adhere to.\nPlease read the [full text](https://code.fb.com/codeofconduct/)\nso that you can understand what actions will and will not be tolerated.\n"
  },
  {
    "path": ".github/CONTRIBUTING.md",
    "content": "# Contributing to DETR\nWe want to make contributing to this project as easy and transparent as\npossible.\n\n## Our Development Process\nMinor changes and improvements will be released on an ongoing basis. Larger changes (e.g., changesets implementing a new paper) will be released on a more periodic basis.\n\n## Pull Requests\nWe actively welcome your pull requests.\n\n1. Fork the repo and create your branch from `master`.\n2. If you've added code that should be tested, add tests.\n3. If you've changed APIs, update the documentation.\n4. Ensure the test suite passes.\n5. Make sure your code lints.\n6. If you haven't already, complete the Contributor License Agreement (\"CLA\").\n\n## Contributor License Agreement (\"CLA\")\nIn order to accept your pull request, we need you to submit a CLA. You only need\nto do this once to work on any of Facebook's open source projects.\n\nComplete your CLA here: <https://code.facebook.com/cla>\n\n## Issues\nWe use GitHub issues to track public bugs. Please ensure your description is\nclear and has sufficient instructions to be able to reproduce the issue.\n\nFacebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe\ndisclosure of security bugs. In those cases, please go through the process\noutlined on that page and do not file a public issue.\n\n## Coding Style  \n* 4 spaces for indentation rather than tabs\n* 80 character line length\n* PEP8 formatting following [Black](https://black.readthedocs.io/en/stable/)\n\n## License\nBy contributing to DETR, you agree that your contributions will be licensed\nunder the LICENSE file in the root directory of this source tree.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bugs.md",
    "content": "---\nname: \"🐛 Bugs\"\nabout: Report bugs in DETR\ntitle: Please read & provide the following\n\n---\n\n## Instructions To Reproduce the 🐛 Bug:\n\n1. what changes you made (`git diff`) or what code you wrote\n```\n<put diff or code here>\n```\n2. what exact command you run:\n3. what you observed (including __full logs__):\n```\n<put logs here>\n```\n4. please simplify the steps as much as possible so they do not require additional resources to\n\t run, such as a private dataset.\n\n## Expected behavior:\n\nIf there are no obvious error in \"what you observed\" provided above,\nplease tell us the expected behavior.\n\n## Environment:\n\nProvide your environment information using the following command:\n```\npython -m torch.utils.collect_env\n```\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/questions-help-support.md",
    "content": "---\nname: \"How to do something❓\"\nabout: How to do something using DETR?\n\n---\n\n## ❓ How to do something using DETR\n\nDescribe what you want to do, including:\n1. what inputs you will provide, if any:\n2. what outputs you are expecting:\n\n\nNOTE:\n\n1. Only general answers are provided.\n   If you want to ask about \"why X did not work\", please use the\n   [Unexpected behaviors](https://github.com/facebookresearch/detr/issues/new/choose) issue template.\n\n2. About how to implement new models / new dataloader / new training logic, etc., check documentation first.\n\n3. We do not answer general machine learning / computer vision questions that are not specific to DETR, such as how a model works, how to improve your training/make it converge, or what algorithm/methods can be used to achieve X.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/unexpected-problems-bugs.md",
    "content": "---\nname: \"Unexpected behaviors\"\nabout: Run into unexpected behaviors when using DETR\ntitle: Please read & provide the following\n\n---\n\nIf you do not know the root cause of the problem, and wish someone to help you, please\npost according to this template:\n\n## Instructions To Reproduce the Issue:\n\n1. what changes you made (`git diff`) or what code you wrote\n```\n<put diff or code here>\n```\n2. what exact command you run:\n3. what you observed (including __full logs__):\n```\n<put logs here>\n```\n4. please simplify the steps as much as possible so they do not require additional resources to\n\t run, such as a private dataset.\n\n## Expected behavior:\n\nIf there are no obvious error in \"what you observed\" provided above,\nplease tell us the expected behavior.\n\nIf you expect the model to converge / work better, note that we do not give suggestions\non how to train a new model.\nOnly in one of the two conditions we will help with it:\n(1) You're unable to reproduce the results in DETR model zoo.\n(2) It indicates a DETR bug.\n\n## Environment:\n\nProvide your environment information using the following command:\n```\npython -m torch.utils.collect_env\n```\n"
  },
  {
    "path": ".gitignore",
    "content": ".nfs*\n*.ipynb\n*.pyc\n.dumbo.json\n.DS_Store\n.*.swp\n*.pth\n**/__pycache__/**\n.ipynb_checkpoints/\ndatasets/data/\nexperiment-*\n*.tmp\n*.pkl\n**/.mypy_cache/*\n.mypy_cache/*\nnot_tracked_dir/\n.vscode\n.python-version\n*.sbatch\n*.egg-info\nsrc/trackformer/models/ops/build*\nsrc/trackformer/models/ops/dist*\nsrc/trackformer/models/ops/lib*\nsrc/trackformer/models/ops/temp*\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2020 - present, Facebook, Inc\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "# TrackFormer: Multi-Object Tracking with Transformers\n\nThis repository provides the official implementation of the [TrackFormer: Multi-Object Tracking with Transformers](https://arxiv.org/abs/2101.02702) paper by [Tim Meinhardt](https://dvl.in.tum.de/team/meinhardt/), [Alexander Kirillov](https://alexander-kirillov.github.io/), [Laura Leal-Taixe](https://dvl.in.tum.de/team/lealtaixe/) and [Christoph Feichtenhofer](https://feichtenhofer.github.io/). The codebase builds upon [DETR](https://github.com/facebookresearch/detr), [Deformable DETR](https://github.com/fundamentalvision/Deformable-DETR) and [Tracktor](https://github.com/phil-bergmann/tracking_wo_bnw).\n\n<!-- **As the paper is still under submission this repository will continuously be updated and might at times not reflect the current state of the [arXiv paper](https://arxiv.org/abs/2012.01866).** -->\n\n<div align=\"center\">\n    <img src=\"docs/MOT17-03-SDP.gif\" alt=\"MOT17-03-SDP\" width=\"375\"/>\n    <img src=\"docs/MOTS20-07.gif\" alt=\"MOTS20-07\" width=\"375\"/>\n</div>\n\n## Abstract\n\nThe challenging task of multi-object tracking (MOT) requires simultaneous reasoning about track initialization, identity, and spatiotemporal trajectories.\nWe formulate this task as a frame-to-frame set prediction problem and introduce TrackFormer, an end-to-end MOT approach based on an encoder-decoder Transformer architecture.\nOur model achieves data association between frames via attention by evolving a set of track predictions through a video sequence.\nThe Transformer decoder initializes new tracks from static object queries and autoregressively follows existing tracks in space and time with the new concept of identity preserving track queries.\nBoth decoder query types benefit from self- and encoder-decoder attention on global frame-level features, thereby omitting any additional graph optimization and matching or modeling of motion and appearance.\nTrackFormer represents a new tracking-by-attention paradigm and yields state-of-the-art performance on the task of multi-object tracking (MOT17) and segmentation (MOTS20).\n\n<div align=\"center\">\n    <img src=\"docs/method.png\" alt=\"TrackFormer casts multi-object tracking as a set prediction problem performing joint detection and tracking-by-attention. The architecture consists of a CNN for image feature extraction, a Transformer encoder for image feature encoding and a Transformer decoder which applies self- and encoder-decoder attention to produce output embeddings with bounding box and class information.\"/>\n</div>\n\n## Installation\n\nWe refer to our [docs/INSTALL.md](docs/INSTALL.md) for detailed installation instructions.\n\n## Train TrackFormer\n\nWe refer to our [docs/TRAIN.md](docs/TRAIN.md) for detailed training instructions.\n\n## Evaluate TrackFormer\n\nIn order to evaluate TrackFormer on a multi-object tracking dataset, we provide the `src/track.py` script which supports several datasets and splits interchangle via the `dataset_name` argument (See `src/datasets/tracking/factory.py` for an overview of all datasets.) The default tracking configuration is specified in `cfgs/track.yaml`. To facilitate the reproducibility of our results, we provide evaluation metrics for both the train and test set.\n\n### MOT17\n\n#### Private detections\n\n```\npython src/track.py with reid\n```\n\n<center>\n\n| MOT17     | MOTA         | IDF1           |       MT     |     ML     |     FP       |     FN              |  ID SW.      
|\n|  :---:    | :---:        |     :---:      |    :---:     | :---:      |    :---:     |   :---:             |  :---:       |\n| **Train** |     74.2     |     71.7       |     849      | 177        |      7431    |      78057          |  1449        |\n| **Test**  |     74.1     |     68.0       |    1113      | 246        |     34602    |     108777          |  2829        |\n\n</center>\n\n#### Public detections (DPM, FRCNN, SDP)\n\n```\npython src/track.py with \\\n    reid \\\n    tracker_cfg.public_detections=min_iou_0_5 \\\n    obj_detect_checkpoint_file=models/mot17_deformable_multi_frame/checkpoint_epoch_50.pth\n```\n\n<center>\n\n| MOT17     | MOTA         | IDF1           |       MT     |     ML     |     FP       |     FN              |  ID SW.      |\n|  :---:    | :---:        |     :---:      |    :---:     | :---:      |    :---:     |   :---:             |  :---:       |\n| **Train** |     64.6     |     63.7       |    621       | 675        |     4827     |     111958          |  2556        |\n| **Test**  |     62.3     |     57.6       |    688       | 638        |     16591    |     192123          |  4018        |\n\n</center>\n\n### MOT20\n\n#### Private detections\n\n```\npython src/track.py with \\\n    reid \\\n    dataset_name=MOT20-ALL \\\n    obj_detect_checkpoint_file=models/mot20_crowdhuman_deformable_multi_frame/checkpoint_epoch_50.pth\n```\n\n<center>\n\n| MOT20     | MOTA         | IDF1           |       MT     |     ML     |     FP       |     FN              |  ID SW.      |\n|  :---:    | :---:        |     :---:      |    :---:     | :---:      |    :---:     |   :---:             |  :---:       |\n| **Train** |     81.0     |     73.3       |    1540      | 124        |     20807    |     192665          |  1961        |\n| **Test**  |     68.6     |     65.7       |     666      | 181        |     20348    |     140373          |  1532        |\n\n</center>\n\n### MOTS20\n\n```\npython src/track.py with \\\n    dataset_name=MOTS20-ALL \\\n    obj_detect_checkpoint_file=models/mots20_train_masks/checkpoint.pth\n```\n\nOur tracking script only applies MOT17 metrics evaluation but outputs MOTS20 mask prediction files. 
To evaluate these, download the official [MOTChallengeEvalKit](https://github.com/dendorferpatrick/MOTChallengeEvalKit).\n\n<center>\n\n| MOTS20    | sMOTSA         | IDF1           |       FP     |     FN     |     IDs      |\n|  :---:    | :---:          |     :---:      |    :---:     | :---:      |    :---:     |\n| **Train** |     --         |     --         |    --        |   --       |     --       |\n| **Test**  |     54.9       |     63.6       |    2233      | 7195       |     278      |\n\n</center>\n\n### Demo\n\nTo facilitate the application of TrackFormer, we provide a demo interface which allows for quick processing of a given video sequence.\n\n```\nffmpeg -i data/snakeboard/snakeboard.mp4 -vf fps=30 data/snakeboard/%06d.png\n\npython src/track.py with \\\n    dataset_name=DEMO \\\n    data_root_dir=data/snakeboard \\\n    output_dir=data/snakeboard \\\n    write_images=pretty\n```\n\n<div align=\"center\">\n    <img src=\"docs/snakeboard.gif\" alt=\"Snakeboard demo\" width=\"600\"/>\n</div>\n\n## Publication\nIf you use this software in your research, please cite our publication:\n\n```\n@InProceedings{meinhardt2021trackformer,\n    title={TrackFormer: Multi-Object Tracking with Transformers},\n    author={Tim Meinhardt and Alexander Kirillov and Laura Leal-Taixe and Christoph Feichtenhofer},\n    year={2022},\n    month = {June},\n    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\n}\n```"
  },
  {
    "path": "cfgs/submit.yaml",
    "content": "# Number of gpus to request on each node\nnum_gpus: 1\nvram: 12GB\n# memory allocated per GPU in GB\nmem_per_gpu: 20\n# Number of nodes to request\nnodes: 1\n# Duration of the job\ntimeout: 4320\n# Job dir. Leave empty for automatic.\njob_dir: ''\n# Use to run jobs locally. ('debug', 'local', 'slurm')\ncluster: debug\n# Partition. Leave empty for automatic.\nslurm_partition: ''\n# Constraint. Leave empty for automatic.\nslurm_constraint: ''\nslurm_comment: ''\nslurm_gres: ''\nslurm_exclude: ''\ncpus_per_task: 2"
  },
  {
    "path": "cfgs/track.yaml",
    "content": "output_dir: null\nverbose: false\nseed: 666\n\nobj_detect_checkpoint_file: models/mot17_crowdhuman_deformable_multi_frame/checkpoint_epoch_40.pth\n\ninterpolate: False\n# if available load tracking results and only evaluate\nload_results_dir: null\n\n# dataset (look into src/datasets/tracking/factory.py)\ndataset_name: MOT17-ALL-ALL\ndata_root_dir: data\n\n# [False, 'debug', 'pretty']\n# compile video with: `ffmpeg -f image2 -framerate 15 -i %06d.jpg -vcodec libx264 -y movie.mp4 -vf scale=320:-1`\nwrite_images: False\n# Maps are only visualized if write_images is True\ngenerate_attention_maps: False\n\n# track, evaluate and write images only for a range of frames (in float fraction)\nframe_range:\n    start: 0.0\n    end: 1.0\n\ntracker_cfg:\n    # [False, 'center_distance', 'min_iou_0_5']\n    public_detections: False\n    # score threshold for detections\n    detection_obj_score_thresh: 0.4\n    # score threshold for keeping the track alive\n    track_obj_score_thresh: 0.4\n    # NMS threshold for detection\n    detection_nms_thresh: 0.9\n    # NMS theshold while tracking\n    track_nms_thresh: 0.9\n    # number of consective steps a score has to be below track_obj_score_thresh for a track to be terminated\n    steps_termination: 1\n    # distance of previous frame for multi-frame attention\n    prev_frame_dist: 1\n    # How many timesteps inactive tracks are kept and cosidered for reid\n    inactive_patience: -1\n    # How similar do image and old track need to be to be considered the same person\n    reid_sim_threshold: 0.0\n    reid_sim_only: false\n    reid_score_thresh: 0.4\n    reid_greedy_matching: false\n"
  },
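The `frame_range` entry in `cfgs/track.yaml` above restricts tracking, evaluation and image writing to a fractional slice of each sequence. The sketch below only illustrates the intended meaning of those float fractions (the cross-validation split names such as `mot17_train_cross_val_frame_0_5_to_1_0_coco` in the training configs suggest this interpretation); the actual indexing is defined by the dataset code under `src/datasets` and may differ in rounding details.

```
# Hypothetical illustration of how frame_range fractions select frames.
# The real behavior lives in the tracking datasets under src/datasets;
# this only spells out what start/end fractions of a sequence mean.
def frame_indices(num_frames: int, start: float = 0.0, end: float = 1.0) -> range:
    """Frame indices covered by a fractional [start, end) slice of a sequence."""
    return range(int(num_frames * start), int(num_frames * end))


# frame_range: {start: 0.5, end: 1.0} on a 600-frame sequence -> frames 300..599
print(frame_indices(600, start=0.5, end=1.0))
```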
  {
    "path": "cfgs/track_reid.yaml",
    "content": "tracker_cfg:\n  inactive_patience: 5\n"
  },
  {
    "path": "cfgs/train.yaml",
    "content": "lr: 0.0002\nlr_backbone_names: ['backbone.0']\nlr_backbone: 0.00002\nlr_linear_proj_names: ['reference_points', 'sampling_offsets']\nlr_linear_proj_mult: 0.1\nlr_track: 0.0001\noverwrite_lrs: false\noverwrite_lr_scheduler: false\nbatch_size: 2\nweight_decay: 0.0001\nepochs: 50\nlr_drop: 40\n# gradient clipping max norm\nclip_max_norm: 0.1\n# Deformable DETR\ndeformable: false\nwith_box_refine: false\ntwo_stage: false\n# Model parameters\nfreeze_detr: false\nload_mask_head_from_model: null\n# Backbone\n# Name of the convolutional backbone to use. ('resnet50', 'resnet101')\nbackbone: resnet50\n# If true, we replace stride with dilation in the last convolutional block (DC5)\ndilation: false\n# Type of positional embedding to use on top of the image features. ('sine', 'learned')\nposition_embedding: sine\n# Number of feature levels the encoder processes from the backbone\nnum_feature_levels: 1\n# Transformer\n# Number of encoding layers in the transformer\nenc_layers: 6\n# Number of decoding layers in the transformer\ndec_layers: 6\n# Intermediate size of the feedforward layers in the transformer blocks\ndim_feedforward: 2048\n# Size of the embeddings (dimension of the transformer)\nhidden_dim: 256\n# Dropout applied in the transformer\ndropout: 0.1\n# Number of attention heads inside the transformer's attentions\nnheads: 8\n# Number of object queries\nnum_queries: 100\npre_norm: false\ndec_n_points: 4\nenc_n_points: 4\n# Tracking\ntracking: false\n# In addition to detection also run tracking evaluation with default configuration from `cfgs/track.yaml`\ntracking_eval: true\n# Range of possible random previous frames\ntrack_prev_frame_range: 0\ntrack_prev_frame_rnd_augs: 0.01\ntrack_prev_prev_frame: False\ntrack_backprop_prev_frame: False\ntrack_query_false_positive_prob: 0.1\ntrack_query_false_negative_prob: 0.4\n# only for vanilla DETR\ntrack_query_false_positive_eos_weight: true\ntrack_attention: false\nmulti_frame_attention: false\nmulti_frame_encoding: true\nmulti_frame_attention_separate_encoder: true\nmerge_frame_features: false\noverflow_boxes: false\n# Segmentation\nmasks: false\n# Matcher\n# Class coefficient in the matching cost\nset_cost_class: 1.0\n# L1 box coefficient in the matching cost\nset_cost_bbox: 5.0\n# giou box coefficient in the matching cost\nset_cost_giou: 2.0\n# Loss\n# Disables auxiliary decoding losses (loss at each layer)\naux_loss: true\nmask_loss_coef: 1.0\ndice_loss_coef: 1.0\ncls_loss_coef: 1.0\nbbox_loss_coef: 5.0\ngiou_loss_coef: 2\n# Relative classification weight of the no-object class\neos_coef: 0.1\nfocal_loss: false\nfocal_alpha: 0.25\nfocal_gamma: 2\n# Dataset\ndataset: coco\ntrain_split: train\nval_split: val\ncoco_path: data/coco_2017\ncoco_panoptic_path: null\nmot_path_train: data/MOT17\nmot_path_val: data/MOT17\ncrowdhuman_path: data/CrowdHuman\n# allows for joint training of mot and crowdhuman/coco_person with the `mot_crowdhuman`/`mot_coco_person` dataset\ncrowdhuman_train_split: null\ncoco_person_train_split: null\ncoco_and_crowdhuman_prev_frame_rnd_augs: 0.2\ncoco_min_num_objects: 0\nimg_transform:\n  max_size: 1333\n  val_width: 800\n# Miscellaneous\n# path where to save, empty for no saving\noutput_dir: ''\n# device to use for training / testing\ndevice: cuda\nseed: 42\n# resume from checkpoint\nresume: ''\nresume_shift_neuron: False\n# resume optimization from checkpoint\nresume_optim: false\n# resume Visdom visualization\nresume_vis: false\nstart_epoch: 1\neval_only: false\neval_train: false\nnum_workers: 2\nval_interval: 
5\ndebug: false\n# epoch interval for model saving. if 0 only save last and best models\nsave_model_interval: 5\n# distributed training parameters\n# number of distributed processes\nworld_size: 1\n# url used to set up distributed training\ndist_url: env://\n# Visdom params\n# vis_server: http://localhost\nvis_server: ''\nvis_port: 8090\nvis_and_log_interval: 50\nno_vis: false\n"
  },
  {
    "path": "cfgs/train_coco_person_masks.yaml",
    "content": "dataset: coco_person\n\nload_mask_head_from_model: models/detr-r50-panoptic-00ce5173.pth\nfreeze_detr: true\nmasks: true\n\nlr: 0.0001\nlr_drop: 50\nepochs: 50"
  },
  {
    "path": "cfgs/train_crowdhuman.yaml",
    "content": "dataset: mot_crowdhuman\ncrowdhuman_train_split: train_val\ntrain_split: null\nval_split: mot17_train_cross_val_frame_0_5_to_1_0_coco\nepochs: 80\nlr_drop: 50"
  },
  {
    "path": "cfgs/train_deformable.yaml",
    "content": "deformable: true\nnum_feature_levels: 4\nnum_queries: 300\ndim_feedforward: 1024\nfocal_loss: true\nfocal_alpha: 0.25\nfocal_gamma: 2\ncls_loss_coef: 2.0\nset_cost_class: 2.0\noverflow_boxes: true\nwith_box_refine: true"
  },
  {
    "path": "cfgs/train_full_res.yaml",
    "content": "img_transform:\n  max_size: 1920\n  val_width: 1080"
  },
  {
    "path": "cfgs/train_mot17.yaml",
    "content": "dataset: mot\n\ntrain_split: mot17_train_coco\nval_split: mot17_train_cross_val_frame_0_5_to_1_0_coco\n\nmot_path_train: data/MOT17\nmot_path_val: data/MOT17\n\nresume: models/r50_deformable_detr_plus_iterative_bbox_refinement-checkpoint_hidden_dim_288.pth\n\nepochs: 50\nlr_drop: 10"
  },
  {
    "path": "cfgs/train_mot17_crowdhuman.yaml",
    "content": "dataset: mot_crowdhuman\n\ncrowdhuman_train_split: train_val\ntrain_split: mot17_train_coco\nval_split: mot17_train_cross_val_frame_0_5_to_1_0_coco\n\nmot_path_train: data/MOT17\nmot_path_val: data/MOT17\n\nresume: models/crowdhuman_deformable_trackformer/checkpoint_epoch_80.pth\n\nepochs: 40\nlr_drop: 10"
  },
  {
    "path": "cfgs/train_mot20_crowdhuman.yaml",
    "content": "dataset: mot_crowdhuman\n\ncrowdhuman_train_split: train_val\ntrain_split: mot20_train_coco\nval_split: mot20_train_cross_val_frame_0_5_to_1_0_coco\n\nmot_path_train: data/MOT20\nmot_path_val: data/MOT20\n\nresume: models/crowdhuman_deformable_trackformer/checkpoint_epoch_80.pth\n\nepochs: 50\nlr_drop: 10"
  },
  {
    "path": "cfgs/train_mot_coco_person.yaml",
    "content": "dataset: mot_coco_person\ncoco_person_train_split: train\ntrain_split: null\nval_split: mot17_train_cross_val_frame_0_5_to_1_0_coco"
  },
  {
    "path": "cfgs/train_mots20.yaml",
    "content": "dataset: mot\nmot_path: data/MOTS20\ntrain_split: mots20_train_coco\nval_split: mots20_train_coco\n\nresume: models/mot17_train_pretrain_CH_deformable_with_coco_person_masks/checkpoint.pth\nmasks: true\nlr: 0.00001\nlr_backbone: 0.000001\n\nepochs: 40\nlr_drop: 40"
  },
  {
    "path": "cfgs/train_multi_frame.yaml",
    "content": "num_queries: 500\nhidden_dim: 288\nmulti_frame_attention: true\nmulti_frame_encoding: true\nmulti_frame_attention_separate_encoder: true"
  },
  {
    "path": "cfgs/train_tracking.yaml",
    "content": "tracking: true\ntracking_eval: true\ntrack_prev_frame_range: 5\ntrack_query_false_positive_eos_weight: true"
  },
  {
    "path": "data/.gitignore",
    "content": "*\n!.gitignore\n!snakeboard\n"
  },
  {
    "path": "docs/INSTALL.md",
    "content": "# Installation\n\n1. Clone and enter this repository:\n    ```\n    git clone git@github.com:timmeinhardt/trackformer.git\n    cd trackformer\n    ```\n\n2. Install packages for Python 3.7:\n\n    1. `pip3 install -r requirements.txt`\n    2. Install PyTorch 1.5 and torchvision 0.6 from [here](https://pytorch.org/get-started/previous-versions/#v150).\n    3. Install pycocotools (with fixed ignore flag): `pip3 install -U 'git+https://github.com/timmeinhardt/cocoapi.git#subdirectory=PythonAPI'`\n    5. Install MultiScaleDeformableAttention package: `python src/trackformer/models/ops/setup.py build --build-base=src/trackformer/models/ops/ install`\n\n3. Download and unpack datasets in the `data` directory:\n\n    1. [MOT17](https://motchallenge.net/data/MOT17/):\n\n        ```\n        wget https://motchallenge.net/data/MOT17.zip\n        unzip MOT17.zip\n        python src/generate_coco_from_mot.py\n        ```\n\n    2. (Optional) [MOT20](https://motchallenge.net/data/MOT20/):\n\n        ```\n        wget https://motchallenge.net/data/MOT20.zip\n        unzip MOT20.zip\n        python src/generate_coco_from_mot.py --mot20\n        ```\n\n    3. (Optional) [MOTS20](https://motchallenge.net/data/MOTS/):\n\n        ```\n        wget https://motchallenge.net/data/MOTS.zip\n        unzip MOTS.zip\n        python src/generate_coco_from_mot.py --mots\n        ```\n\n    4. (Optional) [CrowdHuman](https://www.crowdhuman.org/download.html):\n\n        1. Create a `CrowdHuman` and `CrowdHuman/annotations` directory.\n        2. Download and extract the `train` and `val` datasets including their corresponding `*.odgt` annotation file into the `CrowdHuman` directory.\n        3. Create a `CrowdHuman/train_val` directory and merge or symlink the `train` and `val` image folders.\n        4. Run `python src/generate_coco_from_crowdhuman.py`\n        5. The final folder structure should resemble this:\n            ~~~\n            |-- data\n                |-- CrowdHuman\n                |   |-- train\n                |   |   |-- *.jpg\n                |   |-- val\n                |   |   |-- *.jpg\n                |   |-- train_val\n                |   |   |-- *.jpg\n                |   |-- annotations\n                |   |   |-- annotation_train.odgt\n                |   |   |-- annotation_val.odgt\n                |   |   |-- train_val.json\n            ~~~\n\n3. Download and unpack pretrained TrackFormer model files in the `models` directory:\n\n    ```\n    wget https://vision.in.tum.de/webshare/u/meinhard/trackformer_models_v1.zip\n    unzip trackformer_models_v1.zip\n    ```\n\n4. (optional) The evaluation of MOTS20 metrics requires two steps:\n    1. Run Trackformer with `src/track.py` and output prediction files\n    2. Download the official MOTChallenge [devkit](https://github.com/dendorferpatrick/MOTChallengeEvalKit) and run the MOTS evaluation on the prediction files\n\nIn order to configure, log and reproduce our computational experiments, we structure our code with the [Sacred](http://sacred.readthedocs.io/en/latest/index.html) framework. For a detailed explanation of the Sacred interface please read its documentation.\n"
  },
  {
    "path": "docs/TRAIN.md",
    "content": "# Train TrackFormer\n\nWe provide the code as well as intermediate models of our entire training pipeline for multiple datasets. Monitoring of the training/evaluation progress is possible via command line as well as [Visdom](https://github.com/fossasia/visdom.git). For the latter, a Visdom server must be running at `vis_port` and `vis_server` (see `cfgs/train.yaml`). We set `vis_server=''` by default to deactivate Visdom logging. To deactivate Visdom logging with set parameters, you can run a training with the `no_vis=True` flag.\n\n<div align=\"center\">\n    <img src=\"../docs/visdom.gif\" alt=\"Snakeboard demo\" width=\"600\"/>\n</div>\n\nThe settings for each dataset are specified in the respective configuration files, e.g., `cfgs/train_crowdhuman.yaml`. The following train commands produced the pretrained model files mentioned in [docs/INSTALL.md](INSTALL.md).\n\n## CrowdHuman pre-training\n\n```\npython src/train.py with \\\n    crowdhuman \\\n    deformable \\\n    multi_frame \\\n    tracking \\\n    output_dir=models/crowdhuman_deformable_multi_frame \\\n```\n\n## MOT17\n\n#### Private detections\n\n```\npython src/train.py with \\\n    mot17_crowdhuman \\\n    deformable \\\n    multi_frame \\\n    tracking \\\n    output_dir=models/mot17_crowdhuman_deformable_multi_frame \\\n```\n\n#### Public detections\n\n```\npython src/train.py with \\\n    mot17 \\\n    deformable \\\n    multi_frame \\\n    tracking \\\n    output_dir=models/mot17_deformable_multi_frame \\\n```\n\n## MOT20\n\n#### Private detections\n\n```\npython src/train.py with \\\n    mot20_crowdhuman \\\n    deformable \\\n    multi_frame \\\n    tracking \\\n    output_dir=models/mot20_crowdhuman_deformable_multi_frame \\\n```\n\n## MOTS20\n\nFor our MOTS20 test set submission, we finetune a MOT17 private detection model without deformable attention, i.e., vanilla DETR, which was pre-trained on the CrowdHuman dataset. The finetuning itself conists of two training steps: (i) the original DETR panoptic segmentation head on the COCO person segmentation data and (ii) the entire TrackFormer model (including segmentation head) on the MOTS20 training set. At this point, we only provide the final model files in [docs/INSTALL.md](INSTALL.md).\n\n<!-- ```\npython src/train.py with \\\n    tracking \\\n    coco_person_masks \\\n    output_dir=models/mot17_train_private_coco_person_masks_v2 \\\n```\n\n```\npython src/train.py with \\\n    tracking \\\n    mots20 \\\n    output_dir=models/mots20_train_masks \\\n``` -->\n\n<!-- ### Ablation studies\n\nWill be added after acceptance of the paper. -->\n\n## Custom Dataset\n\nTrackFormer can be trained on additional/new object detection or multi-object tracking datasets without changing our codebase. The `crowdhuman` or `mot` datasets merely require a [COCO style](https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch) annotation file and the following folder structure:\n\n~~~\n|-- data\n    |-- custom_dataset\n    |   |-- train\n    |   |   |-- *.jpg\n    |   |-- val\n    |   |   |-- *.jpg\n    |   |-- annotations\n    |   |   |-- train.json\n    |   |   |-- val.json\n~~~\n\nIn the case of a multi-object tracking dataset, the original COCO annotations style must be extended with `seq_length`, `first_frame_image_id` and `track_id` fields. See the `src/generate_coco_from_mot.py` script for details. 
For example, the following command finetunes our `MOT17` private model for an additional 20 epochs on a custom dataset:\n\n```\npython src/train.py with \\\n    mot17 \\\n    deformable \\\n    multi_frame \\\n    tracking \\\n    resume=models/mot17_crowdhuman_deformable_trackformer/checkpoint_epoch_40.pth \\\n    output_dir=models/custom_dataset_deformable \\\n    mot_path_train=data/custom_dataset \\\n    mot_path_val=data/custom_dataset \\\n    train_split=train \\\n    val_split=val \\\n    epochs=20 \\\n```\n\n## Run with multiple GPUs\n\nAll reported results are obtained by training with a batch size of 2 and 7 GPUs, i.e., an effective batch size of 14. If you have fewer GPUs at your disposal, adjust the learning rates accordingly. To start the CrowdHuman pre-training with 7 GPUs, execute:\n\n```\npython -m torch.distributed.launch --nproc_per_node=7 --use_env src/train.py with \\\n    crowdhuman \\\n    deformable \\\n    multi_frame \\\n    tracking \\\n    output_dir=models/crowdhuman_deformable_multi_frame \\\n```\n\n## Run SLURM jobs with Submitit\n\nFurthermore, we provide a script for starting Slurm jobs with [submitit](https://github.com/facebookincubator/submitit). This includes a convenient command line interface for Slurm options as well as preemption and resuming capabilities. The aforementioned CrowdHuman pre-training can be executed on 7 x 32 GB GPUs with the following command:\n\n```\npython src/run_with_submitit.py with \\\n    num_gpus=7 \\\n    vram=32GB \\\n    cluster=slurm \\\n    train.crowdhuman \\\n    train.deformable \\\n    train.trackformer \\\n    train.tracking \\\n    train.output_dir=models/crowdhuman_train_val_deformable \\\n```"
  },
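The Custom Dataset section of `docs/TRAIN.md` above lists the extra `seq_length`, `first_frame_image_id` and `track_id` fields a COCO-style annotation file needs for tracking training, but does not show a concrete file. Below is a minimal sketch of such a file for a hypothetical two-frame sequence; the field placement (sequence fields on the image entries, `track_id` on the annotation entries) is an assumption and should be verified against `src/generate_coco_from_mot.py`, which remains the authoritative generator.

```
import json
import os

# Hypothetical COCO-style annotation file for one two-frame tracking sequence.
# The tracking extensions (seq_length, first_frame_image_id, track_id) are
# named in docs/TRAIN.md; their exact placement here is an assumption.
# Check src/generate_coco_from_mot.py before relying on it.
annotations = {
    "type": "instances",
    "categories": [{"supercategory": "person", "name": "person", "id": 1}],
    "images": [
        {"id": 0, "file_name": "custom_seq_000001.jpg",
         "height": 1080, "width": 1920,
         "seq_length": 2,             # frames in this sequence
         "first_frame_image_id": 0},  # image id of the sequence's first frame
        {"id": 1, "file_name": "custom_seq_000002.jpg",
         "height": 1080, "width": 1920,
         "seq_length": 2, "first_frame_image_id": 0},
    ],
    "annotations": [
        # the same physical person in both frames shares track_id 0
        {"id": 0, "image_id": 0, "category_id": 1, "track_id": 0,
         "bbox": [100, 200, 50, 120], "area": 50 * 120, "iscrowd": 0,
         "segmentation": [], "ignore": 0, "visibility": 1.0},
        {"id": 1, "image_id": 1, "category_id": 1, "track_id": 0,
         "bbox": [110, 205, 50, 120], "area": 50 * 120, "iscrowd": 0,
         "segmentation": [], "ignore": 0, "visibility": 1.0},
    ],
}

os.makedirs("data/custom_dataset/annotations", exist_ok=True)
with open("data/custom_dataset/annotations/train.json", "w") as anno_file:
    json.dump(annotations, anno_file)
```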
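`docs/TRAIN.md` above also notes that all reported results use a batch size of 2 on 7 GPUs (an effective batch size of 14) and that the learning rates should be adjusted when fewer GPUs are available, without stating a rule. A common heuristic, not prescribed anywhere in this repository, is to scale the learning rates linearly with the effective batch size; the sketch below only illustrates that arithmetic for the default rates in `cfgs/train.yaml`.

```
# Linear learning-rate scaling heuristic (an assumption, not the authors' stated rule):
# rescale the cfgs/train.yaml rates by (your effective batch size / 14).
REFERENCE_EFFECTIVE_BATCH = 2 * 7  # batch_size x num GPUs used for the released models
BASE_LRS = {"lr": 2e-4, "lr_backbone": 2e-5, "lr_track": 1e-4}  # cfgs/train.yaml defaults


def scaled_lrs(batch_size: int, num_gpus: int) -> dict:
    """Return linearly rescaled learning rates for a given training setup."""
    factor = (batch_size * num_gpus) / REFERENCE_EFFECTIVE_BATCH
    return {name: lr * factor for name, lr in BASE_LRS.items()}


# Example: 2 GPUs with batch_size=2 -> effective batch 4, rates scaled by 4/14.
print(scaled_lrs(batch_size=2, num_gpus=2))
```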
  {
    "path": "logs/.gitignore",
    "content": "*\n!visdom\n!.gitignore\n"
  },
  {
    "path": "logs/visdom/.gitignore",
    "content": "*\n!.gitignore"
  },
  {
    "path": "models/.gitignore",
    "content": "*\n!.gitignore"
  },
  {
    "path": "requirements.txt",
    "content": "argon2-cffi==20.1.0\nastroid==2.4.2\nasync-generator==1.10\nattrs==19.3.0\nbackcall==0.2.0\nbleach==3.2.3\ncertifi==2020.4.5.2\ncffi==1.14.4\nchardet==3.0.4\ncloudpickle==1.6.0\ncolorama==0.4.3\ncycler==0.10.0\nCython==0.29.20\ndecorator==4.4.2\ndefusedxml==0.6.0\ndocopt==0.6.2\nentrypoints==0.3\nfilelock==3.0.12\nflake8==3.8.3\nflake8-import-order==0.18.1\nfuture==0.18.2\ngdown==3.12.2\ngitdb==4.0.5\nGitPython==3.1.3\nidna==2.9\nimageio==2.8.0\nimportlib-metadata==1.6.1\nipykernel==5.4.3\nipython==7.19.0\nipython-genutils==0.2.0\nipywidgets==7.6.3\nisort==5.6.4\njedi==0.18.0\nJinja2==2.11.2\njsonpatch==1.25\njsonpickle==1.4.1\njsonpointer==2.0\njsonschema==3.2.0\njupyter==1.0.0\njupyter-client==6.1.11\njupyter-console==6.2.0\njupyter-core==4.7.0\njupyterlab-pygments==0.1.2\njupyterlab-widgets==1.0.0\nkiwisolver==1.2.0\nlap==0.4.0\nlapsolver==1.1.0\nlazy-object-proxy==1.4.3\nMarkupSafe==1.1.1\nmatplotlib==3.2.1\nmccabe==0.6.1\nmistune==0.8.4\nmore-itertools==8.4.0\nmotmetrics==1.2.0\nmunch==2.5.0\nnbclient==0.5.1\nnbconvert==6.0.7\nnbformat==5.1.2\nnest-asyncio==1.5.1\nnetworkx==2.4\nninja==1.10.0.post2\nnotebook==6.2.0\nnumpy==1.18.5\nopencv-python==4.2.0.34\npackaging==20.4\npandas==1.0.5\npandocfilters==1.4.3\nparso==0.8.1\npexpect==4.8.0\npickleshare==0.7.5\nPillow==7.1.2\npluggy==0.13.1\nprometheus-client==0.9.0\nprompt-toolkit==3.0.14\nptyprocess==0.7.0\npy==1.8.2\npy-cpuinfo==6.0.0\npyaml==20.4.0\npycodestyle==2.6.0\npycparser==2.20\npyflakes==2.2.0\nPygments==2.7.4\npylint==2.6.0\npyparsing==2.4.7\npyrsistent==0.17.3\nPySocks==1.7.1\npytest==5.4.3\npytest-benchmark==3.2.3\npython-dateutil==2.8.1\npytz==2020.1\nPyWavelets==1.1.1\nPyYAML==5.3.1\npyzmq==19.0.1\nqtconsole==5.0.2\nQtPy==1.9.0\nrequests==2.23.0\nsacred==0.8.1\nscikit-image==0.17.2\nscipy==1.4.1\nseaborn==0.10.1\nSend2Trash==1.5.0\nsix==1.15.0\nsmmap==3.0.4\nsubmitit==1.1.5\nterminado==0.9.2\ntestpath==0.4.4\ntifffile==2020.6.3\ntoml==0.10.2\ntorchfile==0.1.0\ntornado==6.1\ntqdm==4.46.1\ntraitlets==5.0.5\ntyped-ast==1.4.1\ntyping-extensions==3.7.4.3\nurllib3==1.25.9\nvisdom==0.1.8.9\nwcwidth==0.2.5\nwebencodings==0.5.1\nwebsocket-client==0.57.0\nwidgetsnbextension==3.5.1\nwrapt==1.12.1\nxmltodict==0.12.0\nzipp==3.1.0\n"
  },
  {
    "path": "setup.py",
    "content": "from setuptools import setup, find_packages\n\nsetup(name='trackformer',\n      packages=['trackformer'],\n      package_dir={'':'src'},\n      version='0.0.1',\n      install_requires=[],)\n"
  },
  {
    "path": "src/combine_frames.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCombine two sets of frames to one.\n\"\"\"\nimport os\nimport os.path as osp\n\nfrom PIL import Image\n\nOUTPUT_DIR = 'models/mot17_masks_track_rcnn_and_v3_combined'\n\nFRAME_DIR_1 = 'models/mot17_masks_track_rcnn/MOTS20-TEST'\nFRAME_DIR_2 = 'models/mot17_masks_v3/MOTS20-ALL'\n\n\nif __name__ == '__main__':\n    seqs_1 = os.listdir(FRAME_DIR_1)\n    seqs_2 = os.listdir(FRAME_DIR_2)\n\n    if not osp.exists(OUTPUT_DIR):\n        os.makedirs(OUTPUT_DIR)\n\n    for seq in seqs_1:\n        if seq in seqs_2:\n            print(seq)\n            seg_output_dir = osp.join(OUTPUT_DIR, seq)\n            if not osp.exists(seg_output_dir):\n                os.makedirs(seg_output_dir)\n\n            frames = os.listdir(osp.join(FRAME_DIR_1, seq))\n\n            for frame in frames:\n                img_1 = Image.open(osp.join(FRAME_DIR_1, seq, frame))\n                img_2 = Image.open(osp.join(FRAME_DIR_2, seq, frame))\n\n                width = img_1.size[0]\n                height = img_2.size[1]\n\n                combined_frame = Image.new('RGB', (width, height * 2))\n                combined_frame.paste(img_1, (0, 0))\n                combined_frame.paste(img_2, (0, height))\n\n                combined_frame.save(osp.join(seg_output_dir, f'{frame}'))\n"
  },
  {
    "path": "src/compute_best_mean_epoch_from_splits.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport os\nimport json\nimport numpy as np\n\n\nLOG_DIR = 'logs/visdom'\n\nMETRICS = ['MOTA', 'IDF1', 'BBOX AP IoU=0.50:0.95', 'MASK AP IoU=0.50:0.95']\n\nRUNS = [\n    'mot17_train_1_deformable_full_res',\n    'mot17_train_2_deformable_full_res',\n    'mot17_train_3_deformable_full_res',\n    'mot17_train_4_deformable_full_res',\n    'mot17_train_5_deformable_full_res',\n    'mot17_train_6_deformable_full_res',\n    'mot17_train_7_deformable_full_res',\n    ]\n\nRUNS = [\n    'mot17_train_1_no_pretrain_deformable_tracking',\n    'mot17_train_2_no_pretrain_deformable_tracking',\n    'mot17_train_3_no_pretrain_deformable_tracking',\n    'mot17_train_4_no_pretrain_deformable_tracking',\n    'mot17_train_5_no_pretrain_deformable_tracking',\n    'mot17_train_6_no_pretrain_deformable_tracking',\n    'mot17_train_7_no_pretrain_deformable_tracking',\n    ]\n\nRUNS = [\n    'mot17_train_1_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_2_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_3_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_4_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_5_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_6_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_7_coco_pretrain_deformable_tracking_lr=0.00001',\n    ]\n\nRUNS = [\n    'mot17_train_1_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_2_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_3_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_4_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_5_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_6_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    'mot17_train_7_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',\n    ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     'mot17_train_2_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     'mot17_train_3_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     'mot17_train_4_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     'mot17_train_5_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     'mot17_train_6_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     'mot17_train_7_no_pretrain_deformable_tracking_eos_coef=0.2',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_lr_drop=50',\n#     'mot17_train_2_no_pretrain_deformable_tracking_lr_drop=50',\n#     'mot17_train_3_no_pretrain_deformable_tracking_lr_drop=50',\n#     'mot17_train_4_no_pretrain_deformable_tracking_lr_drop=50',\n#     'mot17_train_5_no_pretrain_deformable_tracking_lr_drop=50',\n#     'mot17_train_6_no_pretrain_deformable_tracking_lr_drop=50',\n#     'mot17_train_7_no_pretrain_deformable_tracking_lr_drop=50',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_save_model_interval=1',\n#     'mot17_train_2_no_pretrain_deformable_tracking_save_model_interval=1',\n#     'mot17_train_3_no_pretrain_deformable_tracking_save_model_interval=1',\n#     'mot17_train_4_no_pretrain_deformable_tracking_save_model_interval=1',\n#     'mot17_train_5_no_pretrain_deformable_tracking_save_model_interval=1',\n#     'mot17_train_6_no_pretrain_deformable_tracking_save_model_interval=1',\n#     
'mot17_train_7_no_pretrain_deformable_tracking_save_model_interval=1',\n#     ]\n\n# RUNS = [\n    # 'mot17_train_1_no_pretrain_deformable_tracking_save_model_interval=1',\n    # 'mot17_train_2_no_pretrain_deformable_tracking_save_model_interval=1',\n    # 'mot17_train_3_no_pretrain_deformable_tracking_save_model_interval=1',\n    # 'mot17_train_4_no_pretrain_deformable_tracking_save_model_interval=1',\n    # 'mot17_train_5_no_pretrain_deformable_tracking_save_model_interval=1',\n    # 'mot17_train_6_no_pretrain_deformable_tracking_save_model_interval=1',\n    # 'mot17_train_7_no_pretrain_deformable_tracking_save_model_interval=1',\n    # ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_full_res',\n#     'mot17_train_2_no_pretrain_deformable_full_res',\n#     'mot17_train_3_no_pretrain_deformable_full_res',\n#     'mot17_train_4_no_pretrain_deformable_full_res',\n#     'mot17_train_5_no_pretrain_deformable_full_res',\n#     'mot17_train_6_no_pretrain_deformable_full_res',\n#     'mot17_train_7_no_pretrain_deformable_full_res',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     
'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',\n#     ]\n\n# RUNS = [\n#     'mot17_train_1_no_pretrain_deformable',\n#     'mot17_train_2_no_pretrain_deformable',\n#     'mot17_train_3_no_pretrain_deformable',\n#     'mot17_train_4_no_pretrain_deformable',\n#     'mot17_train_5_no_pretrain_deformable',\n#     'mot17_train_6_no_pretrain_deformable',\n#     'mot17_train_7_no_pretrain_deformable',\n#     ]\n\n#\n# MOTS 4-fold split\n#\n\n# RUNS = [\n#     'mots20_train_1_coco_tracking',\n#     'mots20_train_2_coco_tracking',\n#     'mots20_train_3_coco_tracking',\n#     'mots20_train_4_coco_tracking',\n#     ]\n\n# RUNS = [\n#     'mots20_train_1_coco_tracking_full_res_masks=False',\n#     'mots20_train_2_coco_tracking_full_res_masks=False',\n#     'mots20_train_3_coco_tracking_full_res_masks=False',\n#     'mots20_train_4_coco_tracking_full_res_masks=False',\n#     ]\n\n# RUNS = [\n#     'mots20_train_1_coco_full_res_pretrain_masks=False_lr_0_0001',\n#     'mots20_train_2_coco_full_res_pretrain_masks=False_lr_0_0001',\n#     'mots20_train_3_coco_full_res_pretrain_masks=False_lr_0_0001',\n#     'mots20_train_4_coco_full_res_pretrain_masks=False_lr_0_0001',\n#     ]\n\n# RUNS = [\n#     'mots20_train_1_coco_tracking_full_res_masks=False_pretrain',\n#     'mots20_train_2_coco_tracking_full_res_masks=False_pretrain',\n#     'mots20_train_3_coco_tracking_full_res_masks=False_pretrain',\n#     'mots20_train_4_coco_tracking_full_res_masks=False_pretrain',\n#     ]\n\n# RUNS = [\n#     'mot17det_train_1_mots_track_bbox_proposals_pretrain_train_1_mots_vis_save_model_interval_1',\n#     'mot17det_train_2_mots_track_bbox_proposals_pretrain_train_3_mots_vis_save_model_interval_1',\n#     'mot17det_train_3_mots_track_bbox_proposals_pretrain_train_4_mots_vis_save_model_interval_1',\n#     'mot17det_train_4_mots_track_bbox_proposals_pretrain_train_6_mots_vis_save_model_interval_1',\n# ]\n\nif __name__ == '__main__':\n    results = {}\n\n    for r in RUNS:\n        print(r)\n        log_file = os.path.join(LOG_DIR, f\"{r}.json\")\n\n        with open(log_file) as json_file:\n            data = json.load(json_file)\n\n            window = [\n                window for window in data['jsons'].values()\n                if window['title'] == 'VAL EVAL EPOCHS'][0]\n\n            for m in METRICS:\n                if m not in window['legend']:\n                    continue\n                elif m not in results:\n                    results[m] = []\n\n                idxs = window['legend'].index(m)\n\n                values = window['content']['data'][idxs]['y']\n                results[m].append(values)\n\n        print(f'NUM EPOCHS: {len(values)}')\n\n    
min_length = min([len(series) for series in next(iter(results.values()))])\n\n    for metric in results.keys():\n        results[metric] = [series[:min_length] for series in results[metric]]\n\n    mean_results = {\n        metric: np.array(results[metric]).mean(axis=0)\n        for metric in results.keys()}\n\n    print(\"* METRIC INTERVAL = BEST EPOCHS\")\n    for metric in results.keys():\n        best_interval = mean_results[metric].argmax()\n        print(mean_results[metric])\n        print(\n            f'{metric}: {mean_results[metric].max():.2%} at {best_interval + 1}/{len(mean_results[metric])} '\n            f'{[(mmetric, f\"{mean_results[mmetric][best_interval]:.2%}\") for mmetric in results.keys() if mmetric != metric]}')\n"
  },
  {
    "path": "src/generate_coco_from_crowdhuman.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nGenerates COCO data and annotation structure from CrowdHuman data.\n\"\"\"\nimport json\nimport os\nimport cv2\n\nfrom generate_coco_from_mot import check_coco_from_mot\n\nDATA_ROOT = 'data/CrowdHuman'\nVIS_THRESHOLD = 0.0\n\n\ndef generate_coco_from_crowdhuman(split_name='train_val', split='train_val'):\n    \"\"\"\n    Generate COCO data from CrowdHuman.\n    \"\"\"\n    annotations = {}\n    annotations['type'] = 'instances'\n    annotations['images'] = []\n    annotations['categories'] = [{\"supercategory\": \"person\",\n                                  \"name\": \"person\",\n                                  \"id\": 1}]\n    annotations['annotations'] = []\n    annotation_file = os.path.join(DATA_ROOT, f'annotations/{split_name}.json')\n\n    # IMAGES\n    imgs_list_dir = os.listdir(os.path.join(DATA_ROOT, split))\n    for i, img in enumerate(sorted(imgs_list_dir)):\n        im = cv2.imread(os.path.join(DATA_ROOT, split, img))\n        h, w, _ = im.shape\n\n        annotations['images'].append({\n            \"file_name\": img,\n            \"height\": h,\n            \"width\": w,\n            \"id\": i, })\n\n    # GT\n    annotation_id = 0\n    img_file_name_to_id = {\n        os.path.splitext(img_dict['file_name'])[0]: img_dict['id']\n        for img_dict in annotations['images']}\n\n    for split in ['train', 'val']:\n        if split not in split_name:\n            continue\n        odgt_annos_file = os.path.join(DATA_ROOT, f'annotations/annotation_{split}.odgt')\n        with open(odgt_annos_file, 'r+') as anno_file:\n            datalist = anno_file.readlines()\n\n        ignores = 0\n        for data in datalist:\n            json_data = json.loads(data)\n            gtboxes = json_data['gtboxes']\n            for gtbox in gtboxes:\n                if gtbox['tag'] == 'person':\n                    bbox = gtbox['fbox']\n                    area = bbox[2] * bbox[3]\n\n                    ignore = False\n                    visibility = 1.0\n                    # if 'occ' in gtbox['extra']:\n                    #     visibility = 1.0 - gtbox['extra']['occ']\n                    # if visibility <= VIS_THRESHOLD:\n                    #     ignore = True\n\n                    if 'ignore' in gtbox['extra']:\n                        ignore = ignore or bool(gtbox['extra']['ignore'])\n\n                    ignores += int(ignore)\n\n                    annotation = {\n                        \"id\": annotation_id,\n                        \"bbox\": bbox,\n                        \"image_id\": img_file_name_to_id[json_data['ID']],\n                        \"segmentation\": [],\n                        \"ignore\": int(ignore),\n                        \"visibility\": visibility,\n                        \"area\": area,\n                        \"iscrowd\": 0,\n                        \"category_id\": annotations['categories'][0]['id'],}\n\n                    annotation_id += 1\n                    annotations['annotations'].append(annotation)\n\n    # max objs per image\n    num_objs_per_image = {}\n    for anno in annotations['annotations']:\n        image_id = anno[\"image_id\"]\n        if image_id in num_objs_per_image:\n            num_objs_per_image[image_id] += 1\n        else:\n            num_objs_per_image[image_id] = 1\n\n    print(f'max objs per image: {max([n for n  in num_objs_per_image.values()])}')\n    print(f'ignore augs: {ignores}/{len(annotations[\"annotations\"])}')\n    
print(len(annotations['images']))\n\n    # for img_id, num_objs in num_objs_per_image.items():\n    #     if num_objs > 50 or num_objs < 2:\n    #         annotations['images'] = [\n    #             img for img in annotations['images']\n    #             if img_id != img['id']]\n\n    #         annotations['annotations'] = [\n    #             anno for anno in annotations['annotations']\n    #             if img_id != anno['image_id']]\n\n    # print(len(annotations['images']))\n\n    with open(annotation_file, 'w') as anno_file:\n        json.dump(annotations, anno_file, indent=4)\n\n\nif __name__ == '__main__':\n    generate_coco_from_crowdhuman(split_name='train_val', split='train_val')\n    # generate_coco_from_crowdhuman(split_name='train', split='train')\n\n    # coco_dir = os.path.join('data/CrowdHuman', 'train_val')\n    # annotation_file = os.path.join('data/CrowdHuman/annotations', 'train_val.json')\n    # check_coco_from_mot(coco_dir, annotation_file, img_id=9012)\n"
  },
  {
    "path": "src/generate_coco_from_mot.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nGenerates COCO data and annotation structure from MOTChallenge data.\n\"\"\"\nimport argparse\nimport configparser\nimport csv\nimport json\nimport os\nimport shutil\n\nimport numpy as np\nimport pycocotools.mask as rletools\nimport skimage.io as io\nimport torch\nfrom matplotlib import pyplot as plt\nfrom pycocotools.coco import COCO\nfrom scipy.optimize import linear_sum_assignment\nfrom torchvision.ops.boxes import box_iou\n\nfrom trackformer.datasets.tracking.mots20_sequence import load_mots_gt\n\nMOTS_ROOT = 'data/MOTS20'\nVIS_THRESHOLD = 0.25\n\nMOT_15_SEQS_INFO = {\n    'ETH-Bahnhof': {'img_width': 640, 'img_height': 480, 'seq_length': 1000},\n    'ETH-Sunnyday': {'img_width': 640, 'img_height': 480, 'seq_length': 354},\n    'KITTI-13': {'img_width': 1242, 'img_height': 375, 'seq_length': 340},\n    'KITTI-17': {'img_width': 1224, 'img_height': 370, 'seq_length': 145},\n    'PETS09-S2L1': {'img_width': 768, 'img_height': 576, 'seq_length': 795},\n    'TUD-Campus': {'img_width': 640, 'img_height': 480, 'seq_length': 71},\n    'TUD-Stadtmitte': {'img_width': 640, 'img_height': 480, 'seq_length': 179},}\n\n\ndef generate_coco_from_mot(split_name='train', seqs_names=None,\n                           root_split='train', mots=False, mots_vis=False,\n                           frame_range=None, data_root='data/MOT17'):\n    \"\"\"\n    Generates COCO data from MOT.\n    \"\"\"\n    if frame_range is None:\n        frame_range = {'start': 0.0, 'end': 1.0}\n\n    if mots:\n        data_root = MOTS_ROOT\n    root_split_path = os.path.join(data_root, root_split)\n    root_split_mots_path = os.path.join(MOTS_ROOT, root_split)\n    coco_dir = os.path.join(data_root, split_name)\n\n    if os.path.isdir(coco_dir):\n        shutil.rmtree(coco_dir)\n\n    os.mkdir(coco_dir)\n\n    annotations = {}\n    annotations['type'] = 'instances'\n    annotations['images'] = []\n    annotations['categories'] = [{\"supercategory\": \"person\",\n                                  \"name\": \"person\",\n                                  \"id\": 1}]\n    annotations['annotations'] = []\n\n    annotations_dir = os.path.join(os.path.join(data_root, 'annotations'))\n    if not os.path.isdir(annotations_dir):\n        os.mkdir(annotations_dir)\n    annotation_file = os.path.join(annotations_dir, f'{split_name}.json')\n\n    # IMAGE FILES\n    img_id = 0\n\n    seqs = sorted(os.listdir(root_split_path))\n\n    if seqs_names is not None:\n        seqs = [s for s in seqs if s in seqs_names]\n    annotations['sequences'] = seqs\n    annotations['frame_range'] = frame_range\n    print(split_name, seqs)\n\n    for seq in seqs:\n        # CONFIG FILE\n        config = configparser.ConfigParser()\n        config_file = os.path.join(root_split_path, seq, 'seqinfo.ini')\n\n        if os.path.isfile(config_file):\n            config.read(config_file)\n            img_width = int(config['Sequence']['imWidth'])\n            img_height = int(config['Sequence']['imHeight'])\n            seq_length = int(config['Sequence']['seqLength'])\n        else:\n            img_width = MOT_15_SEQS_INFO[seq]['img_width']\n            img_height = MOT_15_SEQS_INFO[seq]['img_height']\n            seq_length = MOT_15_SEQS_INFO[seq]['seq_length']\n\n        seg_list_dir = os.listdir(os.path.join(root_split_path, seq, 'img1'))\n        start_frame = int(frame_range['start'] * seq_length)\n        end_frame = int(frame_range['end'] * seq_length)\n        
seg_list_dir = seg_list_dir[start_frame: end_frame]\n\n        print(f\"{seq}: {len(seg_list_dir)}/{seq_length}\")\n        seq_length = len(seg_list_dir)\n\n        for i, img in enumerate(sorted(seg_list_dir)):\n\n            if i == 0:\n                first_frame_image_id = img_id\n\n            annotations['images'].append({\"file_name\": f\"{seq}_{img}\",\n                                          \"height\": img_height,\n                                          \"width\": img_width,\n                                          \"id\": img_id,\n                                          \"frame_id\": i,\n                                          \"seq_length\": seq_length,\n                                          \"first_frame_image_id\": first_frame_image_id})\n\n            img_id += 1\n\n            os.symlink(os.path.join(os.getcwd(), root_split_path, seq, 'img1', img),\n                       os.path.join(coco_dir, f\"{seq}_{img}\"))\n\n    # GT\n    annotation_id = 0\n    img_file_name_to_id = {\n        img_dict['file_name']: img_dict['id']\n        for img_dict in annotations['images']}\n    for seq in seqs:\n        # GT FILE\n        gt_file_path = os.path.join(root_split_path, seq, 'gt', 'gt.txt')\n        if mots:\n            gt_file_path = os.path.join(\n                root_split_mots_path,\n                seq.replace('MOT17', 'MOTS20'),\n                'gt',\n                'gt.txt')\n        if not os.path.isfile(gt_file_path):\n            continue\n\n        seq_annotations = []\n        if mots:\n            mask_objects_per_frame = load_mots_gt(gt_file_path)\n            for frame_id, mask_objects in mask_objects_per_frame.items():\n                for mask_object in mask_objects:\n                    # class_id = 1 is car\n                    # class_id = 2 is person\n                    # class_id = 10 IGNORE\n                    if mask_object.class_id == 1:\n                        continue\n\n                    bbox = rletools.toBbox(mask_object.mask)\n                    bbox = [int(c) for c in bbox]\n                    area = bbox[2] * bbox[3]\n                    image_id = img_file_name_to_id.get(f\"{seq}_{frame_id:06d}.jpg\", None)\n                    if image_id is None:\n                        continue\n\n                    segmentation = {\n                        'size': mask_object.mask['size'],\n                        'counts': mask_object.mask['counts'].decode(encoding='UTF-8')}\n\n                    annotation = {\n                        \"id\": annotation_id,\n                        \"bbox\": bbox,\n                        \"image_id\": image_id,\n                        \"segmentation\": segmentation,\n                        \"ignore\": mask_object.class_id == 10,\n                        \"visibility\": 1.0,\n                        \"area\": area,\n                        \"iscrowd\": 0,\n                        \"seq\": seq,\n                        \"category_id\": annotations['categories'][0]['id'],\n                        \"track_id\": mask_object.track_id}\n\n                    seq_annotations.append(annotation)\n                    annotation_id += 1\n\n            annotations['annotations'].extend(seq_annotations)\n        else:\n\n            seq_annotations_per_frame = {}\n            with open(gt_file_path, \"r\") as gt_file:\n                reader = csv.reader(gt_file, delimiter=' ' if mots else ',')\n\n                for row in reader:\n                    if int(row[6]) == 1 and (seq in MOT_15_SEQS_INFO or 
int(row[7]) == 1):\n                        bbox = [float(row[2]), float(row[3]), float(row[4]), float(row[5])]\n                        bbox = [int(c) for c in bbox]\n\n                        area = bbox[2] * bbox[3]\n                        visibility = float(row[8])\n                        frame_id = int(row[0])\n                        image_id = img_file_name_to_id.get(f\"{seq}_{frame_id:06d}.jpg\", None)\n                        if image_id is None:\n                            continue\n                        track_id = int(row[1])\n\n                        annotation = {\n                            \"id\": annotation_id,\n                            \"bbox\": bbox,\n                            \"image_id\": image_id,\n                            \"segmentation\": [],\n                            \"ignore\": 0 if visibility > VIS_THRESHOLD else 1,\n                            \"visibility\": visibility,\n                            \"area\": area,\n                            \"iscrowd\": 0,\n                            \"seq\": seq,\n                            \"category_id\": annotations['categories'][0]['id'],\n                            \"track_id\": track_id}\n\n                        seq_annotations.append(annotation)\n                        if frame_id not in seq_annotations_per_frame:\n                            seq_annotations_per_frame[frame_id] = []\n                        seq_annotations_per_frame[frame_id].append(annotation)\n\n                        annotation_id += 1\n\n            annotations['annotations'].extend(seq_annotations)\n\n            #change ignore based on MOTS mask\n            if mots_vis:\n                gt_file_mots = os.path.join(\n                    root_split_mots_path,\n                    seq.replace('MOT17', 'MOTS20'),\n                    'gt',\n                    'gt.txt')\n                if os.path.isfile(gt_file_mots):\n                    mask_objects_per_frame = load_mots_gt(gt_file_mots)\n\n                    for frame_id, frame_annotations in seq_annotations_per_frame.items():\n                        mask_objects = mask_objects_per_frame[frame_id]\n                        mask_object_bboxes = [rletools.toBbox(obj.mask) for obj in mask_objects]\n                        mask_object_bboxes = torch.tensor(mask_object_bboxes).float()\n\n                        frame_boxes = [a['bbox'] for a in frame_annotations]\n                        frame_boxes = torch.tensor(frame_boxes).float()\n\n                        # x,y,w,h --> x,y,x,y\n                        frame_boxes[:, 2:] += frame_boxes[:, :2]\n                        mask_object_bboxes[:, 2:] += mask_object_bboxes[:, :2]\n\n                        mask_iou = box_iou(mask_object_bboxes, frame_boxes)\n\n                        mask_indices, frame_indices = linear_sum_assignment(-mask_iou)\n                        for m_i, f_i in zip(mask_indices, frame_indices):\n                            if mask_iou[m_i, f_i] < 0.5:\n                                continue\n\n                            if not frame_annotations[f_i]['visibility']:\n                                frame_annotations[f_i]['ignore'] = 0\n\n    # max objs per image\n    num_objs_per_image = {}\n    for anno in annotations['annotations']:\n        image_id = anno[\"image_id\"]\n\n        if image_id in num_objs_per_image:\n            num_objs_per_image[image_id] += 1\n        else:\n            num_objs_per_image[image_id] = 1\n\n    print(f'max objs per image: 
{max(list(num_objs_per_image.values()))}')\n\n    with open(annotation_file, 'w') as anno_file:\n        json.dump(annotations, anno_file, indent=4)\n\n\ndef check_coco_from_mot(coco_dir='data/MOT17/mot17_train_coco', annotation_file='data/MOT17/annotations/mot17_train_coco.json', img_id=None):\n    \"\"\"\n    Visualize generated COCO data. Only used for debugging.\n    \"\"\"\n    # coco_dir = os.path.join(data_root, split)\n    # annotation_file = os.path.join(coco_dir, 'annotations.json')\n\n    coco = COCO(annotation_file)\n    cat_ids = coco.getCatIds(catNms=['person'])\n    if img_id == None:\n        img_ids = coco.getImgIds(catIds=cat_ids)\n        index = np.random.randint(0, len(img_ids))\n        img_id = img_ids[index]\n    img = coco.loadImgs(img_id)[0]\n\n    i = io.imread(os.path.join(coco_dir, img['file_name']))\n\n    plt.imshow(i)\n    plt.axis('off')\n    ann_ids = coco.getAnnIds(imgIds=img['id'], catIds=cat_ids, iscrowd=None)\n    anns = coco.loadAnns(ann_ids)\n    coco.showAnns(anns, draw_bbox=True)\n    plt.savefig('annotations.png')\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(description='Generate COCO from MOT.')\n    parser.add_argument('--mots20', action='store_true')\n    parser.add_argument('--mot20', action='store_true')\n    args = parser.parse_args()\n\n    mot15_seqs_names = list(MOT_15_SEQS_INFO.keys())\n\n    if args.mots20:\n        #\n        # MOTS20\n        #\n\n        # TRAIN SET\n        generate_coco_from_mot(\n            'mots20_train_coco',\n            seqs_names=['MOTS20-02', 'MOTS20-05', 'MOTS20-09', 'MOTS20-11'],\n            mots=True)\n\n        # TRAIN SPLITS\n        for i in range(4):\n            train_seqs = ['MOTS20-02', 'MOTS20-05', 'MOTS20-09', 'MOTS20-11']\n            val_seqs = train_seqs.pop(i)\n\n            generate_coco_from_mot(\n                f'mots20_train_{i + 1}_coco',\n                seqs_names=train_seqs, mots=True)\n            generate_coco_from_mot(\n                f'mots20_val_{i + 1}_coco',\n                seqs_names=val_seqs, mots=True)\n\n    elif args.mot20:\n        data_root = 'data/MOT20'\n        train_seqs = ['MOT20-01', 'MOT20-02', 'MOT20-03', 'MOT20-05',]\n        # TRAIN SET\n        generate_coco_from_mot(\n            'mot20_train_coco',\n            seqs_names=train_seqs,\n            data_root=data_root)\n\n        for i in range(0, len(train_seqs)):\n            train_seqs_copy = train_seqs.copy()\n            val_seqs = train_seqs_copy.pop(i)\n\n            generate_coco_from_mot(\n                f'mot20_train_{i + 1}_coco',\n                seqs_names=train_seqs_copy,\n                data_root=data_root)\n            generate_coco_from_mot(\n                f'mot20_val_{i + 1}_coco',\n                seqs_names=val_seqs,\n                data_root=data_root)\n\n        # CROSS VAL FRAME SPLIT\n        generate_coco_from_mot(\n            'mot20_train_cross_val_frame_0_0_to_0_5_coco',\n            seqs_names=train_seqs,\n            frame_range={'start': 0, 'end': 0.5},\n            data_root=data_root)\n        generate_coco_from_mot(\n            'mot20_train_cross_val_frame_0_5_to_1_0_coco',\n            seqs_names=train_seqs,\n            frame_range={'start': 0.5, 'end': 1.0},\n            data_root=data_root)\n\n    else:\n        #\n        # MOT17\n        #\n\n        # CROSS VAL SPLIT 1\n        generate_coco_from_mot(\n            'mot17_train_cross_val_1_coco',\n            seqs_names=['MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 
'MOT17-11-FRCNN'])\n        generate_coco_from_mot(\n            'mot17_val_cross_val_1_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-10-FRCNN', 'MOT17-13-FRCNN'])\n\n        # CROSS VAL SPLIT 2\n        generate_coco_from_mot(\n            'mot17_train_cross_val_2_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-13-FRCNN'])\n        generate_coco_from_mot(\n            'mot17_val_cross_val_2_coco',\n            seqs_names=['MOT17-04-FRCNN', 'MOT17-11-FRCNN'])\n\n        # CROSS VAL SPLIT 3\n        generate_coco_from_mot(\n            'mot17_train_cross_val_3_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'])\n        generate_coco_from_mot(\n            'mot17_val_cross_val_3_coco',\n            seqs_names=['MOT17-05-FRCNN', 'MOT17-09-FRCNN'])\n\n        # CROSS VAL FRAME SPLIT\n        generate_coco_from_mot(\n            'mot17_train_cross_val_frame_0_0_to_0_25_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],\n            frame_range={'start': 0, 'end': 0.25})\n        generate_coco_from_mot(\n            'mot17_train_cross_val_frame_0_0_to_0_5_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],\n            frame_range={'start': 0, 'end': 0.5})\n        generate_coco_from_mot(\n            'mot17_train_cross_val_frame_0_5_to_1_0_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],\n            frame_range={'start': 0.5, 'end': 1.0})\n\n        generate_coco_from_mot(\n            'mot17_train_cross_val_frame_0_75_to_1_0_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],\n            frame_range={'start': 0.75, 'end': 1.0})\n\n        # TRAIN SET\n        generate_coco_from_mot(\n            'mot17_train_coco',\n            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN',\n                        'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'])\n\n        for i in range(0, 7):\n            train_seqs = [\n                'MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN',\n                'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN']\n            val_seqs = train_seqs.pop(i)\n\n            generate_coco_from_mot(\n                f'mot17_train_{i + 1}_coco',\n                seqs_names=train_seqs)\n            generate_coco_from_mot(\n                f'mot17_val_{i + 1}_coco',\n                seqs_names=val_seqs)\n"
  },
  {
    "path": "src/parse_mot_results_to_tex.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nParse MOT results and generate a LaTeX table.\n\"\"\"\n\nMOTS = False\nMOT20 = False\n# F_CONTENT = \"\"\"\n# \tMOTA\tIDF1\tMOTP\tMT\tML\tFP\tFN\tRecall\tPrecision\tFAF\tIDSW\tFrag\n#     MOT17-01-DPM\t41.6\t44.2\t77.1\t5\t8\t496\t3252\t49.6\t86.6\t1.1\t22\t58\n#     MOT17-01-FRCNN\t41.0\t42.1\t77.1\t6\t9\t571\t3207\t50.3\t85.0\t1.3\t25\t61\n#     MOT17-01-SDP\t41.8\t44.3\t76.8\t7\t8\t612\t3112\t51.8\t84.5\t1.4\t27\t65\n#     MOT17-03-DPM\t79.3\t71.6\t79.1\t94\t8\t1142\t20297\t80.6\t98.7\t0.8\t191\t525\n#     MOT17-03-FRCNN\t79.6\t72.7\t79.1\t93\t7\t1234\t19945\t80.9\t98.6\t0.8\t180\t508\n#     MOT17-03-SDP\t80.0\t72.0\t79.0\t93\t8\t1223\t19530\t81.3\t98.6\t0.8\t181\t526\n#     MOT17-06-DPM\t54.8\t42.0\t79.5\t54\t63\t314\t4839\t58.9\t95.7\t0.3\t175\t244\n#     MOT17-06-FRCNN\t55.6\t42.9\t79.3\t57\t59\t363\t4676\t60.3\t95.1\t0.3\t190\t264\n#     MOT17-06-SDP\t55.5\t43.8\t79.3\t56\t61\t354\t4712\t60.0\t95.2\t0.3\t181\t262\n#     MOT17-07-DPM\t44.8\t42.0\t76.6\t11\t16\t1322\t7851\t53.5\t87.2\t2.6\t147\t275\n#     MOT17-07-FRCNN\t45.5\t41.5\t76.6\t13\t15\t1263\t7785\t53.9\t87.8\t2.5\t156\t289\n#     MOT17-07-SDP\t45.2\t42.4\t76.6\t13\t15\t1332\t7775\t54.0\t87.3\t2.7\t147\t279\n#     MOT17-08-DPM\t26.5\t32.2\t83.0\t11\t37\t378\t15066\t28.7\t94.1\t0.6\t88\t146\n#     MOT17-08-FRCNN\t26.5\t31.9\t83.1\t11\t36\t332\t15113\t28.5\t94.8\t0.5\t89\t141\n#     MOT17-08-SDP\t26.6\t32.3\t83.1\t11\t36\t350\t15067\t28.7\t94.5\t0.6\t91\t147\n#     MOT17-12-DPM\t46.1\t53.1\t82.7\t16\t45\t207\t4434\t48.8\t95.3\t0.2\t30\t50\n#     MOT17-12-FRCNN\t46.1\t52.6\t82.6\t15\t45\t197\t4443\t48.7\t95.5\t0.2\t30\t48\n#     MOT17-12-SDP\t46.0\t53.0\t82.6\t16\t45\t221\t4426\t48.9\t95.0\t0.2\t30\t52\n#     MOT17-14-DPM\t31.6\t36.6\t74.8\t13\t78\t636\t11812\t36.1\t91.3\t0.8\t196\t331\n#     MOT17-14-FRCNN\t31.6\t37.6\t74.6\t13\t77\t780\t11653\t37.0\t89.8\t1.0\t202\t350\n#     MOT17-14-SDP\t31.7\t37.1\t74.7\t13\t76\t749\t11677\t36.8\t90.1\t1.0\t205\t344\n#     OVERALL 61.5\t59.6\t78.9\t621 \t752\t14076\t200672\t64.4\t96.3\t0.8\t2583\t4965\n#     \"\"\"\n\nF_CONTENT = \"\"\"\n\tMOTA\tMOTP\tIDF1\tIDP\tIDR\tTP\tFP\tFN\tRcll\tPrcn\tMTR\tPTR\tMLR\tMT\tPT\tML\tIDSW\tFAR\tFM\n    MOT17-01-DPM\t49.92\t79.58\t42.97\t58.18\t34.06\t3518\t258\t2932\t54.54\t93.17\t20.83\t45.83\t33.33\t5\t11\t8\t40\t0.57\t50\n    MOT17-01-FRCNN\t50.87\t79.26\t42.33\t55.77\t34.11\t3637\t308\t2813\t56.39\t92.19\t33.33\t41.67\t25.00\t8\t10\t6\t48\t0.68\t57\n    MOT17-01-SDP\t53.66\t78.16\t45.33\t54.31\t38.90\t4064\t556\t2386\t63.01\t87.97\t41.67\t37.50\t20.83\t10\t9\t5\t47\t1.24\t72\n    MOT17-03-DPM\t74.05\t79.41\t66.45\t76.34\t58.83\t79279\t1389\t25396\t75.74\t98.28\t57.43\t30.41\t12.16\t85\t45\t18\t374\t0.93\t420\n    MOT17-03-FRCNN\t75.34\t79.45\t66.98\t76.21\t59.75\t80635\t1434\t24040\t77.03\t98.25\t56.76\t32.43\t10.81\t84\t48\t16\t335\t0.96\t409\n    MOT17-03-SDP\t79.64\t79.04\t65.84\t72.00\t60.65\t86043\t2134\t18632\t82.20\t97.58\t64.19\t27.03\t8.78\t95\t40\t13\t545\t1.42\t522\n    MOT17-06-DPM\t53.62\t82.55\t51.83\t64.47\t43.33\t7209\t711\t4575\t61.18\t91.02\t28.38\t37.84\t33.78\t63\t84\t75\t180\t0.60\t170\n    MOT17-06-FRCNN\t57.21\t81.73\t54.75\t63.67\t48.02\t7928\t960\t3856\t67.28\t89.20\t32.88\t45.50\t21.62\t73\t101\t48\t226\t0.80\t223\n    MOT17-06-SDP\t56.43\t81.93\t54.00\t62.70\t47.42\t7895\t1017\t3889\t67.00\t88.59\t36.94\t37.39\t25.68\t82\t83\t57\t228\t0.85\t222\n    
MOT17-07-DPM\t52.59\t80.54\t48.08\t66.84\t37.54\t9230\t258\t7663\t54.64\t97.28\t20.00\t53.33\t26.67\t12\t32\t16\t88\t0.52\t148\n    MOT17-07-FRCNN\t52.39\t80.11\t47.88\t64.56\t38.05\t9456\t499\t7437\t55.98\t94.99\t20.00\t61.67\t18.33\t12\t37\t11\t106\t1.00\t174\n    MOT17-07-SDP\t54.56\t79.84\t47.81\t62.29\t38.79\t9928\t590\t6965\t58.77\t94.39\t26.67\t55.00\t18.33\t16\t33\t11\t121\t1.18\t199\n    MOT17-08-DPM\t32.52\t83.93\t31.85\t60.34\t21.63\t7286\t288\t13838\t34.49\t96.20\t13.16\t44.74\t42.11\t10\t34\t32\t128\t0.46\t154\n    MOT17-08-FRCNN\t31.11\t84.47\t31.68\t62.05\t21.27\t6958\t285\t14166\t32.94\t96.07\t13.16\t39.47\t47.37\t10\t30\t36\t102\t0.46\t120\n    MOT17-08-SDP\t34.96\t83.31\t33.05\t58.02\t23.11\t7972\t443\t13152\t37.74\t94.74\t15.79\t48.68\t35.53\t12\t37\t27\t144\t0.71\t175\n    MOT17-12-DPM\t51.26\t83.01\t57.74\t72.70\t47.88\t5102\t606\t3565\t58.87\t89.38\t23.08\t42.86\t34.07\t21\t39\t31\t53\t0.67\t86\n    MOT17-12-FRCNN\t47.71\t83.16\t56.73\t72.39\t46.64\t4882\t702\t3785\t56.33\t87.43\t20.88\t43.96\t35.16\t19\t40\t32\t45\t0.78\t72\n    MOT17-12-SDP\t48.88\t82.87\t57.46\t70.30\t48.59\t5140\t850\t3527\t59.31\t85.81\t24.18\t45.05\t30.77\t22\t41\t28\t54\t0.94\t89\n    MOT17-14-DPM\t38.07\t77.47\t42.03\t66.15\t30.80\t7978\t627\t10505\t43.16\t92.71\t9.15\t52.44\t38.41\t15\t86\t63\t314\t0.84\t296\n    MOT17-14-FRCNN\t37.78\t76.70\t41.78\t59.55\t32.18\t8688\t1300\t9795\t47.01\t86.98\t10.37\t55.49\t34.15\t17\t91\t56\t406\t1.73\t382\n    MOT17-14-SDP\t40.40\t76.40\t42.38\t57.96\t33.40\t9277\t1376\t9206\t50.19\t87.08\t10.37\t59.76\t29.88\t17\t98\t49\t434\t1.83\t437\n    OVERALL\\t62.30\\t79.77\\t57.58\\t70.58\\t48.62\\t372105\\t16591\t192123\t65.95\t95.73\t29.21\t43.69\t27.09\t688\t1029\t638\t4018\t0.93\t4477\n    \"\"\"\n\n\n# MOTS = True\n# F_CONTENT = \"\"\"\n#     sMOTSA\tMOTSA\tMOTSP\tIDF1\tMT\tML\tMTR\tPTR\tMLR\tGT\tTP\tFP\tFN\tRcll\tPrcn\tFM\tFMR\tIDSW\tIDSWR\n#     MOTS20-01\t59.79\t79.56\t77.60\t68.00\t10\t0\t83.33\t16.67\t0.00\t12\t2742\t255\t364\t88.28\t91.49\t37\t41.91\t16\t18.1\n#     MOTS20-06\t63.91\t78.72\t82.85\t65.14\t115\t22\t60.53\t27.89\t11.58\t190\t8479\t595\t1335\t86.40\t93.44\t218\t252.32\t158\t182.9\n#     MOTS20-07\t43.17\t58.52\t76.59\t53.60\t15\t17\t25.86\t44.83\t29.31\t58\t8445\t834\t4433\t65.58\t91.01\t177\t269.91\t75\t114.4\n#     MOTS20-12\t62.04\t74.64\t84.93\t76.83\t41\t9\t60.29\t26.47\t13.24\t68\t5408\t549\t1063\t83.57\t90.78\t76\t90.94\t29\t34.7\n#     OVERALL\t54.86\t69.92\t80.62\t63.58\t181\t48\t55.18\t30.18\t14.63\t328\t25074\t2233\t7195\t77.70\t91.82\t508\t653.77\t278\t357.8\n#     \"\"\"\n\n\nMOT20 = True\nF_CONTENT = \"\"\"\n\tMOTA\tMOTP\tIDF1\tIDP\tIDR\tHOTA\tDetA\tAssA\tDetRe\tDetPr\tAssRe\tAssPr\tLocA\tTP\tFP\tFN\tRcll\tPrcn\tIDSW\\tMT\\tML\n    MOT20-04\t82.72\t82.57\t75.59\t79.81\t71.79\t63.21\t68.29\t58.64\t73.11\t81.27\t63.43\t80.18\t84.53\t236919\t9639\t37165\t86.44\t96.09\t566\\t490\\t28\n    MOT20-06\t55.88\t79.00\t53.51\t68.11\t44.07\t43.85\t45.80\t42.23\t49.13\t75.94\t45.95\t74.07\t81.72\t80317\t5582\t52440\t60.50\t93.50\t545\\t96\\t72\n    MOT20-07\t56.21\t85.22\t59.05\t78.90\t47.18\t49.19\t48.45\t50.21\t50.63\t84.68\t53.31\t83.48\t86.86\t19245\t547\t13856\t58.14\t97.24\t92\\t41\\t20\n    MOT20-08\t46.03\t77.71\t48.34\t65.65\t38.26\t38.89\t38.46\t39.70\t41.87\t71.85\t43.36\t71.76\t81.08\t40572\t4580\t36912\t52.36\t89.86\t329\\t39\\t61\n    OVERALL\\t68.64\t81.42\t65.70\t75.63\t58.08\t54.67\t56.68\t52.97\t60.84\t79.22\t57.39\t78.50\t83.69\t377053\t20348\t140373\t72.87\t94.88\t1532\\t666\\t181\n\"\"\"\n\n\nif __name__ == 
'__main__':\n    # remove empty lines at start and beginning of F_CONTENT\n    F_CONTENT = F_CONTENT.strip()\n    F_CONTENT = F_CONTENT.splitlines()\n\n    start_ixs = range(1, len(F_CONTENT) - 1, 3)\n    if MOTS or MOT20:\n        start_ixs = range(1, len(F_CONTENT) - 1)\n\n    metrics_res = {}\n\n    for i in range(len(['DPM', 'FRCNN', 'SDP'])):\n        for start in start_ixs:\n            f_list = F_CONTENT[start + i].strip().split('\\t')\n            metrics_res[f_list[0]] = f_list[1:]\n\n        if MOTS or MOT20:\n            break\n\n    metrics_names = F_CONTENT[0].replace('\\n', '').split()\n\n    print(metrics_names)\n\n    metrics_res['ALL'] = F_CONTENT[-1].strip().split('\\t')[1:]\n\n    for full_seq_name, data in metrics_res.items():\n        seq_name = '-'.join(full_seq_name.split('-')[:2])\n        detection_name = full_seq_name.split('-')[-1]\n\n        if MOTS:\n            print(f\"{seq_name} & \"\n                f\"{float(data[metrics_names.index('sMOTSA')]):.1f} & \"\n                f\"{float(data[metrics_names.index('IDF1')]):.1f} & \"\n                f\"{float(data[metrics_names.index('MOTSA')]):.1f} & \"\n                f\"{data[metrics_names.index('FP')]} & \"\n                f\"{data[metrics_names.index('FN')]} & \"\n                f\"{data[metrics_names.index('IDSW')]} \\\\\\\\\")\n        else:\n            print(f\"{seq_name} & {detection_name} & \"\n                f\"{float(data[metrics_names.index('MOTA')]):.1f} & \"\n                f\"{float(data[metrics_names.index('IDF1')]):.1f} & \"\n                f\"{data[metrics_names.index('MT')]} & \"\n                f\"{data[metrics_names.index('ML')]} & \"\n                f\"{data[metrics_names.index('FP')]} & \"\n                f\"{data[metrics_names.index('FN')]} & \"\n                f\"{data[metrics_names.index('IDSW')]} \\\\\\\\\")\n"
  },
  {
    "path": "src/run_with_submitit.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nA script to run multinode training with submitit.\n\"\"\"\nimport os\nimport sys\nimport uuid\nfrom pathlib import Path\nfrom argparse import Namespace\n\nimport sacred\nimport submitit\n\nimport train\nfrom trackformer.util.misc import nested_dict_to_namespace\n\nWORK_DIR = str(Path(__file__).parent.absolute())\n\n\nex = sacred.Experiment('submit', ingredients=[train.ex])\nex.add_config('cfgs/submit.yaml')\n\n\ndef get_shared_folder() -> Path:\n    user = os.getenv(\"USER\")\n    if Path(\"/storage/slurm\").is_dir():\n        path = Path(f\"/storage/slurm/{user}/runs\")\n        path.mkdir(exist_ok=True)\n        return path\n    raise RuntimeError(\"No shared folder available\")\n\n\ndef get_init_file() -> Path:\n    # Init file must not exist, but it's parent dir must exist.\n    os.makedirs(str(get_shared_folder()), exist_ok=True)\n    init_file = get_shared_folder() / f\"{uuid.uuid4().hex}_init\"\n    if init_file.exists():\n        os.remove(str(init_file))\n    return init_file\n\n\nclass Trainer:\n    def __init__(self, args: Namespace) -> None:\n        self.args = args\n\n    def __call__(self) -> None:\n        sys.path.append(WORK_DIR)\n\n        import train\n        self._setup_gpu_args()\n        train.train(self.args)\n\n    def checkpoint(self) -> submitit.helpers.DelayedSubmission:\n        import os\n\n        import submitit\n\n        self.args.dist_url = get_init_file().as_uri()\n        checkpoint_file = os.path.join(self.args.output_dir, \"checkpoint.pth\")\n        if os.path.exists(checkpoint_file):\n            self.args.resume = checkpoint_file\n            self.args.resume_optim = True\n            self.args.resume_vis = True\n            self.args.load_mask_head_from_model = None\n        print(\"Requeuing \", self.args)\n        empty_trainer = type(self)(self.args)\n        return submitit.helpers.DelayedSubmission(empty_trainer)\n\n    def _setup_gpu_args(self) -> None:\n        from pathlib import Path\n\n        import submitit\n\n        job_env = submitit.JobEnvironment()\n        self.args.output_dir = Path(str(self.args.output_dir).replace(\"%j\", str(job_env.job_id)))\n        print(self.args.output_dir)\n        self.args.gpu = job_env.local_rank\n        self.args.rank = job_env.global_rank\n        self.args.world_size = job_env.num_tasks\n        print(f\"Process group: {job_env.num_tasks} tasks, rank: {job_env.global_rank}\")\n\n\ndef main(args: Namespace):\n    # Note that the folder will depend on the job_id, to easily track experiments\n    if args.job_dir == \"\":\n        args.job_dir = get_shared_folder() / \"%j\"\n\n    executor = submitit.AutoExecutor(\n        folder=args.job_dir, cluster=args.cluster, slurm_max_num_timeout=30)\n\n    # cluster setup is defined by environment variables\n    num_gpus_per_node = args.num_gpus\n    nodes = args.nodes\n    timeout_min = args.timeout\n\n    if args.slurm_gres:\n        slurm_gres = args.slurm_gres\n    else:\n        slurm_gres = f'gpu:{num_gpus_per_node},VRAM:{args.vram}'\n        # slurm_gres = f'gpu:rtx_8000:{num_gpus_per_node}'\n\n    executor.update_parameters(\n        mem_gb=args.mem_per_gpu * num_gpus_per_node,\n        gpus_per_node=num_gpus_per_node,\n        tasks_per_node=num_gpus_per_node,  # one task per GPU\n        cpus_per_task=args.cpus_per_task,\n        nodes=nodes,\n        timeout_min=timeout_min,  # max is 60 * 72,\n        slurm_partition=args.slurm_partition,\n        
slurm_constraint=args.slurm_constraint,\n        slurm_comment=args.slurm_comment,\n        slurm_exclude=args.slurm_exclude,\n        slurm_gres=slurm_gres\n    )\n\n    executor.update_parameters(name=\"fair_track\")\n\n    args.train.dist_url = get_init_file().as_uri()\n    # args.output_dir = args.job_dir\n\n    trainer = Trainer(args.train)\n    job = executor.submit(trainer)\n\n    print(\"Submitted job_id:\", job.job_id)\n\n    if args.cluster == 'debug':\n        job.wait()\n\n\n@ex.main\ndef load_config(_config, _run):\n    \"\"\" We use sacred only for config loading from YAML files. \"\"\"\n    sacred.commands.print_config(_run)\n\n\nif __name__ == '__main__':\n    # TODO: hierarchical Namespacing for nested dict\n    config = ex.run_commandline().config\n    args = nested_dict_to_namespace(config)\n    # args.train = Namespace(**config['train'])\n    main(args)\n"
  },
  {
    "path": "src/track.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport os\nimport sys\nimport time\nfrom os import path as osp\n\nimport motmetrics as mm\nimport numpy as np\nimport sacred\nimport torch\nimport tqdm\nimport yaml\nfrom torch.utils.data import DataLoader\n\nfrom trackformer.datasets.tracking import TrackDatasetFactory\nfrom trackformer.models import build_model\nfrom trackformer.models.tracker import Tracker\nfrom trackformer.util.misc import nested_dict_to_namespace\nfrom trackformer.util.track_utils import (evaluate_mot_accums, get_mot_accum,\n                                          interpolate_tracks, plot_sequence)\n\nmm.lap.default_solver = 'lap'\n\nex = sacred.Experiment('track')\nex.add_config('cfgs/track.yaml')\nex.add_named_config('reid', 'cfgs/track_reid.yaml')\n\n\n@ex.automain\ndef main(seed, dataset_name, obj_detect_checkpoint_file, tracker_cfg,\n         write_images, output_dir, interpolate, verbose, load_results_dir,\n         data_root_dir, generate_attention_maps, frame_range,\n         _config, _log, _run, obj_detector_model=None):\n    if write_images:\n        assert output_dir is not None\n\n    # obj_detector_model is only provided when run as evaluation during\n    # training. in that case we omit verbose outputs.\n    if obj_detector_model is None:\n        sacred.commands.print_config(_run)\n\n    # set all seeds\n    if seed is not None:\n        torch.manual_seed(seed)\n        torch.cuda.manual_seed(seed)\n        np.random.seed(seed)\n        torch.backends.cudnn.deterministic = True\n\n    if output_dir is not None:\n        if not osp.exists(output_dir):\n            os.makedirs(output_dir)\n\n        yaml.dump(\n            _config,\n            open(osp.join(output_dir, 'track.yaml'), 'w'),\n            default_flow_style=False)\n\n    ##########################\n    # Initialize the modules #\n    ##########################\n\n    # object detection\n    if obj_detector_model is None:\n        obj_detect_config_path = os.path.join(\n            os.path.dirname(obj_detect_checkpoint_file),\n            'config.yaml')\n        obj_detect_args = nested_dict_to_namespace(yaml.unsafe_load(open(obj_detect_config_path)))\n        img_transform = obj_detect_args.img_transform\n        obj_detector, _, obj_detector_post = build_model(obj_detect_args)\n\n        obj_detect_checkpoint = torch.load(\n            obj_detect_checkpoint_file, map_location=lambda storage, loc: storage)\n\n        obj_detect_state_dict = obj_detect_checkpoint['model']\n        # obj_detect_state_dict = {\n        #     k: obj_detect_state_dict[k] if k in obj_detect_state_dict\n        #     else v\n        #     for k, v in obj_detector.state_dict().items()}\n\n        obj_detect_state_dict = {\n            k.replace('detr.', ''): v\n            for k, v in obj_detect_state_dict.items()\n            if 'track_encoding' not in k}\n\n        obj_detector.load_state_dict(obj_detect_state_dict)\n        if 'epoch' in obj_detect_checkpoint:\n            _log.info(f\"INIT object detector [EPOCH: {obj_detect_checkpoint['epoch']}]\")\n\n        obj_detector.cuda()\n    else:\n        obj_detector = obj_detector_model['model']\n        obj_detector_post = obj_detector_model['post']\n        img_transform = obj_detector_model['img_transform']\n\n    if hasattr(obj_detector, 'tracking'):\n        obj_detector.tracking()\n\n    track_logger = None\n    if verbose:\n        track_logger = _log.info\n    tracker = Tracker(\n        obj_detector, 
obj_detector_post, tracker_cfg,\n        generate_attention_maps, track_logger, verbose)\n\n    time_total = 0\n    num_frames = 0\n    mot_accums = []\n    dataset = TrackDatasetFactory(\n        dataset_name, root_dir=data_root_dir, img_transform=img_transform)\n\n    for seq in dataset:\n        tracker.reset()\n\n        _log.info(\"------------------\")\n        _log.info(f\"TRACK SEQ: {seq}\")\n\n        start_frame = int(frame_range['start'] * len(seq))\n        end_frame = int(frame_range['end'] * len(seq))\n\n        seq_loader = DataLoader(\n            torch.utils.data.Subset(seq, range(start_frame, end_frame)))\n\n        num_frames += len(seq_loader)\n\n        results = seq.load_results(load_results_dir)\n\n        if not results:\n            start = time.time()\n\n            for frame_id, frame_data in enumerate(tqdm.tqdm(seq_loader, file=sys.stdout)):\n                with torch.no_grad():\n                    tracker.step(frame_data)\n\n            results = tracker.get_results()\n\n            time_total += time.time() - start\n\n            _log.info(f\"NUM TRACKS: {len(results)} ReIDs: {tracker.num_reids}\")\n            _log.info(f\"RUNTIME: {time.time() - start :.2f} s\")\n\n            if interpolate:\n                results = interpolate_tracks(results)\n\n            if output_dir is not None:\n                _log.info(\"WRITE RESULTS\")\n                seq.write_results(results, output_dir)\n        else:\n            _log.info(\"LOAD RESULTS\")\n\n        if seq.no_gt:\n            _log.info(\"NO GT AVAILABLE\")\n        else:\n            mot_accum = get_mot_accum(results, seq_loader)\n            mot_accums.append(mot_accum)\n\n            if verbose:\n                mot_events = mot_accum.mot_events\n                reid_events = mot_events[mot_events['Type'] == 'SWITCH']\n                match_events = mot_events[mot_events['Type'] == 'MATCH']\n\n                switch_gaps = []\n                for index, event in reid_events.iterrows():\n                    frame_id, _ = index\n                    match_events_oid = match_events[match_events['OId'] == event['OId']]\n                    match_events_oid_earlier = match_events_oid[\n                        match_events_oid.index.get_level_values('FrameId') < frame_id]\n\n                    if not match_events_oid_earlier.empty:\n                        match_events_oid_earlier_frame_ids = \\\n                            match_events_oid_earlier.index.get_level_values('FrameId')\n                        last_occurrence = match_events_oid_earlier_frame_ids.max()\n                        switch_gap = frame_id - last_occurrence\n                        switch_gaps.append(switch_gap)\n\n                switch_gaps_hist = None\n                if switch_gaps:\n                    switch_gaps_hist, _ = np.histogram(\n                        switch_gaps, bins=list(range(0, max(switch_gaps) + 10, 10)))\n                    switch_gaps_hist = switch_gaps_hist.tolist()\n\n                _log.info(f'SWITCH_GAPS_HIST (bin_width=10): {switch_gaps_hist}')\n\n        if output_dir is not None and write_images:\n            _log.info(\"PLOT SEQ\")\n            plot_sequence(\n                results, seq_loader, osp.join(output_dir, dataset_name, str(seq)),\n                write_images, generate_attention_maps)\n\n    if time_total:\n        _log.info(f\"RUNTIME ALL SEQS (w/o EVAL or IMG WRITE): \"\n                  f\"{time_total:.2f} s for {num_frames} frames \"\n                  f\"({num_frames / 
time_total:.2f} Hz)\")\n\n    if obj_detector_model is None:\n        _log.info(\"EVAL:\")\n\n        summary, str_summary = evaluate_mot_accums(\n            mot_accums,\n            [str(s) for s in dataset if not s.no_gt])\n\n        _log.info(f'\\n{str_summary}')\n\n        return summary\n\n    return mot_accums\n"
  },
  {
    "path": "src/track_param_search.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom itertools import product\n\nimport numpy as np\n\nfrom track import ex\n\n\nif __name__ == \"__main__\":\n\n\n    # configs = [\n    #     {'dataset_name': [\"MOT17-02-FRCNN\", \"MOT17-10-FRCNN\", \"MOT17-13-FRCNN\"],\n    #      'obj_detect_checkpoint_file': 'models/mot17det_train_cross_val_1_mots_vis_track_bbox_proposals_track_encoding_bbox_proposals_prev_frame_5/checkpoint_best_MOTA.pth'},\n    #     {'dataset_name': [\"MOT17-04-FRCNN\", \"MOT17-11-FRCNN\"],\n    #      'obj_detect_checkpoint_file': 'models/mot17det_train_cross_val_2_mots_vis_track_bbox_proposals_track_encoding_bbox_proposals_prev_frame_5/checkpoint_best_MOTA.pth'},\n    #     {'dataset_name': [\"MOT17-05-FRCNN\", \"MOT17-09-FRCNN\"],\n    #      'obj_detect_checkpoint_file': 'models/mot17det_train_cross_val_3_mots_vis_track_bbox_proposals_track_encoding_bbox_proposals_prev_frame_5/checkpoint_best_MOTA.pth'},\n    # ]\n\n    # configs = [\n    #     {'dataset_name': [\"MOT17-02-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_1_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    #     {'dataset_name': [\"MOT17-04-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_2_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    #     {'dataset_name': [\"MOT17-05-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_3_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    #     {'dataset_name': [\"MOT17-09-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_4_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    #     {'dataset_name': [\"MOT17-10-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_5_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    #     {'dataset_name': [\"MOT17-11-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_6_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    #     {'dataset_name': [\"MOT17-13-FRCNN\"],\n    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_7_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},\n    # ]\n\n    # dataset_name = [\"MOT17-02-FRCNN\", \"MOT17-04-FRCNN\", \"MOT17-05-FRCNN\", \"MOT17-09-FRCNN\", \"MOT17-10-FRCNN\", \"MOT17-11-FRCNN\", \"MOT17-13-FRCNN\"]\n\n    # general_tracker_cfg = {'public_detections': False, 'reid_sim_only': True, 'reid_greedy_matching': False}\n    general_tracker_cfg = {'public_detections': 'min_iou_0_5'}\n    # general_tracker_cfg = {'public_detections': False}\n\n    # dataset_name = 'MOT17-TRAIN-FRCNN'\n    dataset_name = 'MOT17-TRAIN-ALL'\n    # dataset_name = 'MOT20-TRAIN'\n\n    configs = [\n        {'dataset_name': dataset_name,\n\n         'frame_range': {'start': 0.5},\n         'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot_mot17_train_cross_val_frame_0_0_to_0_5_coco_pretrained_num_queries_500_batch_size=2_num_gpus_7_num_classes_20_AP_det_overflow_boxes_True_prev_frame_rnd_augs_0_2_uniform_false_negative_prob_multi_frame_hidden_dim_288_sep_encoders_batch_queries/checkpoint_epoch_50.pth'},\n    ]\n\n    tracker_param_grids = {\n   
     # 'detection_obj_score_thresh': [0.3, 0.4, 0.5, 0.6],\n        # 'track_obj_score_thresh': [0.3, 0.4, 0.5, 0.6],\n        'detection_obj_score_thresh': [0.4],\n        'track_obj_score_thresh': [0.4],\n        # 'detection_nms_thresh': [0.95, 0.9, 0.0],\n        # 'track_nms_thresh': [0.95, 0.9, 0.0],\n        # 'detection_nms_thresh': [0.9],\n        # 'track_nms_thresh': [0.9],\n        # 'reid_sim_threshold': [0.0, 0.5, 1.0, 10, 50, 100, 200],\n        'reid_score_thresh': [0.4],\n        # 'inactive_patience': [-1, 5, 10, 20, 30, 40, 50]\n        # 'reid_score_thresh': [0.8],\n        # 'inactive_patience': [-1],\n        # 'inactive_patience': [-1, 5, 10]\n        }\n\n    # compute all config combinations\n    tracker_param_cfgs = [dict(zip(tracker_param_grids, v))\n                          for v in product(*tracker_param_grids.values())]\n\n    # add empty metric arrays\n    metrics = ['mota', 'idf1']\n    tracker_param_cfgs = [\n        {'config': {**general_tracker_cfg, **tracker_cfg}}\n        for tracker_cfg in tracker_param_cfgs]\n\n    for m in metrics:\n        for tracker_cfg in tracker_param_cfgs:\n            tracker_cfg[m] = []\n\n    total_num_experiments = len(tracker_param_cfgs) * len(configs)\n    print(f'NUM experiments: {total_num_experiments}')\n\n    # run all tracker config combinations for all experiment configurations\n    exp_counter = 1\n    for config in configs:\n        for tracker_cfg in tracker_param_cfgs:\n            print(f\"EXPERIMENT: {exp_counter}/{total_num_experiments}\")\n\n            config['tracker_cfg'] = tracker_cfg['config']\n            run = ex.run(config_updates=config)\n            eval_summary = run.result\n\n            for m in metrics:\n                tracker_cfg[m].append(eval_summary[m]['OVERALL'])\n\n            exp_counter += 1\n\n    # compute mean for all metrices\n    for m in metrics:\n        for tracker_cfg in tracker_param_cfgs:\n            tracker_cfg[m] = np.array(tracker_cfg[m]).mean()\n\n    for cfg in tracker_param_cfgs:\n        print([cfg[m] for m in metrics], cfg['config'])\n\n    # compute and plot best metric config\n    for m in metrics:\n        best_metric_cfg_idx = np.array(\n            [cfg[m] for cfg in tracker_param_cfgs]).argmax()\n\n        print(f\"BEST {m.upper()} CFG: {tracker_param_cfgs[best_metric_cfg_idx]['config']}\")\n\n    # TODO\n    best_mota_plus_idf1_cfg_idx = np.array(\n        [cfg['mota'] + cfg['idf1'] for cfg in tracker_param_cfgs]).argmax()\n    print(f\"BEST MOTA PLUS IDF1 CFG: {tracker_param_cfgs[best_mota_plus_idf1_cfg_idx]['config']}\")\n"
  },
  {
    "path": "src/trackformer/__init__.py",
    "content": ""
  },
  {
    "path": "src/trackformer/datasets/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nSubmodule interface.\n\"\"\"\nfrom argparse import Namespace\nfrom pycocotools.coco import COCO\nfrom torch.utils.data import Dataset, Subset\nfrom torchvision.datasets import CocoDetection\n\nfrom .coco import build as build_coco\nfrom .crowdhuman import build_crowdhuman\nfrom .mot import build_mot, build_mot_crowdhuman, build_mot_coco_person\n\n\ndef get_coco_api_from_dataset(dataset: Subset) -> COCO:\n    \"\"\"Return COCO class from PyTorch dataset for evaluation with COCO eval.\"\"\"\n    for _ in range(10):\n        # if isinstance(dataset, CocoDetection):\n        #     break\n        if isinstance(dataset, Subset):\n            dataset = dataset.dataset\n\n    if not isinstance(dataset, CocoDetection):\n        raise NotImplementedError\n\n    return dataset.coco\n\n\ndef build_dataset(split: str, args: Namespace) -> Dataset:\n    \"\"\"Helper function to build dataset for different splits ('train' or 'val').\"\"\"\n    if args.dataset == 'coco':\n        dataset = build_coco(split, args)\n    elif args.dataset == 'coco_person':\n        dataset = build_coco(split, args, 'person_keypoints')\n    elif args.dataset == 'mot':\n        dataset = build_mot(split, args)\n    elif args.dataset == 'crowdhuman':\n        dataset = build_crowdhuman(split, args)\n    elif args.dataset == 'mot_crowdhuman':\n        dataset = build_mot_crowdhuman(split, args)\n    elif args.dataset == 'mot_coco_person':\n        dataset = build_mot_coco_person(split, args)\n    elif args.dataset == 'coco_panoptic':\n        # to avoid making panopticapi required for coco\n        from .coco_panoptic import build as build_coco_panoptic\n        dataset = build_coco_panoptic(split, args)\n    else:\n        raise ValueError(f'dataset {args.dataset} not supported')\n\n    return dataset\n"
  },
  {
    "path": "src/trackformer/datasets/coco.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCOCO dataset which returns image_id for evaluation.\n\nMostly copy-paste from https://github.com/pytorch/vision/blob/13b35ff/references/detection/coco_utils.py\n\"\"\"\nimport copy\nimport random\nfrom pathlib import Path\nfrom collections import Counter\n\nimport torch\nimport torch.nn.functional as F\nimport torch.utils.data\nimport torchvision\nfrom pycocotools import mask as coco_mask\n\nfrom . import transforms as T\n\n\nclass CocoDetection(torchvision.datasets.CocoDetection):\n\n    fields = [\"labels\", \"area\", \"iscrowd\", \"boxes\", \"track_ids\", \"masks\"]\n\n    def __init__(self,  img_folder, ann_file, transforms, norm_transforms,\n                 return_masks=False, overflow_boxes=False, remove_no_obj_imgs=True,\n                 prev_frame=False, prev_frame_rnd_augs=0.0, prev_prev_frame=False,\n                 min_num_objects=0):\n        super(CocoDetection, self).__init__(img_folder, ann_file)\n        self._transforms = transforms\n        self._norm_transforms = norm_transforms\n        self.prepare = ConvertCocoPolysToMask(return_masks, overflow_boxes)\n\n        annos_image_ids = [\n            ann['image_id'] for ann in self.coco.loadAnns(self.coco.getAnnIds())]\n        if remove_no_obj_imgs:\n            self.ids = sorted(list(set(annos_image_ids)))\n\n        if min_num_objects:\n            counter = Counter(annos_image_ids)\n\n            self.ids = [i for i in self.ids if counter[i] >= min_num_objects]\n\n        self._prev_frame = prev_frame\n        self._prev_frame_rnd_augs = prev_frame_rnd_augs\n        self._prev_prev_frame = prev_prev_frame\n\n    def _getitem_from_id(self, image_id, random_state=None, random_jitter=True):\n        # if random state is given we do the data augmentation with the state\n        # and then apply the random jitter. 
this ensures that (simulated) adjacent\n        # frames have independent jitter.\n        if random_state is not None:\n            curr_random_state = {\n                'random': random.getstate(),\n                'torch': torch.random.get_rng_state()}\n            random.setstate(random_state['random'])\n            torch.random.set_rng_state(random_state['torch'])\n\n        img, target = super(CocoDetection, self).__getitem__(image_id)\n        image_id = self.ids[image_id]\n        target = {'image_id': image_id,\n                  'annotations': target}\n        img, target = self.prepare(img, target)\n\n        if 'track_ids' not in target:\n            target['track_ids'] = torch.arange(len(target['labels']))\n\n        if self._transforms is not None:\n            img, target = self._transforms(img, target)\n\n        # ignore\n        ignore = target.pop(\"ignore\").bool()\n        for field in self.fields:\n            if field in target:\n                target[f\"{field}_ignore\"] = target[field][ignore]\n                target[field] = target[field][~ignore]\n\n        if random_state is not None:\n            random.setstate(curr_random_state['random'])\n            torch.random.set_rng_state(curr_random_state['torch'])\n\n        if random_jitter:\n            img, target = self._add_random_jitter(img, target)\n        img, target = self._norm_transforms(img, target)\n\n        return img, target\n\n    # TODO: add to the transforms and merge norm_transforms into transforms\n    def _add_random_jitter(self, img, target):\n        if self._prev_frame_rnd_augs:\n            orig_w, orig_h = img.size\n\n            crop_width = random.randint(\n                int((1.0 - self._prev_frame_rnd_augs) * orig_w),\n                orig_w)\n            crop_height = int(orig_h * crop_width / orig_w)\n\n            transform = T.RandomCrop((crop_height, crop_width))\n            img, target = transform(img, target)\n\n            img, target = T.resize(img, target, (orig_w, orig_h))\n\n        return img, target\n\n    # def _add_random_jitter(self, img, target):\n    #     if self._prev_frame_rnd_augs: # and random.uniform(0, 1) < 0.5:\n    #         orig_w, orig_h = img.size\n\n    #         width, height = img.size\n    #         size = random.randint(\n    #             int((1.0 - self._prev_frame_rnd_augs) * min(width, height)),\n    #             int((1.0 + self._prev_frame_rnd_augs) * min(width, height)))\n    #         img, target = T.RandomResize([size])(img, target)\n\n    #         width, height = img.size\n    #         min_size = (\n    #             int((1.0 - self._prev_frame_rnd_augs) * width),\n    #             int((1.0 - self._prev_frame_rnd_augs) * height))\n    #         transform = T.RandomSizeCrop(min_size=min_size)\n    #         img, target = transform(img, target)\n\n    #         width, height = img.size\n    #         if orig_w < width:\n    #             img, target = T.RandomCrop((height, orig_w))(img, target)\n    #         else:\n    #             total_pad = orig_w - width\n    #             pad_left = random.randint(0, total_pad)\n    #             pad_right = total_pad - pad_left\n\n    #             padding = (pad_left, 0, pad_right, 0)\n    #             img, target = T.pad(img, target, padding)\n\n    #         width, height = img.size\n    #         if orig_h < height:\n    #             img, target = T.RandomCrop((orig_h, width))(img, target)\n    #         else:\n    #             total_pad = orig_h - height\n    #             pad_top = 
random.randint(0, total_pad)\n    #             pad_bottom = total_pad - pad_top\n\n    #             padding = (0, pad_top, 0, pad_bottom)\n    #             img, target = T.pad(img, target, padding)\n\n    #     return img, target\n\n    def __getitem__(self, idx):\n        random_state = {\n            'random': random.getstate(),\n            'torch': torch.random.get_rng_state()}\n        img, target = self._getitem_from_id(idx, random_state, random_jitter=False)\n\n        if self._prev_frame:\n            # PREV\n            prev_img, prev_target = self._getitem_from_id(idx, random_state)\n            target[f'prev_image'] = prev_img\n            target[f'prev_target'] = prev_target\n\n            if self._prev_prev_frame:\n                # PREV PREV\n                prev_prev_img, prev_prev_target = self._getitem_from_id(idx, random_state)\n                target[f'prev_prev_image'] = prev_prev_img\n                target[f'prev_prev_target'] = prev_prev_target\n\n        return img, target\n\n    def write_result_files(self, *args):\n        pass\n\n\ndef convert_coco_poly_to_mask(segmentations, height, width):\n    masks = []\n    for polygons in segmentations:\n        if isinstance(polygons, dict):\n            rles = {'size': polygons['size'],\n                    'counts': polygons['counts'].encode(encoding='UTF-8')}\n        else:\n            rles = coco_mask.frPyObjects(polygons, height, width)\n        mask = coco_mask.decode(rles)\n        if len(mask.shape) < 3:\n            mask = mask[..., None]\n        mask = torch.as_tensor(mask, dtype=torch.uint8)\n        mask = mask.any(dim=2)\n        masks.append(mask)\n    if masks:\n        masks = torch.stack(masks, dim=0)\n    else:\n        masks = torch.zeros((0, height, width), dtype=torch.uint8)\n    return masks\n\n\nclass ConvertCocoPolysToMask(object):\n    def __init__(self, return_masks=False, overflow_boxes=False):\n        self.return_masks = return_masks\n        self.overflow_boxes = overflow_boxes\n\n    def __call__(self, image, target):\n        w, h = image.size\n\n        image_id = target[\"image_id\"]\n        image_id = torch.tensor([image_id])\n\n        anno = target[\"annotations\"]\n\n        anno = [obj for obj in anno if 'iscrowd' not in obj or obj['iscrowd'] == 0]\n\n        boxes = [obj[\"bbox\"] for obj in anno]\n        # guard against no boxes via resizing\n        boxes = torch.as_tensor(boxes, dtype=torch.float32).reshape(-1, 4)\n        # x,y,w,h --> x,y,x,y\n        boxes[:, 2:] += boxes[:, :2]\n        if not self.overflow_boxes:\n            boxes[:, 0::2].clamp_(min=0, max=w)\n            boxes[:, 1::2].clamp_(min=0, max=h)\n\n        classes = [obj[\"category_id\"] for obj in anno]\n        classes = torch.tensor(classes, dtype=torch.int64)\n\n        if self.return_masks:\n            segmentations = [obj[\"segmentation\"] for obj in anno]\n            masks = convert_coco_poly_to_mask(segmentations, h, w)\n\n        keypoints = None\n        if anno and \"keypoints\" in anno[0]:\n            keypoints = [obj[\"keypoints\"] for obj in anno]\n            keypoints = torch.as_tensor(keypoints, dtype=torch.float32)\n            num_keypoints = keypoints.shape[0]\n            if num_keypoints:\n                keypoints = keypoints.view(num_keypoints, -1, 3)\n\n        keep = (boxes[:, 3] > boxes[:, 1]) & (boxes[:, 2] > boxes[:, 0])\n\n        boxes = boxes[keep]\n        classes = classes[keep]\n        if self.return_masks:\n            masks = masks[keep]\n        if keypoints is 
not None:\n            keypoints = keypoints[keep]\n\n        target = {}\n        target[\"boxes\"] = boxes\n        target[\"labels\"] = classes - 1\n\n        if self.return_masks:\n            target[\"masks\"] = masks\n        target[\"image_id\"] = image_id\n        if keypoints is not None:\n            target[\"keypoints\"] = keypoints\n\n        if anno and \"track_id\" in anno[0]:\n            track_ids = torch.tensor([obj[\"track_id\"] for obj in anno])\n            target[\"track_ids\"] = track_ids[keep]\n        elif not len(boxes):\n            target[\"track_ids\"] = torch.empty(0)\n\n        # for conversion to coco api\n        area = torch.tensor([obj[\"area\"] for obj in anno])\n        iscrowd = torch.tensor([obj[\"iscrowd\"] if \"iscrowd\" in obj else 0 for obj in anno])\n        ignore = torch.tensor([obj[\"ignore\"] if \"ignore\" in obj else 0 for obj in anno])\n\n        target[\"area\"] = area[keep]\n        target[\"iscrowd\"] = iscrowd[keep]\n        target[\"ignore\"] = ignore[keep]\n\n        target[\"orig_size\"] = torch.as_tensor([int(h), int(w)])\n        target[\"size\"] = torch.as_tensor([int(h), int(w)])\n\n        return image, target\n\n\ndef make_coco_transforms(image_set, img_transform=None, overflow_boxes=False):\n    normalize = T.Compose([\n        T.ToTensor(),\n        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n    ])\n    # default\n    max_size = 1333\n    val_width = 800\n    scales = [480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800]\n    random_resizes = [400, 500, 600]\n    random_size_crop = (384, 600)\n\n    if img_transform is not None:\n        scale = img_transform.max_size / max_size\n        max_size = img_transform.max_size\n        val_width = img_transform.val_width\n\n        # scale all with respect to custom max_size\n        scales = [int(scale * s) for s in scales]\n        random_resizes = [int(scale * s) for s in random_resizes]\n        random_size_crop = [int(scale * s) for s in random_size_crop]\n\n    if image_set == 'train':\n        transforms = [\n            T.RandomHorizontalFlip(),\n            T.RandomSelect(\n                T.RandomResize(scales, max_size=max_size),\n                T.Compose([\n                    T.RandomResize(random_resizes),\n                    T.RandomSizeCrop(*random_size_crop, overflow_boxes=overflow_boxes),\n                    T.RandomResize(scales, max_size=max_size),\n                ])\n            ),\n        ]\n    elif image_set == 'val':\n        transforms = [\n            T.RandomResize([val_width], max_size=max_size),\n        ]\n    else:\n        ValueError(f'unknown {image_set}')\n\n    # transforms.append(normalize)\n    return T.Compose(transforms), normalize\n\n\ndef build(image_set, args, mode='instances'):\n    root = Path(args.coco_path)\n    assert root.exists(), f'provided COCO path {root} does not exist'\n\n    # image_set is 'train' or 'val'\n    split = getattr(args, f\"{image_set}_split\")\n\n    splits = {\n        \"train\": (root / \"train2017\", root / \"annotations\" / f'{mode}_train2017.json'),\n        \"val\": (root / \"val2017\", root / \"annotations\" / f'{mode}_val2017.json'),\n    }\n\n    if image_set == 'train':\n        prev_frame_rnd_augs = args.coco_and_crowdhuman_prev_frame_rnd_augs\n    elif image_set == 'val':\n        prev_frame_rnd_augs = 0.0\n\n    transforms, norm_transforms = make_coco_transforms(image_set, args.img_transform, args.overflow_boxes)\n    img_folder, ann_file = splits[split]\n    dataset = 
CocoDetection(\n        img_folder, ann_file, transforms, norm_transforms,\n        return_masks=args.masks,\n        prev_frame=args.tracking,\n        prev_frame_rnd_augs=prev_frame_rnd_augs,\n        prev_prev_frame=args.track_prev_prev_frame,\n        min_num_objects=args.coco_min_num_objects)\n\n    return dataset\n"
  },
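The `CocoDetection` class above augments each sample with an optional `prev_image`/`prev_target` pair for tracking training. A minimal usage sketch, assuming `src` is on `PYTHONPATH` and a standard COCO 2017 layout under `data/coco` (both assumptions, not part of this file):

# Hedged sketch: paths and flag values below are illustrative assumptions.
from pathlib import Path

from trackformer.datasets.coco import CocoDetection, make_coco_transforms

root = Path('data/coco')  # assumed layout: train2017/ and annotations/
transforms, norm_transforms = make_coco_transforms('train')

dataset = CocoDetection(
    root / 'train2017',
    root / 'annotations' / 'instances_train2017.json',
    transforms, norm_transforms,
    return_masks=False,
    prev_frame=True,           # also return a jittered "previous" frame
    prev_frame_rnd_augs=0.05)  # random-crop jitter of up to 5% for that frame

img, target = dataset[0]
print(sorted(target.keys()))   # includes 'boxes', 'labels', 'prev_image', 'prev_target'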
  {
    "path": "src/trackformer/datasets/coco_eval.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCOCO evaluator that works in distributed mode.\n\nMostly copy-paste from https://github.com/pytorch/vision/blob/edfd5a7/references/detection/coco_eval.py\nThe difference is that there is less copy-pasting from pycocotools\nin the end of the file, as python3 can suppress prints with contextlib\n\"\"\"\nimport os\nimport contextlib\nimport copy\nimport numpy as np\nimport torch\n\nfrom pycocotools.cocoeval import COCOeval\nfrom pycocotools.coco import COCO\nimport pycocotools.mask as mask_util\n\nfrom ..util.misc import all_gather\n\n\nclass CocoEvaluator(object):\n    def __init__(self, coco_gt, iou_types):\n        assert isinstance(iou_types, (list, tuple))\n        coco_gt = copy.deepcopy(coco_gt)\n        self.coco_gt = coco_gt\n\n        self.iou_types = iou_types\n        self.coco_eval = {}\n        for iou_type in iou_types:\n            self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)\n\n        self.img_ids = []\n        self.eval_imgs = {k: [] for k in iou_types}\n\n    def update(self, predictions):\n        img_ids = list(np.unique(list(predictions.keys())))\n        self.img_ids.extend(img_ids)\n\n        for prediction in predictions.values():\n            prediction[\"labels\"] += 1\n\n        for iou_type in self.iou_types:\n            results = self.prepare(predictions, iou_type)\n\n            # suppress pycocotools prints\n            with open(os.devnull, 'w') as devnull:\n                with contextlib.redirect_stdout(devnull):\n                    coco_dt = COCO.loadRes(self.coco_gt, results) if results else COCO()\n            coco_eval = self.coco_eval[iou_type]\n\n            coco_eval.cocoDt = coco_dt\n            coco_eval.params.imgIds = list(img_ids)\n            img_ids, eval_imgs = evaluate(coco_eval)\n\n            self.eval_imgs[iou_type].append(eval_imgs)\n\n    def synchronize_between_processes(self):\n        for iou_type in self.iou_types:\n            self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2)\n            create_common_coco_eval(\n                self.coco_eval[iou_type],\n                self.img_ids,\n                self.eval_imgs[iou_type])\n\n    def accumulate(self):\n        for coco_eval in self.coco_eval.values():\n            coco_eval.accumulate()\n\n    def summarize(self):\n        for iou_type, coco_eval in self.coco_eval.items():\n            print(f\"IoU metric: {iou_type}\")\n            coco_eval.summarize()\n\n    def prepare(self, predictions, iou_type):\n        if iou_type == \"bbox\":\n            return self.prepare_for_coco_detection(predictions)\n        elif iou_type == \"segm\":\n            return self.prepare_for_coco_segmentation(predictions)\n        elif iou_type == \"keypoints\":\n            return self.prepare_for_coco_keypoint(predictions)\n        else:\n            raise ValueError(\"Unknown iou type {}\".format(iou_type))\n\n    def prepare_for_coco_detection(self, predictions):\n        coco_results = []\n        for original_id, prediction in predictions.items():\n            if len(prediction) == 0:\n                continue\n\n            boxes = prediction[\"boxes\"]\n            boxes = convert_to_xywh(boxes).tolist()\n            scores = prediction[\"scores\"].tolist()\n            labels = prediction[\"labels\"].tolist()\n\n            coco_results.extend(\n                [\n                    {\n                        \"image_id\": original_id,\n                 
       \"category_id\": labels[k],\n                        \"bbox\": box,\n                        \"score\": scores[k],\n                    }\n                    for k, box in enumerate(boxes)\n                ]\n            )\n        return coco_results\n\n    def prepare_for_coco_segmentation(self, predictions):\n        coco_results = []\n        for original_id, prediction in predictions.items():\n            if len(prediction) == 0:\n                continue\n\n            scores = prediction[\"scores\"]\n            labels = prediction[\"labels\"]\n            masks = prediction[\"masks\"]\n\n            masks = masks > 0.5\n\n            scores = prediction[\"scores\"].tolist()\n            labels = prediction[\"labels\"].tolist()\n\n            rles = [\n                mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order=\"F\"))[0]\n                for mask in masks\n            ]\n            for rle in rles:\n                rle[\"counts\"] = rle[\"counts\"].decode(\"utf-8\")\n\n            coco_results.extend(\n                [\n                    {\n                        \"image_id\": original_id,\n                        \"category_id\": labels[k],\n                        \"segmentation\": rle,\n                        \"score\": scores[k],\n                    }\n                    for k, rle in enumerate(rles)\n                ]\n            )\n        return coco_results\n\n    def prepare_for_coco_keypoint(self, predictions):\n        coco_results = []\n        for original_id, prediction in predictions.items():\n            if len(prediction) == 0:\n                continue\n\n            boxes = prediction[\"boxes\"]\n            boxes = convert_to_xywh(boxes).tolist()\n            scores = prediction[\"scores\"].tolist()\n            labels = prediction[\"labels\"].tolist()\n            keypoints = prediction[\"keypoints\"]\n            keypoints = keypoints.flatten(start_dim=1).tolist()\n\n            coco_results.extend(\n                [\n                    {\n                        \"image_id\": original_id,\n                        \"category_id\": labels[k],\n                        'keypoints': keypoint,\n                        \"score\": scores[k],\n                    }\n                    for k, keypoint in enumerate(keypoints)\n                ]\n            )\n        return coco_results\n\n\ndef convert_to_xywh(boxes):\n    xmin, ymin, xmax, ymax = boxes.unbind(1)\n    return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)\n\n\ndef merge(img_ids, eval_imgs):\n    all_img_ids = all_gather(img_ids)\n    all_eval_imgs = all_gather(eval_imgs)\n\n    merged_img_ids = []\n    for p in all_img_ids:\n        merged_img_ids.extend(p)\n\n    merged_eval_imgs = []\n    for p in all_eval_imgs:\n        merged_eval_imgs.append(p)\n\n    merged_img_ids = np.array(merged_img_ids)\n    merged_eval_imgs = np.concatenate(merged_eval_imgs, 2)\n\n    # keep only unique (and in sorted order) images\n    merged_img_ids, idx = np.unique(merged_img_ids, return_index=True)\n    merged_eval_imgs = merged_eval_imgs[..., idx]\n\n    return merged_img_ids, merged_eval_imgs\n\n\ndef create_common_coco_eval(coco_eval, img_ids, eval_imgs):\n    img_ids, eval_imgs = merge(img_ids, eval_imgs)\n    img_ids = list(img_ids)\n    eval_imgs = list(eval_imgs.flatten())\n\n    coco_eval.evalImgs = eval_imgs\n    coco_eval.params.imgIds = img_ids\n    coco_eval._paramsEval = 
copy.deepcopy(coco_eval.params)\n\n\n#################################################################\n# From pycocotools, just removed the prints and fixed\n# a Python3 bug about unicode not defined\n#################################################################\n\n\ndef evaluate(self):\n    '''\n    Run per image evaluation on given images and store results (a list of dict) in self.evalImgs\n    :return: None\n    '''\n    # tic = time.time()\n    # print('Running per image evaluation...')\n    p = self.params\n    # add backward compatibility if useSegm is specified in params\n    if p.useSegm is not None:\n        p.iouType = 'segm' if p.useSegm == 1 else 'bbox'\n        print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType))\n    # print('Evaluate annotation type *{}*'.format(p.iouType))\n    p.imgIds = list(np.unique(p.imgIds))\n    if p.useCats:\n        p.catIds = list(np.unique(p.catIds))\n    p.maxDets = sorted(p.maxDets)\n    self.params = p\n\n    self._prepare()\n    # loop through images, area range, max detection number\n    catIds = p.catIds if p.useCats else [-1]\n\n    if p.iouType == 'segm' or p.iouType == 'bbox':\n        computeIoU = self.computeIoU\n    elif p.iouType == 'keypoints':\n        computeIoU = self.computeOks\n    self.ious = {\n        (imgId, catId): computeIoU(imgId, catId)\n        for imgId in p.imgIds\n        for catId in catIds}\n\n    evaluateImg = self.evaluateImg\n    maxDet = p.maxDets[-1]\n    evalImgs = [\n        evaluateImg(imgId, catId, areaRng, maxDet)\n        for catId in catIds\n        for areaRng in p.areaRng\n        for imgId in p.imgIds\n    ]\n    # this is NOT in the pycocotools code, but could be done outside\n    evalImgs = np.asarray(evalImgs).reshape(len(catIds), len(p.areaRng), len(p.imgIds))\n    self._paramsEval = copy.deepcopy(self.params)\n    # toc = time.time()\n    # print('DONE (t={:0.2f}s).'.format(toc-tic))\n    return p.imgIds, evalImgs\n\n#################################################################\n# end of straight copy from pycocotools, just removing the prints\n#################################################################\n"
  },
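`CocoEvaluator` collects per-image predictions, merges them across processes, and prints the standard COCO metrics. A minimal sketch of that life cycle; `data_loader` and `model_outputs_for` are hypothetical placeholders for the surrounding training/eval pipeline, not APIs defined here:

# Hedged sketch of the evaluator life cycle.
from trackformer.datasets.coco_eval import CocoEvaluator

evaluator = CocoEvaluator(dataset.coco, iou_types=('bbox',))

for images, targets in data_loader:              # placeholder loader
    predictions = model_outputs_for(images)      # {image_id: {'boxes', 'scores', 'labels'}}
    evaluator.update(predictions)

evaluator.synchronize_between_processes()        # gather per-image results from all ranks
evaluator.accumulate()
evaluator.summarize()                            # prints AP/AR per IoU type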
  {
    "path": "src/trackformer/datasets/coco_panoptic.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport json\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nfrom PIL import Image\n\nfrom panopticapi.utils import rgb2id\nfrom util.box_ops import masks_to_boxes\n\nfrom .coco import make_coco_transforms\n\n\nclass CocoPanoptic:\n    def __init__(self, img_folder, ann_folder, ann_file, transforms=None, norm_transforms=None, return_masks=True):\n        with open(ann_file, 'r') as f:\n            self.coco = json.load(f)\n\n        # sort 'images' field so that they are aligned with 'annotations'\n        # i.e., in alphabetical order\n        self.coco['images'] = sorted(self.coco['images'], key=lambda x: x['id'])\n        # sanity check\n        if \"annotations\" in self.coco:\n            for img, ann in zip(self.coco['images'], self.coco['annotations']):\n                assert img['file_name'][:-4] == ann['file_name'][:-4]\n\n        self.img_folder = img_folder\n        self.ann_folder = ann_folder\n        self.ann_file = ann_file\n        self.transforms = transforms\n        self.norm_transforms = norm_transforms\n        self.return_masks = return_masks\n\n    def __getitem__(self, idx):\n        ann_info = self.coco['annotations'][idx] if \"annotations\" in self.coco else self.coco['images'][idx]\n        img_path = Path(self.img_folder) / ann_info['file_name'].replace('.png', '.jpg')\n        ann_path = Path(self.ann_folder) / ann_info['file_name']\n\n        img = Image.open(img_path).convert('RGB')\n        w, h = img.size\n        if \"segments_info\" in ann_info:\n            masks = np.asarray(Image.open(ann_path), dtype=np.uint32)\n            masks = rgb2id(masks)\n\n            ids = np.array([ann['id'] for ann in ann_info['segments_info']])\n            masks = masks == ids[:, None, None]\n\n            masks = torch.as_tensor(masks, dtype=torch.uint8)\n            labels = torch.tensor([ann['category_id'] for ann in ann_info['segments_info']], dtype=torch.int64)\n\n        target = {}\n        target['image_id'] = torch.tensor([ann_info['image_id'] if \"image_id\" in ann_info else ann_info[\"id\"]])\n        if self.return_masks:\n            target['masks'] = masks\n        target['labels'] = labels\n\n        target[\"boxes\"] = masks_to_boxes(masks)\n\n        target['size'] = torch.as_tensor([int(h), int(w)])\n        target['orig_size'] = torch.as_tensor([int(h), int(w)])\n        if \"segments_info\" in ann_info:\n            for name in ['iscrowd', 'area']:\n                target[name] = torch.tensor([ann[name] for ann in ann_info['segments_info']])\n\n        if self.transforms is not None:\n            img, target = self.transforms(img, target)\n        if self.norm_transforms is not None:\n            img, target = self.norm_transforms(img, target)\n\n        return img, target\n\n    def __len__(self):\n        return len(self.coco['images'])\n\n    def get_height_and_width(self, idx):\n        img_info = self.coco['images'][idx]\n        height = img_info['height']\n        width = img_info['width']\n        return height, width\n\n\ndef build(image_set, args):\n    img_folder_root = Path(args.coco_path)\n    ann_folder_root = Path(args.coco_panoptic_path)\n    assert img_folder_root.exists(), f'provided COCO path {img_folder_root} does not exist'\n    assert ann_folder_root.exists(), f'provided COCO path {ann_folder_root} does not exist'\n    mode = 'panoptic'\n    PATHS = {\n        \"train\": (\"train2017\", Path(\"annotations\") / 
f'{mode}_train2017.json'),\n        \"val\": (\"val2017\", Path(\"annotations\") / f'{mode}_val2017.json'),\n    }\n\n    img_folder, ann_file = PATHS[image_set]\n    img_folder_path = img_folder_root / img_folder\n    ann_folder = ann_folder_root / f'{mode}_{img_folder}'\n    ann_file = ann_folder_root / ann_file\n\n    transforms, norm_transforms = make_coco_transforms(image_set, args.img_transform, args.overflow_boxes)\n    dataset = CocoPanoptic(img_folder_path, ann_folder, ann_file,\n                           transforms=transforms, norm_transforms=norm_transforms, return_masks=args.masks)\n\n    return dataset\n"
  },
  {
    "path": "src/trackformer/datasets/crowdhuman.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCrowdHuman dataset with tracking training augmentations.\n\"\"\"\nfrom pathlib import Path\n\nfrom .coco import CocoDetection, make_coco_transforms\n\n\ndef build_crowdhuman(image_set, args):\n    root = Path(args.crowdhuman_path)\n    assert root.exists(), f'provided COCO path {root} does not exist'\n\n    split = getattr(args, f\"{image_set}_split\")\n\n    img_folder = root / split\n    ann_file = root / f'annotations/{split}.json'\n\n    if image_set == 'train':\n        prev_frame_rnd_augs = args.coco_and_crowdhuman_prev_frame_rnd_augs\n    elif image_set == 'val':\n        prev_frame_rnd_augs = 0.0\n\n    transforms, norm_transforms = make_coco_transforms(\n        image_set, args.img_transform, args.overflow_boxes)\n    dataset = CocoDetection(\n        img_folder, ann_file, transforms, norm_transforms,\n        return_masks=args.masks,\n        prev_frame=args.tracking,\n        prev_frame_rnd_augs=prev_frame_rnd_augs)\n\n    return dataset\n"
  },
  {
    "path": "src/trackformer/datasets/mot.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT dataset with tracking training augmentations.\n\"\"\"\nimport bisect\nimport copy\nimport csv\nimport os\nimport random\nfrom pathlib import Path\n\nimport torch\n\nfrom . import transforms as T\nfrom .coco import CocoDetection, make_coco_transforms\nfrom .coco import build as build_coco\nfrom .crowdhuman import build_crowdhuman\n\n\nclass MOT(CocoDetection):\n\n    def __init__(self, *args, prev_frame_range=1, **kwargs):\n        super(MOT, self).__init__(*args, **kwargs)\n\n        self._prev_frame_range = prev_frame_range\n\n    @property\n    def sequences(self):\n        return self.coco.dataset['sequences']\n\n    @property\n    def frame_range(self):\n        if 'frame_range' in self.coco.dataset:\n            return self.coco.dataset['frame_range']\n        else:\n            return {'start': 0, 'end': 1.0}\n\n    def seq_length(self, idx):\n        return self.coco.imgs[idx]['seq_length']\n\n    def sample_weight(self, idx):\n        return 1.0 / self.seq_length(idx)\n\n    def __getitem__(self, idx):\n        random_state = {\n            'random': random.getstate(),\n            'torch': torch.random.get_rng_state()}\n\n        img, target = self._getitem_from_id(idx, random_state, random_jitter=False)\n\n        if self._prev_frame:\n            frame_id = self.coco.imgs[idx]['frame_id']\n\n            # PREV\n            # first frame has no previous frame\n            prev_frame_id = random.randint(\n                max(0, frame_id - self._prev_frame_range),\n                min(frame_id + self._prev_frame_range, self.seq_length(idx) - 1))\n            prev_image_id = self.coco.imgs[idx]['first_frame_image_id'] + prev_frame_id\n\n            prev_img, prev_target = self._getitem_from_id(prev_image_id, random_state)\n            target[f'prev_image'] = prev_img\n            target[f'prev_target'] = prev_target\n\n            if self._prev_prev_frame:\n                # PREV PREV frame equidistant as prev_frame\n                prev_prev_frame_id = min(max(0, prev_frame_id + prev_frame_id - frame_id), self.seq_length(idx) - 1)\n                prev_prev_image_id = self.coco.imgs[idx]['first_frame_image_id'] + prev_prev_frame_id\n\n                prev_prev_img, prev_prev_target = self._getitem_from_id(prev_prev_image_id, random_state)\n                target[f'prev_prev_image'] = prev_prev_img\n                target[f'prev_prev_target'] = prev_prev_target\n\n        return img, target\n\n    def write_result_files(self, results, output_dir):\n        \"\"\"Write the detections in the format for the MOT17Det sumbission\n\n        Each file contains these lines:\n        <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>\n\n        \"\"\"\n\n        files = {}\n        for image_id, res in results.items():\n            img = self.coco.loadImgs(image_id)[0]\n            file_name_without_ext = os.path.splitext(img['file_name'])[0]\n            seq_name, frame = file_name_without_ext.split('_')\n            frame = int(frame)\n\n            outfile = os.path.join(output_dir, f\"{seq_name}.txt\")\n\n            # check if out in keys and create empty list if not\n            if outfile not in files.keys():\n                files[outfile] = []\n\n            for box, score in zip(res['boxes'], res['scores']):\n                if score <= 0.7:\n                    continue\n                x1 = box[0].item()\n                y1 = box[1].item()\n          
      x2 = box[2].item()\n                y2 = box[3].item()\n                files[outfile].append(\n                    [frame, -1, x1, y1, x2 - x1, y2 - y1, score.item(), -1, -1, -1])\n\n        for k, v in files.items():\n            with open(k, \"w\") as of:\n                writer = csv.writer(of, delimiter=',')\n                for d in v:\n                    writer.writerow(d)\n\n\nclass WeightedConcatDataset(torch.utils.data.ConcatDataset):\n\n    def sample_weight(self, idx):\n        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)\n        if dataset_idx == 0:\n            sample_idx = idx\n        else:\n            sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]\n\n        if hasattr(self.datasets[dataset_idx], 'sample_weight'):\n            return self.datasets[dataset_idx].sample_weight(sample_idx)\n        else:\n            return 1 / len(self.datasets[dataset_idx])\n\n\ndef build_mot(image_set, args):\n    if image_set == 'train':\n        root = Path(args.mot_path_train)\n        prev_frame_rnd_augs = args.track_prev_frame_rnd_augs\n        prev_frame_range=args.track_prev_frame_range\n    elif image_set == 'val':\n        root = Path(args.mot_path_val)\n        prev_frame_rnd_augs = 0.0\n        prev_frame_range = 1\n    else:\n        ValueError(f'unknown {image_set}')\n\n    assert root.exists(), f'provided MOT17Det path {root} does not exist'\n\n    split = getattr(args, f\"{image_set}_split\")\n\n    img_folder = root / split\n    ann_file = root / f\"annotations/{split}.json\"\n\n    transforms, norm_transforms = make_coco_transforms(\n        image_set, args.img_transform, args.overflow_boxes)\n\n    dataset = MOT(\n        img_folder, ann_file, transforms, norm_transforms,\n        prev_frame_range=prev_frame_range,\n        return_masks=args.masks,\n        overflow_boxes=args.overflow_boxes,\n        remove_no_obj_imgs=False,\n        prev_frame=args.tracking,\n        prev_frame_rnd_augs=prev_frame_rnd_augs,\n        prev_prev_frame=args.track_prev_prev_frame,\n        )\n\n    return dataset\n\n\ndef build_mot_crowdhuman(image_set, args):\n    if image_set == 'train':\n        args_crowdhuman = copy.deepcopy(args)\n        args_crowdhuman.train_split = args.crowdhuman_train_split\n\n        crowdhuman_dataset = build_crowdhuman('train', args_crowdhuman)\n\n        if getattr(args, f\"{image_set}_split\") is None:\n            return crowdhuman_dataset\n\n    dataset = build_mot(image_set, args)\n\n    if image_set == 'train':\n        dataset = torch.utils.data.ConcatDataset(\n            [dataset, crowdhuman_dataset])\n\n    return dataset\n\n\ndef build_mot_coco_person(image_set, args):\n    if image_set == 'train':\n        args_coco_person = copy.deepcopy(args)\n        args_coco_person.train_split = args.coco_person_train_split\n\n        coco_person_dataset = build_coco('train', args_coco_person, 'person_keypoints')\n\n        if getattr(args, f\"{image_set}_split\") is None:\n            return coco_person_dataset\n\n    dataset = build_mot(image_set, args)\n\n    if image_set == 'train':\n        dataset = torch.utils.data.ConcatDataset(\n            [dataset, coco_person_dataset])\n\n    return dataset\n"
  },
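`MOT.sample_weight` returns the inverse sequence length and `WeightedConcatDataset` forwards it per concatenated dataset, so frames from long sequences are not over-sampled. A sketch of feeding these weights into a standard PyTorch sampler; the sampler wiring is an assumed use, and `mot_dataset`/`crowdhuman_dataset` stand in for datasets built by the functions above:

# Hedged sketch: per-sample weights for a WeightedRandomSampler.
import torch
from torch.utils.data import WeightedRandomSampler

from trackformer.datasets.mot import WeightedConcatDataset

dataset = WeightedConcatDataset([mot_dataset, crowdhuman_dataset])
weights = torch.tensor([dataset.sample_weight(i) for i in range(len(dataset))])

# Draw len(dataset) samples per epoch, proportionally to the weights.
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)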
  {
    "path": "src/trackformer/datasets/panoptic_eval.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport json\nimport os\n\nfrom ..util import misc as utils\n\ntry:\n    from panopticapi.evaluation import pq_compute\nexcept ImportError:\n    pass\n\n\nclass PanopticEvaluator(object):\n    def __init__(self, ann_file, ann_folder, output_dir=\"panoptic_eval\"):\n        self.gt_json = ann_file\n        self.gt_folder = ann_folder\n        if utils.is_main_process():\n            if not os.path.exists(output_dir):\n                os.mkdir(output_dir)\n        self.output_dir = output_dir\n        self.predictions = []\n\n    def update(self, predictions):\n        for p in predictions:\n            with open(os.path.join(self.output_dir, p[\"file_name\"]), \"wb\") as f:\n                f.write(p.pop(\"png_string\"))\n\n        self.predictions += predictions\n\n    def synchronize_between_processes(self):\n        all_predictions = utils.all_gather(self.predictions)\n        merged_predictions = []\n        for p in all_predictions:\n            merged_predictions += p\n        self.predictions = merged_predictions\n\n    def summarize(self):\n        if utils.is_main_process():\n            json_data = {\"annotations\": self.predictions}\n            predictions_json = os.path.join(self.output_dir, \"predictions.json\")\n            with open(predictions_json, \"w\") as f:\n                f.write(json.dumps(json_data))\n            return pq_compute(\n                self.gt_json, predictions_json,\n                gt_folder=self.gt_folder, pred_folder=self.output_dir)\n        return None\n"
  },
  {
    "path": "src/trackformer/datasets/tracking/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nSubmodule interface.\n\"\"\"\nfrom .factory import TrackDatasetFactory\n"
  },
  {
    "path": "src/trackformer/datasets/tracking/demo_sequence.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT17 sequence dataset.\n\"\"\"\nimport configparser\nimport csv\nimport os\nfrom pathlib import Path\nimport os.path as osp\nfrom argparse import Namespace\nfrom typing import Optional, Tuple, List\n\nimport numpy as np\nimport torch\nfrom PIL import Image\nfrom torch.utils.data import Dataset\n\nfrom ..coco import make_coco_transforms\nfrom ..transforms import Compose\n\n\nclass DemoSequence(Dataset):\n    \"\"\"DemoSequence (MOT17) Dataset.\n    \"\"\"\n\n    def __init__(self, root_dir: str = 'data', img_transform: Namespace = None) -> None:\n        \"\"\"\n        Args:\n            seq_name (string): Sequence to take\n            vis_threshold (float): Threshold of visibility of persons\n                                   above which they are selected\n        \"\"\"\n        super().__init__()\n\n        self._data_dir = Path(root_dir)\n        assert self._data_dir.is_dir(), f'data_root_dir:{root_dir} does not exist.'\n\n        self.transforms = Compose(make_coco_transforms('val', img_transform, overflow_boxes=True))\n\n        self.data = self._sequence()\n        self.no_gt = True\n\n    def __len__(self) -> int:\n        return len(self.data)\n\n    def __str__(self) -> str:\n        return self._data_dir.name\n\n    def __getitem__(self, idx: int) -> dict:\n        \"\"\"Return the ith image converted to blob\"\"\"\n        data = self.data[idx]\n        img = Image.open(data['im_path']).convert(\"RGB\")\n        width_orig, height_orig = img.size\n\n        img, _ = self.transforms(img)\n        width, height = img.size(2), img.size(1)\n\n        sample = {}\n        sample['img'] = img\n        sample['img_path'] = data['im_path']\n        sample['dets'] = torch.tensor([])\n        sample['orig_size'] = torch.as_tensor([int(height_orig), int(width_orig)])\n        sample['size'] = torch.as_tensor([int(height), int(width)])\n\n        return sample\n\n    def _sequence(self) -> List[dict]:\n        total = []\n        for filename in sorted(os.listdir(self._data_dir)):\n            extension = os.path.splitext(filename)[1]\n            if extension in ['.png', '.jpg']:\n                total.append({'im_path': osp.join(self._data_dir, filename)})\n\n        return total\n\n    def load_results(self, results_dir: str) -> dict:\n        return {}\n\n    def write_results(self, results: dict, output_dir: str) -> None:\n        \"\"\"Write the tracks in the format for MOT16/MOT17 sumbission\n\n        results: dictionary with 1 dictionary for every track with\n                 {..., i:np.array([x1,y1,x2,y2]), ...} at key track_num\n\n        Each file contains these lines:\n        <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>\n        \"\"\"\n\n        # format_str = \"{}, -1, {}, {}, {}, {}, {}, -1, -1, -1\"\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n\n        result_file_path = osp.join(output_dir, self._data_dir.name)\n\n        with open(result_file_path, \"w\") as r_file:\n            writer = csv.writer(r_file, delimiter=',')\n\n            for i, track in results.items():\n                for frame, data in track.items():\n                    x1 = data['bbox'][0]\n                    y1 = data['bbox'][1]\n                    x2 = data['bbox'][2]\n                    y2 = data['bbox'][3]\n\n                    writer.writerow([\n                        frame + 1,\n                        i + 1,\n        
                x1 + 1,\n                        y1 + 1,\n                        x2 - x1 + 1,\n                        y2 - y1 + 1,\n                        -1, -1, -1, -1])\n"
  },
  {
    "path": "src/trackformer/datasets/tracking/factory.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nFactory of tracking datasets.\n\"\"\"\nfrom typing import Union\n\nfrom torch.utils.data import ConcatDataset\n\nfrom .demo_sequence import DemoSequence\nfrom .mot_wrapper import MOT17Wrapper, MOT20Wrapper, MOTS20Wrapper\n\nDATASETS = {}\n\n# Fill all available datasets, change here to modify / add new datasets.\nfor split in ['TRAIN', 'TEST', 'ALL', '01', '02', '03', '04', '05',\n              '06', '07', '08', '09', '10', '11', '12', '13', '14']:\n    for dets in ['DPM', 'FRCNN', 'SDP', 'ALL']:\n        name = f'MOT17-{split}'\n        if dets:\n            name = f\"{name}-{dets}\"\n        DATASETS[name] = (\n            lambda kwargs, split=split, dets=dets: MOT17Wrapper(split, dets, **kwargs))\n\n\nfor split in ['TRAIN', 'TEST', 'ALL', '01', '02', '03', '04', '05',\n              '06', '07', '08']:\n    name = f'MOT20-{split}'\n    DATASETS[name] = (\n        lambda kwargs, split=split: MOT20Wrapper(split, **kwargs))\n\n\nfor split in ['TRAIN', 'TEST', 'ALL', '01', '02', '05', '06', '07', '09', '11', '12']:\n    name = f'MOTS20-{split}'\n    DATASETS[name] = (\n        lambda kwargs, split=split: MOTS20Wrapper(split, **kwargs))\n\nDATASETS['DEMO'] = (lambda kwargs: [DemoSequence(**kwargs), ])\n\n\nclass TrackDatasetFactory:\n    \"\"\"A central class to manage the individual dataset loaders.\n\n    This class contains the datasets. Once initialized the individual parts (e.g. sequences)\n    can be accessed.\n    \"\"\"\n\n    def __init__(self, datasets: Union[str, list], **kwargs) -> None:\n        \"\"\"Initialize the corresponding dataloader.\n\n        Keyword arguments:\n        datasets --  the name of the dataset or list of dataset names\n        kwargs -- arguments used to call the datasets\n        \"\"\"\n        if isinstance(datasets, str):\n            datasets = [datasets]\n\n        self._data = None\n        for dataset in datasets:\n            assert dataset in DATASETS, f\"[!] Dataset not found: {dataset}\"\n\n            if self._data is None:\n                self._data = DATASETS[dataset](kwargs)\n            else:\n                self._data = ConcatDataset([self._data, DATASETS[dataset](kwargs)])\n\n    def __len__(self) -> int:\n        return len(self._data)\n\n    def __getitem__(self, idx: int):\n        return self._data[idx]\n"
  },
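A small usage sketch for `TrackDatasetFactory`: names index the `DATASETS` registry above and keyword arguments are forwarded to the underlying sequence classes (`root_dir='data'` is an assumed local path containing a `MOT17` folder):

# Hedged sketch: build two MOT17 sequences and iterate over them.
from trackformer.datasets.tracking import TrackDatasetFactory

datasets = TrackDatasetFactory(
    ['MOT17-02-FRCNN', 'MOT17-05-FRCNN'],  # registry keys; concatenated internally
    root_dir='data',                        # assumed path with MOT17/train and MOT17/test
    img_transform=None)

for seq in datasets:       # each item is a single sequence dataset
    print(seq, len(seq))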
  {
    "path": "src/trackformer/datasets/tracking/mot17_sequence.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT17 sequence dataset.\n\"\"\"\nimport configparser\nimport csv\nimport os\nimport os.path as osp\nfrom argparse import Namespace\nfrom typing import Optional, Tuple, List\n\nimport numpy as np\nimport torch\nfrom PIL import Image\nfrom torch.utils.data import Dataset\n\nfrom ..coco import make_coco_transforms\nfrom ..transforms import Compose\n\n\nclass MOT17Sequence(Dataset):\n    \"\"\"Multiple Object Tracking (MOT17) Dataset.\n\n    This dataloader is designed so that it can handle only one sequence,\n    if more have to be handled one should inherit from this class.\n    \"\"\"\n    data_folder = 'MOT17'\n\n    def __init__(self, root_dir: str = 'data', seq_name: Optional[str] = None,\n                 dets: str = '', vis_threshold: float = 0.0, img_transform: Namespace = None) -> None:\n        \"\"\"\n        Args:\n            seq_name (string): Sequence to take\n            vis_threshold (float): Threshold of visibility of persons\n                                   above which they are selected\n        \"\"\"\n        super().__init__()\n\n        self._seq_name = seq_name\n        self._dets = dets\n        self._vis_threshold = vis_threshold\n\n        self._data_dir = osp.join(root_dir, self.data_folder)\n\n        self._train_folders = os.listdir(os.path.join(self._data_dir, 'train'))\n        self._test_folders = os.listdir(os.path.join(self._data_dir, 'test'))\n\n        self.transforms = Compose(make_coco_transforms('val', img_transform, overflow_boxes=True))\n\n        self.data = []\n        self.no_gt = True\n        if seq_name is not None:\n            full_seq_name = seq_name\n            if self._dets is not None:\n                full_seq_name = f\"{seq_name}-{dets}\"\n            assert full_seq_name in self._train_folders or full_seq_name in self._test_folders, \\\n                'Image set does not exist: {}'.format(full_seq_name)\n\n            self.data = self._sequence()\n            self.no_gt = not osp.exists(self.get_gt_file_path())\n\n    def __len__(self) -> int:\n        return len(self.data)\n\n    def __getitem__(self, idx: int) -> dict:\n        \"\"\"Return the ith image converted to blob\"\"\"\n        data = self.data[idx]\n        img = Image.open(data['im_path']).convert(\"RGB\")\n        width_orig, height_orig = img.size\n\n        img, _ = self.transforms(img)\n        width, height = img.size(2), img.size(1)\n\n        sample = {}\n        sample['img'] = img\n        sample['dets'] = torch.tensor([det[:4] for det in data['dets']])\n        sample['img_path'] = data['im_path']\n        sample['gt'] = data['gt']\n        sample['vis'] = data['vis']\n        sample['orig_size'] = torch.as_tensor([int(height_orig), int(width_orig)])\n        sample['size'] = torch.as_tensor([int(height), int(width)])\n\n        return sample\n\n    def _sequence(self) -> List[dict]:\n        # public detections\n        dets = {i: [] for i in range(1, self.seq_length + 1)}\n        det_file = self.get_det_file_path()\n\n        if osp.exists(det_file):\n            with open(det_file, \"r\") as inf:\n                reader = csv.reader(inf, delimiter=',')\n                for row in reader:\n                    x1 = float(row[2]) - 1\n                    y1 = float(row[3]) - 1\n                    # This -1 accounts for the width (width of 1 x1=x2)\n                    x2 = x1 + float(row[4]) - 1\n                    y2 = y1 + float(row[5]) - 1\n                   
 score = float(row[6])\n                    bbox = np.array([x1, y1, x2, y2, score], dtype=np.float32)\n                    dets[int(float(row[0]))].append(bbox)\n\n        # accumulate total\n        img_dir = osp.join(\n            self.get_seq_path(),\n            self.config['Sequence']['imDir'])\n\n        boxes, visibility = self.get_track_boxes_and_visbility()\n\n        total = [\n            {'gt': boxes[i],\n             'im_path': osp.join(img_dir, f\"{i:06d}.jpg\"),\n             'vis': visibility[i],\n             'dets': dets[i]}\n            for i in range(1, self.seq_length + 1)]\n\n        return total\n\n    def get_track_boxes_and_visbility(self) -> Tuple[dict, dict]:\n        \"\"\" Load ground truth boxes and their visibility.\"\"\"\n        boxes = {}\n        visibility = {}\n\n        for i in range(1, self.seq_length + 1):\n            boxes[i] = {}\n            visibility[i] = {}\n\n        gt_file = self.get_gt_file_path()\n        if not osp.exists(gt_file):\n            return boxes, visibility\n\n        with open(gt_file, \"r\") as inf:\n            reader = csv.reader(inf, delimiter=',')\n            for row in reader:\n                # class person, certainity 1\n                if int(row[6]) == 1 and int(row[7]) == 1 and float(row[8]) >= self._vis_threshold:\n                    # Make pixel indexes 0-based, should already be 0-based (or not)\n                    x1 = int(row[2]) - 1\n                    y1 = int(row[3]) - 1\n                    # This -1 accounts for the width (width of 1 x1=x2)\n                    x2 = x1 + int(row[4]) - 1\n                    y2 = y1 + int(row[5]) - 1\n                    bbox = np.array([x1, y1, x2, y2], dtype=np.float32)\n\n                    frame_id = int(row[0])\n                    track_id = int(row[1])\n\n                    boxes[frame_id][track_id] = bbox\n                    visibility[frame_id][track_id] = float(row[8])\n\n        return boxes, visibility\n\n    def get_seq_path(self) -> str:\n        \"\"\" Return directory path of sequence. \"\"\"\n        full_seq_name = self._seq_name\n        if self._dets is not None:\n            full_seq_name = f\"{self._seq_name}-{self._dets}\"\n\n        if full_seq_name in self._train_folders:\n            return osp.join(self._data_dir, 'train', full_seq_name)\n        else:\n            return osp.join(self._data_dir, 'test', full_seq_name)\n\n    def get_config_file_path(self) -> str:\n        \"\"\" Return config file of sequence. \"\"\"\n        return osp.join(self.get_seq_path(), 'seqinfo.ini')\n\n    def get_gt_file_path(self) -> str:\n        \"\"\" Return ground truth file of sequence. \"\"\"\n        return osp.join(self.get_seq_path(), 'gt', 'gt.txt')\n\n    def get_det_file_path(self) -> str:\n        \"\"\" Return public detections file of sequence. \"\"\"\n        if self._dets is None:\n            return \"\"\n\n        return osp.join(self.get_seq_path(), 'det', 'det.txt')\n\n    @property\n    def config(self) -> dict:\n        \"\"\" Return config of sequence. \"\"\"\n        config_file = self.get_config_file_path()\n\n        assert osp.exists(config_file), \\\n            f'Config file does not exist: {config_file}'\n\n        config = configparser.ConfigParser()\n        config.read(config_file)\n        return config\n\n    @property\n    def seq_length(self) -> int:\n        \"\"\" Return sequence length, i.e, number of frames. 
\"\"\"\n        return int(self.config['Sequence']['seqLength'])\n\n    def __str__(self) -> str:\n        return f\"{self._seq_name}-{self._dets}\"\n\n    @property\n    def results_file_name(self) -> str:\n        \"\"\" Generate file name of results file. \"\"\"\n        assert self._seq_name is not None, \"[!] No seq_name, probably using combined database\"\n\n        if self._dets is None:\n            return f\"{self._seq_name}.txt\"\n\n        return f\"{self}.txt\"\n\n    def write_results(self, results: dict, output_dir: str) -> None:\n        \"\"\"Write the tracks in the format for MOT16/MOT17 sumbission\n\n        results: dictionary with 1 dictionary for every track with\n                 {..., i:np.array([x1,y1,x2,y2]), ...} at key track_num\n\n        Each file contains these lines:\n        <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>\n        \"\"\"\n\n        # format_str = \"{}, -1, {}, {}, {}, {}, {}, -1, -1, -1\"\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n\n        result_file_path = osp.join(output_dir, self.results_file_name)\n\n        with open(result_file_path, \"w\") as r_file:\n            writer = csv.writer(r_file, delimiter=',')\n\n            for i, track in results.items():\n                for frame, data in track.items():\n                    x1 = data['bbox'][0]\n                    y1 = data['bbox'][1]\n                    x2 = data['bbox'][2]\n                    y2 = data['bbox'][3]\n\n                    writer.writerow([\n                        frame + 1,\n                        i + 1,\n                        x1 + 1,\n                        y1 + 1,\n                        x2 - x1 + 1,\n                        y2 - y1 + 1,\n                        -1, -1, -1, -1])\n\n    def load_results(self, results_dir: str) -> dict:\n        results = {}\n        if results_dir is None:\n            return results\n\n        file_path = osp.join(results_dir, self.results_file_name)\n\n        if not os.path.isfile(file_path):\n            return results\n\n        with open(file_path, \"r\") as file:\n            csv_reader = csv.reader(file, delimiter=',')\n\n            for row in csv_reader:\n                frame_id, track_id = int(row[0]) - 1, int(row[1]) - 1\n\n                if track_id not in results:\n                    results[track_id] = {}\n\n                x1 = float(row[2]) - 1\n                y1 = float(row[3]) - 1\n                x2 = float(row[4]) - 1 + x1\n                y2 = float(row[5]) - 1 + y1\n\n                results[track_id][frame_id] = {}\n                results[track_id][frame_id]['bbox'] = [x1, y1, x2, y2]\n                results[track_id][frame_id]['score'] = 1.0\n\n        return results\n\n"
  },
  {
    "path": "src/trackformer/datasets/tracking/mot20_sequence.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT20 sequence dataset.\n\"\"\"\n\nfrom .mot17_sequence import MOT17Sequence\n\n\nclass MOT20Sequence(MOT17Sequence):\n    \"\"\"Multiple Object Tracking (MOT20) Dataset.\n\n    This dataloader is designed so that it can handle only one sequence,\n    if more have to be handled one should inherit from this class.\n    \"\"\"\n    data_folder = 'MOT20'\n"
  },
  {
    "path": "src/trackformer/datasets/tracking/mot_wrapper.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT wrapper which combines sequences to a dataset.\n\"\"\"\nfrom torch.utils.data import Dataset\n\nfrom .mot17_sequence import MOT17Sequence\nfrom .mot20_sequence import MOT20Sequence\nfrom .mots20_sequence import MOTS20Sequence\n\n\nclass MOT17Wrapper(Dataset):\n    \"\"\"A Wrapper for the MOT_Sequence class to return multiple sequences.\"\"\"\n\n    def __init__(self, split: str, dets: str, **kwargs) -> None:\n        \"\"\"Initliazes all subset of the dataset.\n\n        Keyword arguments:\n        split -- the split of the dataset to use\n        kwargs -- kwargs for the MOT17Sequence dataset\n        \"\"\"\n        train_sequences = [\n            'MOT17-02', 'MOT17-04', 'MOT17-05', 'MOT17-09',\n            'MOT17-10', 'MOT17-11', 'MOT17-13']\n        test_sequences = [\n            'MOT17-01', 'MOT17-03', 'MOT17-06', 'MOT17-07',\n            'MOT17-08', 'MOT17-12', 'MOT17-14']\n\n        if split == \"TRAIN\":\n            sequences = train_sequences\n        elif split == \"TEST\":\n            sequences = test_sequences\n        elif split == \"ALL\":\n            sequences = train_sequences + test_sequences\n            sequences = sorted(sequences)\n        elif f\"MOT17-{split}\" in train_sequences + test_sequences:\n            sequences = [f\"MOT17-{split}\"]\n        else:\n            raise NotImplementedError(\"MOT17 split not available.\")\n\n        self._data = []\n        for seq in sequences:\n            if dets == 'ALL':\n                self._data.append(MOT17Sequence(seq_name=seq, dets='DPM', **kwargs))\n                self._data.append(MOT17Sequence(seq_name=seq, dets='FRCNN', **kwargs))\n                self._data.append(MOT17Sequence(seq_name=seq, dets='SDP', **kwargs))\n            else:\n                self._data.append(MOT17Sequence(seq_name=seq, dets=dets, **kwargs))\n\n    def __len__(self) -> int:\n        return len(self._data)\n\n    def __getitem__(self, idx: int):\n        return self._data[idx]\n\n\nclass MOT20Wrapper(Dataset):\n    \"\"\"A Wrapper for the MOT_Sequence class to return multiple sequences.\"\"\"\n\n    def __init__(self, split: str, **kwargs) -> None:\n        \"\"\"Initliazes all subset of the dataset.\n\n        Keyword arguments:\n        split -- the split of the dataset to use\n        kwargs -- kwargs for the MOT20Sequence dataset\n        \"\"\"\n        train_sequences = ['MOT20-01', 'MOT20-02', 'MOT20-03', 'MOT20-05',]\n        test_sequences = ['MOT20-04', 'MOT20-06', 'MOT20-07', 'MOT20-08',]\n\n        if split == \"TRAIN\":\n            sequences = train_sequences\n        elif split == \"TEST\":\n            sequences = test_sequences\n        elif split == \"ALL\":\n            sequences = train_sequences + test_sequences\n            sequences = sorted(sequences)\n        elif f\"MOT20-{split}\" in train_sequences + test_sequences:\n            sequences = [f\"MOT20-{split}\"]\n        else:\n            raise NotImplementedError(\"MOT20 split not available.\")\n\n        self._data = []\n        for seq in sequences:\n            self._data.append(MOT20Sequence(seq_name=seq, dets=None, **kwargs))\n\n    def __len__(self) -> int:\n        return len(self._data)\n\n    def __getitem__(self, idx: int):\n        return self._data[idx]\n\n\nclass MOTS20Wrapper(MOT17Wrapper):\n    \"\"\"A Wrapper for the MOT_Sequence class to return multiple sequences.\"\"\"\n\n    def __init__(self, split: str, **kwargs) -> None:\n        
\"\"\"Initliazes all subset of the dataset.\n\n        Keyword arguments:\n        split -- the split of the dataset to use\n        kwargs -- kwargs for the MOTS20Sequence dataset\n        \"\"\"\n        train_sequences = ['MOTS20-02', 'MOTS20-05', 'MOTS20-09', 'MOTS20-11']\n        test_sequences = ['MOTS20-01', 'MOTS20-06', 'MOTS20-07', 'MOTS20-12']\n\n        if split == \"TRAIN\":\n            sequences = train_sequences\n        elif split == \"TEST\":\n            sequences = test_sequences\n        elif split == \"ALL\":\n            sequences = train_sequences + test_sequences\n            sequences = sorted(sequences)\n        elif f\"MOTS20-{split}\" in train_sequences + test_sequences:\n            sequences = [f\"MOTS20-{split}\"]\n        else:\n            raise NotImplementedError(\"MOTS20 split not available.\")\n\n        self._data = []\n        for seq in sequences:\n            self._data.append(MOTS20Sequence(seq_name=seq, **kwargs))\n"
  },
  {
    "path": "src/trackformer/datasets/tracking/mots20_sequence.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOTS20 sequence dataset.\n\"\"\"\nimport csv\nimport os\nimport os.path as osp\nfrom argparse import Namespace\nfrom typing import Optional, Tuple\n\nimport numpy as np\nimport pycocotools.mask as rletools\n\nfrom .mot17_sequence import MOT17Sequence\n\n\nclass MOTS20Sequence(MOT17Sequence):\n    \"\"\"Multiple Object and Segmentation Tracking (MOTS20) Dataset.\n\n    This dataloader is designed so that it can handle only one sequence,\n    if more have to be handled one should inherit from this class.\n    \"\"\"\n    data_folder = 'MOTS20'\n\n    def __init__(self, root_dir: str = 'data', seq_name: Optional[str] = None,\n                 vis_threshold: float = 0.0, img_transform: Namespace = None) -> None:\n        \"\"\"\n        Args:\n            seq_name (string): Sequence to take\n            vis_threshold (float): Threshold of visibility of persons\n                                   above which they are selected\n        \"\"\"\n        super().__init__(root_dir, seq_name, None, vis_threshold, img_transform)\n\n    def get_track_boxes_and_visbility(self) -> Tuple[dict, dict]:\n        boxes = {}\n        visibility = {}\n\n        for i in range(1, self.seq_length + 1):\n            boxes[i] = {}\n            visibility[i] = {}\n\n        gt_file = self.get_gt_file_path()\n        if not osp.exists(gt_file):\n            return boxes, visibility\n\n        mask_objects_per_frame = load_mots_gt(gt_file)\n        for frame_id, mask_objects in mask_objects_per_frame.items():\n            for mask_object in mask_objects:\n                # class_id = 1 is car\n                # class_id = 2 is pedestrian\n                # class_id = 10 IGNORE\n                if mask_object.class_id in [1, 10]:\n                    continue\n\n                bbox = rletools.toBbox(mask_object.mask)\n                x1, y1, w, h = [int(c) for c in bbox]\n                bbox = np.array([x1, y1, x1 + w, y1 + h], dtype=np.float32)\n\n                # area = bbox[2] * bbox[3]\n                # image_id = img_file_name_to_id[f\"{seq}_{frame_id:06d}.jpg\"]\n\n                # segmentation = {\n                #     'size': mask_object.mask['size'],\n                #     'counts': mask_object.mask['counts'].decode(encoding='UTF-8')}\n\n                boxes[frame_id][mask_object.track_id] = bbox\n                visibility[frame_id][mask_object.track_id] = 1.0\n\n        return boxes, visibility\n\n    def write_results(self, results: dict, output_dir: str) -> None:\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n\n        result_file_path = osp.join(output_dir, f\"{self._seq_name}.txt\")\n\n        with open(result_file_path, \"w\") as res_file:\n            writer = csv.writer(res_file, delimiter=' ')\n            for i, track in results.items():\n                for frame, data in track.items():\n                    mask = np.asfortranarray(data['mask'])\n                    rle_mask = rletools.encode(mask)\n\n                    writer.writerow([\n                        frame + 1,\n                        i + 1,\n                        2,  # class pedestrian\n                        mask.shape[0],\n                        mask.shape[1],\n                        rle_mask['counts'].decode(encoding='UTF-8')])\n\n    def load_results(self, results_dir: str) -> dict:\n        results = {}\n\n        if results_dir is None:\n            return results\n\n        file_path = 
osp.join(results_dir, self.results_file_name)\n\n        if not os.path.isfile(file_path):\n            return results\n\n        mask_objects_per_frame = load_mots_gt(file_path)\n\n        for frame_id, mask_objects in mask_objects_per_frame.items():\n            for mask_object in mask_objects:\n                # class_id = 1 is car\n                # class_id = 2 is pedestrian\n                # class_id = 10 IGNORE\n                if mask_object.class_id in [1, 10]:\n                    continue\n\n                bbox = rletools.toBbox(mask_object.mask)\n                x1, y1, w, h = [int(c) for c in bbox]\n                bbox = np.array([x1, y1, x1 + w, y1 + h], dtype=np.float32)\n\n                # area = bbox[2] * bbox[3]\n                # image_id = img_file_name_to_id[f\"{seq}_{frame_id:06d}.jpg\"]\n\n                # segmentation = {\n                #     'size': mask_object.mask['size'],\n                #     'counts': mask_object.mask['counts'].decode(encoding='UTF-8')}\n\n                track_id = mask_object.track_id - 1\n                if track_id not in results:\n                    results[track_id] = {}\n\n                results[track_id][frame_id - 1] = {}\n                results[track_id][frame_id - 1]['mask'] = rletools.decode(mask_object.mask)\n                results[track_id][frame_id - 1]['bbox'] = bbox.tolist()\n                results[track_id][frame_id - 1]['score'] = 1.0\n\n        return results\n\n    def __str__(self) -> str:\n        return self._seq_name\n\n\nclass SegmentedObject:\n    \"\"\"\n    Helper class for segmentation objects.\n    \"\"\"\n    def __init__(self, mask: dict, class_id: int, track_id: int) -> None:\n        self.mask = mask\n        self.class_id = class_id\n        self.track_id = track_id\n\n\ndef load_mots_gt(path: str) -> dict:\n    \"\"\"Load MOTS ground truth from path.\"\"\"\n    objects_per_frame = {}\n    track_ids_per_frame = {}  # Check that no frame contains two objects with same id\n    combined_mask_per_frame = {}  # Check that no frame contains overlapping masks\n\n    with open(path, \"r\") as gt_file:\n        for line in gt_file:\n            line = line.strip()\n            fields = line.split(\" \")\n\n            frame = int(fields[0])\n            if frame not in objects_per_frame:\n                objects_per_frame[frame] = []\n            if frame not in track_ids_per_frame:\n                track_ids_per_frame[frame] = set()\n            if int(fields[1]) in track_ids_per_frame[frame]:\n                assert False, f\"Multiple objects with track id {fields[1]} in frame {fields[0]}\"\n            else:\n                track_ids_per_frame[frame].add(int(fields[1]))\n\n            class_id = int(fields[2])\n            if not(class_id == 1 or class_id == 2 or class_id == 10):\n                assert False, \"Unknown object class \" + fields[2]\n\n            mask = {\n                'size': [int(fields[3]), int(fields[4])],\n                'counts': fields[5].encode(encoding='UTF-8')}\n            if frame not in combined_mask_per_frame:\n                combined_mask_per_frame[frame] = mask\n            elif rletools.area(rletools.merge([\n                    combined_mask_per_frame[frame], mask],\n                    intersect=True)):\n                assert False, \"Objects with overlapping masks in frame \" + fields[0]\n            else:\n                combined_mask_per_frame[frame] = rletools.merge(\n                    [combined_mask_per_frame[frame], mask],\n                    
intersect=False)\n            objects_per_frame[frame].append(SegmentedObject(\n                mask,\n                class_id,\n                int(fields[1])\n            ))\n\n    return objects_per_frame\n"
  },
  {
    "path": "src/trackformer/datasets/transforms.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nTransforms and data augmentation for both image + bbox.\n\"\"\"\nimport random\nfrom typing import Union\n\nimport PIL\nimport torch\nimport torchvision.transforms as T\nimport torchvision.transforms.functional as F\n\nfrom ..util.box_ops import box_xyxy_to_cxcywh\nfrom ..util.misc import interpolate\n\n\ndef crop(image, target, region, overflow_boxes=False):\n    i, j, h, w = region\n    target = target.copy()\n\n    if isinstance(image, torch.Tensor):\n        cropped_image = image[:, j:j + w, i:i + h]\n    else:\n        cropped_image = F.crop(image, *region)\n\n    # should we do something wrt the original size?\n    target[\"size\"] = torch.tensor([h, w])\n\n    fields = [\"labels\", \"area\", \"iscrowd\", \"ignore\", \"track_ids\"]\n\n    orig_area = target[\"area\"]\n\n    if \"boxes\" in target:\n        boxes = target[\"boxes\"]\n        max_size = torch.as_tensor([w, h], dtype=torch.float32)\n        cropped_boxes = boxes - torch.as_tensor([j, i, j, i])\n\n        if overflow_boxes:\n            for i, box in enumerate(cropped_boxes):\n                l, t, r, b = box\n                if l < 0 and r < 0:\n                    l = r = 0\n                if l > w and r > w:\n                    l = r = w\n                if t < 0 and b < 0:\n                    t = b = 0\n                if t > h and b > h:\n                    t = b = h\n                cropped_boxes[i] = torch.tensor([l, t, r, b], dtype=box.dtype)\n            cropped_boxes = cropped_boxes.reshape(-1, 2, 2)\n        else:\n            cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)\n            cropped_boxes = cropped_boxes.clamp(min=0)\n\n        area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)\n        target[\"boxes\"] = cropped_boxes.reshape(-1, 4)\n        target[\"area\"] = area\n        fields.append(\"boxes\")\n\n    if \"masks\" in target:\n        # FIXME should we update the area here if there are no boxes?\n        target['masks'] = target['masks'][:, i:i + h, j:j + w]\n        fields.append(\"masks\")\n\n    # remove elements for which the boxes or masks that have zero area\n    if \"boxes\" in target or \"masks\" in target:\n        # favor boxes selection when defining which elements to keep\n        # this is compatible with previous implementation\n        if \"boxes\" in target:\n            cropped_boxes = target['boxes'].reshape(-1, 2, 2)\n            keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)\n\n            # new area must be at least % of orginal area\n            # keep = target[\"area\"] >= orig_area * 0.2\n        else:\n            keep = target['masks'].flatten(1).any(1)\n\n        for field in fields:\n            if field in target:\n                target[field] = target[field][keep]\n\n    return cropped_image, target\n\n\ndef hflip(image, target):\n    if isinstance(image, torch.Tensor):\n        flipped_image = image.flip(-1)\n        _, width, _ = image.size()\n    else:\n        flipped_image = F.hflip(image)\n        width, _ = image.size\n\n    target = target.copy()\n\n    if \"boxes\" in target:\n        boxes = target[\"boxes\"]\n        boxes = boxes[:, [2, 1, 0, 3]] \\\n            * torch.as_tensor([-1, 1, -1, 1]) \\\n            + torch.as_tensor([width, 0, width, 0])\n        target[\"boxes\"] = boxes\n\n    if \"boxes_ignore\" in target:\n        boxes = target[\"boxes_ignore\"]\n        boxes = boxes[:, 
[2, 1, 0, 3]] \\\n            * torch.as_tensor([-1, 1, -1, 1]) \\\n            + torch.as_tensor([width, 0, width, 0])\n        target[\"boxes_ignore\"] = boxes\n\n    if \"masks\" in target:\n        target['masks'] = target['masks'].flip(-1)\n\n    return flipped_image, target\n\n\ndef resize(image, target, size, max_size=None):\n    # size can be min_size (scalar) or (w, h) tuple\n\n    def get_size_with_aspect_ratio(image_size, size, max_size=None):\n        w, h = image_size\n        if max_size is not None:\n            min_original_size = float(min((w, h)))\n            max_original_size = float(max((w, h)))\n            if max_original_size / min_original_size * size > max_size:\n                size = int(round(max_size * min_original_size / max_original_size))\n\n        if (w <= h and w == size) or (h <= w and h == size):\n            return (h, w)\n\n        if w < h:\n            ow = size\n            oh = int(size * h / w)\n        else:\n            oh = size\n            ow = int(size * w / h)\n\n        return (oh, ow)\n\n    def get_size(image_size, size, max_size=None):\n        if isinstance(size, (list, tuple)):\n            return size[::-1]\n        else:\n            return get_size_with_aspect_ratio(image_size, size, max_size)\n\n    size = get_size(image.size, size, max_size)\n    rescaled_image = F.resize(image, size)\n\n    if target is None:\n        return rescaled_image, None\n\n    ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))\n    ratio_width, ratio_height = ratios\n\n    target = target.copy()\n    if \"boxes\" in target:\n        boxes = target[\"boxes\"]\n        scaled_boxes = boxes \\\n            * torch.as_tensor([ratio_width, ratio_height, ratio_width, ratio_height])\n        target[\"boxes\"] = scaled_boxes\n\n    if \"area\" in target:\n        area = target[\"area\"]\n        scaled_area = area * (ratio_width * ratio_height)\n        target[\"area\"] = scaled_area\n\n    h, w = size\n    target[\"size\"] = torch.tensor([h, w])\n\n    if \"masks\" in target:\n        target['masks'] = interpolate(\n            target['masks'][:, None].float(), size, mode=\"nearest\")[:, 0] > 0.5\n\n    return rescaled_image, target\n\n\ndef pad(image, target, padding):\n    # pad_left, pad_top, pad_right, pad_bottom\n    padded_image = F.pad(image, padding)\n    if target is None:\n        return padded_image, None\n    target = target.copy()\n    # should we do something wrt the original size?\n    w, h = padded_image.size\n\n    if \"boxes\" in target:\n        # correct xyxy from left and right paddings\n        target[\"boxes\"] += torch.tensor(\n            [padding[0], padding[1], padding[0], padding[1]])\n\n    target[\"size\"] = torch.tensor([h, w])\n    if \"masks\" in target:\n        # padding_left, padding_right, padding_top, padding_bottom\n        target['masks'] = torch.nn.functional.pad(\n            target['masks'],\n            (padding[0], padding[2], padding[1], padding[3]))\n    return padded_image, target\n\n\nclass RandomCrop:\n    def __init__(self, size, overflow_boxes=False):\n        # in hxw\n        self.size = size\n        self.overflow_boxes = overflow_boxes\n\n    def __call__(self, img, target):\n        region = T.RandomCrop.get_params(img, self.size)\n        return crop(img, target, region, self.overflow_boxes)\n\n\nclass RandomSizeCrop:\n    def __init__(self,\n                 min_size: Union[tuple, list, int],\n                 max_size: Union[tuple, list, int] = None,\n     
            overflow_boxes: bool = False):\n        if isinstance(min_size, int):\n            min_size = (min_size, min_size)\n        if isinstance(max_size, int):\n            max_size = (max_size, max_size)\n\n        self.min_size = min_size\n        self.max_size = max_size\n        self.overflow_boxes = overflow_boxes\n\n    def __call__(self, img: PIL.Image.Image, target: dict):\n        if self.max_size is None:\n            w = random.randint(min(self.min_size[0], img.width), img.width)\n            h = random.randint(min(self.min_size[1], img.height), img.height)\n        else:\n            w = random.randint(\n                min(self.min_size[0], img.width),\n                min(img.width, self.max_size[0]))\n            h = random.randint(\n                min(self.min_size[1], img.height),\n                min(img.height, self.max_size[1]))\n\n        region = T.RandomCrop.get_params(img, [h, w])\n        return crop(img, target, region, self.overflow_boxes)\n\n\nclass CenterCrop:\n    def __init__(self, size, overflow_boxes=False):\n        self.size = size\n        self.overflow_boxes = overflow_boxes\n\n    def __call__(self, img, target):\n        image_width, image_height = img.size\n        crop_height, crop_width = self.size\n        crop_top = int(round((image_height - crop_height) / 2.))\n        crop_left = int(round((image_width - crop_width) / 2.))\n        return crop(img, target, (crop_top, crop_left, crop_height, crop_width), self.overflow_boxes)\n\n\nclass RandomHorizontalFlip:\n    def __init__(self, p=0.5):\n        self.p = p\n\n    def __call__(self, img, target):\n        if random.random() < self.p:\n            return hflip(img, target)\n        return img, target\n\n\nclass RepeatUntilMaxObjects:\n    def __init__(self, transforms, num_max_objects):\n        self._num_max_objects = num_max_objects\n        self._transforms = transforms\n\n    def __call__(self, img, target):\n        num_objects = None\n        while num_objects is None or num_objects > self._num_max_objects:\n            img_trans, target_trans = self._transforms(img, target)\n            num_objects = len(target_trans['boxes'])\n        return img_trans, target_trans\n\n\nclass RandomResize:\n    def __init__(self, sizes, max_size=None):\n        assert isinstance(sizes, (list, tuple))\n        self.sizes = sizes\n        self.max_size = max_size\n\n    def __call__(self, img, target=None):\n        size = random.choice(self.sizes)\n        return resize(img, target, size, self.max_size)\n\n\nclass RandomResizeTargets:\n    def __init__(self, scale=0.5):\n        self.scalce = scale\n\n    def __call__(self, img, target=None):\n        img = F.to_tensor(img)\n        img_c, img_w, img_h = img.shape\n\n        rescaled_boxes = []\n        rescaled_box_images = []\n        for box in target['boxes']:\n            y1, x1, y2, x2 = box.int().tolist()\n            w = x2 - x1\n            h = y2 - y1\n\n            box_img = img[:, x1:x2, y1:y2]\n            random_scale = random.uniform(0.5, 2.0)\n            scaled_width = int(random_scale * w)\n            scaled_height = int(random_scale * h)\n\n            box_img = F.to_pil_image(box_img)\n            rescaled_box_image = F.resize(\n                box_img,\n                (scaled_width, scaled_height))\n            rescaled_box_images.append(F.to_tensor(rescaled_box_image))\n            rescaled_boxes.append([y1, x1, y1 + scaled_height, x1 + scaled_width])\n\n        for box in target['boxes']:\n            y1, x1, y2, x2 = 
box.int().tolist()\n            w = x2 - x1\n            h = y2 - y1\n\n            erase_value = torch.empty(\n                [img_c, w, h],\n                dtype=torch.float32).normal_()\n\n            img = F.erase(\n                img, x1, y1, w, h, erase_value, True)\n\n        for box, rescaled_box_image in zip(target['boxes'], rescaled_box_images):\n            y1, x1, y2, x2 = box.int().tolist()\n            w = x2 - x1\n            h = y2 - y1\n            _, scaled_width, scaled_height = rescaled_box_image.shape\n\n            rescaled_box_image = rescaled_box_image[\n                :,\n                :scaled_width - max(x1 + scaled_width - img_w, 0),\n                :scaled_height - max(y1 + scaled_height - img_h, 0)]\n\n            img[:, x1:x1 + scaled_width, y1:y1 + scaled_height] = rescaled_box_image\n\n        target['boxes'] = torch.tensor(rescaled_boxes).float()\n        img = F.to_pil_image(img)\n        return img, target\n\n\nclass RandomPad:\n    def __init__(self, max_size):\n        if isinstance(max_size, int):\n            max_size = (max_size, max_size)\n\n        self.max_size = max_size\n\n    def __call__(self, img, target):\n        w, h = img.size\n        pad_width = random.randint(0, max(self.max_size[0] - w, 0))\n        pad_height = random.randint(0, max(self.max_size[1] - h, 0))\n\n        pad_left = random.randint(0, pad_width)\n        pad_right = pad_width - pad_left\n        pad_top = random.randint(0, pad_height)\n        pad_bottom = pad_height - pad_top\n\n        padding = (pad_left, pad_top, pad_right, pad_bottom)\n\n        return pad(img, target, padding)\n\n\nclass RandomSelect:\n    \"\"\"\n    Randomly selects between transforms1 and transforms2,\n    with probability p for transforms1 and (1 - p) for transforms2\n    \"\"\"\n    def __init__(self, transforms1, transforms2, p=0.5):\n        self.transforms1 = transforms1\n        self.transforms2 = transforms2\n        self.p = p\n\n    def __call__(self, img, target):\n        if random.random() < self.p:\n            return self.transforms1(img, target)\n        return self.transforms2(img, target)\n\n\nclass ToTensor:\n    def __call__(self, img, target=None):\n        return F.to_tensor(img), target\n\n\nclass RandomErasing:\n\n    def __init__(self, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=0, inplace=False):\n        self.eraser = T.RandomErasing()\n        self.p = p\n        self.scale = scale\n        self.ratio = ratio\n        self.value = value\n        self.inplace = inplace\n\n    def __call__(self, img, target):\n        if random.uniform(0, 1) < self.p:\n            img = F.to_tensor(img)\n\n            x, y, h, w, v = self.eraser.get_params(\n                img, scale=self.scale, ratio=self.ratio, value=self.value)\n\n            img = F.erase(img, x, y, h, w, v, self.inplace)\n            img = F.to_pil_image(img)\n\n            # target\n            fields = ['boxes', \"labels\", \"area\", \"iscrowd\", \"ignore\", \"track_ids\"]\n\n            if 'boxes' in target:\n                erased_box = torch.tensor([[y, x, y + w, x + h]]).float()\n\n                lt = torch.max(erased_box[:, None, :2], target['boxes'][:, :2])  # [N,M,2]\n                rb = torch.min(erased_box[:, None, 2:], target['boxes'][:, 2:])  # [N,M,2]\n                wh = (rb - lt).clamp(min=0)  # [N,M,2]\n                inter = wh[:, :, 0] * wh[:, :, 1]  # [N,M]\n\n                keep = inter[0] <= 0.7 * target['area']\n\n                left = torch.logical_and(\n                  
  target['boxes'][:, 0] < erased_box[:, 0],\n                    target['boxes'][:, 2] > erased_box[:, 0])\n                left = torch.logical_and(left, inter[0].bool())\n\n                right = torch.logical_and(\n                    target['boxes'][:, 0] < erased_box[:, 2],\n                    target['boxes'][:, 2] > erased_box[:, 2])\n                right = torch.logical_and(right, inter[0].bool())\n\n                top = torch.logical_and(\n                    target['boxes'][:, 1] < erased_box[:, 1],\n                    target['boxes'][:, 3] > erased_box[:, 1])\n                top = torch.logical_and(top, inter[0].bool())\n\n                bottom = torch.logical_and(\n                    target['boxes'][:, 1] < erased_box[:, 3],\n                    target['boxes'][:, 3] > erased_box[:, 3])\n                bottom = torch.logical_and(bottom, inter[0].bool())\n\n                only_one_crop = (top.float() + bottom.float() + left.float() + right.float()) > 1\n                left[only_one_crop] = False\n                right[only_one_crop] = False\n                top[only_one_crop] = False\n                bottom[only_one_crop] = False\n\n                target['boxes'][:, 2][left] = erased_box[:, 0]\n                target['boxes'][:, 0][right] = erased_box[:, 2]\n                target['boxes'][:, 3][top] = erased_box[:, 1]\n                target['boxes'][:, 1][bottom] = erased_box[:, 3]\n\n                for field in fields:\n                    if field in target:\n                        target[field] = target[field][keep]\n\n        return img, target\n\n\nclass Normalize:\n    def __init__(self, mean, std):\n        self.mean = mean\n        self.std = std\n\n    def __call__(self, image, target=None):\n        image = F.normalize(image, mean=self.mean, std=self.std)\n        if target is None:\n            return image, None\n        target = target.copy()\n        h, w = image.shape[-2:]\n        if \"boxes\" in target:\n            boxes = target[\"boxes\"]\n            boxes = box_xyxy_to_cxcywh(boxes)\n            boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)\n            target[\"boxes\"] = boxes\n        return image, target\n\n\nclass Compose:\n    def __init__(self, transforms):\n        self.transforms = transforms\n\n    def __call__(self, image, target=None):\n        for t in self.transforms:\n            image, target = t(image, target)\n        return image, target\n\n    def __repr__(self):\n        format_string = self.__class__.__name__ + \"(\"\n        for t in self.transforms:\n            format_string += \"\\n\"\n            format_string += \"    {0}\".format(t)\n        format_string += \"\\n)\"\n        return format_string\n"
  },
  {
    "path": "src/trackformer/engine.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nTrain and eval functions used in main.py\n\"\"\"\nimport logging\nimport math\nimport os\nimport sys\nfrom typing import Iterable\n\nimport torch\nfrom track import ex\n\nfrom .datasets import get_coco_api_from_dataset\nfrom .datasets.coco_eval import CocoEvaluator\nfrom .datasets.panoptic_eval import PanopticEvaluator\nfrom .models.detr_segmentation import DETRSegm\nfrom .util import misc as utils\nfrom .util.box_ops import box_iou\nfrom .util.track_utils import evaluate_mot_accums\nfrom .vis import vis_results\n\n\ndef make_results(outputs, targets, postprocessors, tracking, return_only_orig=True):\n    target_sizes = torch.stack([t[\"size\"] for t in targets], dim=0)\n    orig_target_sizes = torch.stack([t[\"orig_size\"] for t in targets], dim=0)\n\n    # remove placeholder track queries\n    # results_mask = None\n    # if tracking:\n    #     results_mask = [~t['track_queries_placeholder_mask'] for t in targets]\n    #     for target, res_mask in zip(targets, results_mask):\n    #         target['track_queries_mask'] = target['track_queries_mask'][res_mask]\n    #         target['track_queries_fal_pos_mask'] = target['track_queries_fal_pos_mask'][res_mask]\n\n    # results = None\n    # if not return_only_orig:\n    #     results = postprocessors['bbox'](outputs, target_sizes, results_mask)\n    # results_orig = postprocessors['bbox'](outputs, orig_target_sizes, results_mask)\n\n    # if 'segm' in postprocessors:\n    #     results_orig = postprocessors['segm'](\n    #         results_orig, outputs, orig_target_sizes, target_sizes, results_mask)\n    #     if not return_only_orig:\n    #         results = postprocessors['segm'](\n    #             results, outputs, target_sizes, target_sizes, results_mask)\n\n    results = None\n    if not return_only_orig:\n        results = postprocessors['bbox'](outputs, target_sizes)\n    results_orig = postprocessors['bbox'](outputs, orig_target_sizes)\n\n    if 'segm' in postprocessors:\n        results_orig = postprocessors['segm'](\n            results_orig, outputs, orig_target_sizes, target_sizes)\n        if not return_only_orig:\n            results = postprocessors['segm'](\n                results, outputs, target_sizes, target_sizes)\n\n    if results is None:\n        return results_orig, results\n\n    for i, result in enumerate(results):\n        target = targets[i]\n        target_size = target_sizes[i].unsqueeze(dim=0)\n\n        result['target'] = {}\n        result['boxes'] = result['boxes'].cpu()\n\n        # revert boxes for visualization\n        for key in ['boxes', 'track_query_boxes']:\n            if key in target:\n                target[key] = postprocessors['bbox'].process_boxes(\n                    target[key], target_size)[0].cpu()\n\n        if tracking and 'prev_target' in target:\n            if 'prev_prev_target' in target:\n                target['prev_prev_target']['boxes'] = postprocessors['bbox'].process_boxes(\n                    target['prev_prev_target']['boxes'],\n                    target['prev_prev_target']['size'].unsqueeze(dim=0))[0].cpu()\n\n            target['prev_target']['boxes'] = postprocessors['bbox'].process_boxes(\n                target['prev_target']['boxes'],\n                target['prev_target']['size'].unsqueeze(dim=0))[0].cpu()\n\n            if 'track_query_match_ids' in target and len(target['track_query_match_ids']):\n                track_queries_iou, _ = box_iou(\n                    
target['boxes'][target['track_query_match_ids']],\n                    result['boxes'])\n\n                box_ids = [box_id\n                    for box_id, (is_track_query, is_fals_pos_track_query)\n                    in enumerate(zip(target['track_queries_mask'], target['track_queries_fal_pos_mask']))\n                    if is_track_query and not is_fals_pos_track_query]\n\n                result['track_queries_with_id_iou'] = torch.diagonal(track_queries_iou[:, box_ids])\n\n    return results_orig, results\n\n\ndef train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module, postprocessors,\n                    data_loader: Iterable, optimizer: torch.optim.Optimizer,\n                    device: torch.device, epoch: int, visualizers: dict, args):\n\n    vis_iter_metrics = None\n    if visualizers:\n        vis_iter_metrics = visualizers['iter_metrics']\n\n    model.train()\n    criterion.train()\n    metric_logger = utils.MetricLogger(\n        args.vis_and_log_interval,\n        delimiter=\"  \",\n        vis=vis_iter_metrics,\n        debug=args.debug)\n    metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))\n    metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))\n\n    for i, (samples, targets) in enumerate(metric_logger.log_every(data_loader, epoch)):\n        samples = samples.to(device)\n        targets = [utils.nested_dict_to_device(t, device) for t in targets]\n\n        # in order to be able to modify targets inside the forward call we need\n        # to pass it through as torch.nn.parallel.DistributedDataParallel only\n        # passes copies\n        outputs, targets, *_ = model(samples, targets)\n\n        loss_dict = criterion(outputs, targets)\n        weight_dict = criterion.weight_dict\n        losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict)\n\n        # reduce losses over all GPUs for logging purposes\n        loss_dict_reduced = utils.reduce_dict(loss_dict)\n        loss_dict_reduced_unscaled = {\n            f'{k}_unscaled': v for k, v in loss_dict_reduced.items()}\n        loss_dict_reduced_scaled = {\n            k: v * weight_dict[k] for k, v in loss_dict_reduced.items() if k in weight_dict}\n        losses_reduced_scaled = sum(loss_dict_reduced_scaled.values())\n\n        loss_value = losses_reduced_scaled.item()\n\n        if not math.isfinite(loss_value):\n            print(f\"Loss is {loss_value}, stopping training\")\n            print(loss_dict_reduced)\n            sys.exit(1)\n\n        optimizer.zero_grad()\n        losses.backward()\n        if args.clip_max_norm > 0:\n            torch.nn.utils.clip_grad_norm_(model.parameters(), args.clip_max_norm)\n        optimizer.step()\n\n        metric_logger.update(loss=loss_value,\n                             **loss_dict_reduced_scaled,\n                             **loss_dict_reduced_unscaled)\n        metric_logger.update(class_error=loss_dict_reduced['class_error'])\n        metric_logger.update(lr=optimizer.param_groups[0][\"lr\"],\n                             lr_backbone=optimizer.param_groups[1][\"lr\"])\n\n        if visualizers and (i == 0 or not i % args.vis_and_log_interval):\n            _, results = make_results(\n                outputs, targets, postprocessors, args.tracking, return_only_orig=False)\n\n            vis_results(\n                visualizers['example_results'],\n                samples.unmasked_tensor(0),\n                results[0],\n                
targets[0],\n                args.tracking)\n\n    # gather the stats from all processes\n    metric_logger.synchronize_between_processes()\n    print(\"Averaged stats:\", metric_logger)\n\n    return {k: meter.global_avg for k, meter in metric_logger.meters.items()}\n\n\n@torch.no_grad()\ndef evaluate(model, criterion, postprocessors, data_loader, device,\n             output_dir: str, visualizers: dict, args, epoch: int = None):\n    model.eval()\n    criterion.eval()\n\n    metric_logger = utils.MetricLogger(\n        args.vis_and_log_interval,\n        delimiter=\"  \",\n        debug=args.debug)\n    metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))\n\n    base_ds = get_coco_api_from_dataset(data_loader.dataset)\n    iou_types = tuple(k for k in ('bbox', 'segm') if k in postprocessors.keys())\n    coco_evaluator = CocoEvaluator(base_ds, iou_types)\n    # coco_evaluator.coco_eval[iou_types[0]].params.iouThrs = [0, 0.1, 0.5, 0.75]\n\n    panoptic_evaluator = None\n    if 'panoptic' in postprocessors.keys():\n        panoptic_evaluator = PanopticEvaluator(\n            data_loader.dataset.ann_file,\n            data_loader.dataset.ann_folder,\n            output_dir=os.path.join(output_dir, \"panoptic_eval\"),\n        )\n\n    for i, (samples, targets) in enumerate(metric_logger.log_every(data_loader, 'Test:')):\n        samples = samples.to(device)\n        targets = [utils.nested_dict_to_device(t, device) for t in targets]\n\n        outputs, targets, *_ = model(samples, targets)\n\n        loss_dict = criterion(outputs, targets)\n        weight_dict = criterion.weight_dict\n\n        # reduce losses over all GPUs for logging purposes\n        loss_dict_reduced = utils.reduce_dict(loss_dict)\n        loss_dict_reduced_scaled = {k: v * weight_dict[k]\n                                    for k, v in loss_dict_reduced.items() if k in weight_dict}\n        loss_dict_reduced_unscaled = {f'{k}_unscaled': v\n                                      for k, v in loss_dict_reduced.items()}\n        metric_logger.update(loss=sum(loss_dict_reduced_scaled.values()),\n                             **loss_dict_reduced_scaled,\n                             **loss_dict_reduced_unscaled)\n        metric_logger.update(class_error=loss_dict_reduced['class_error'])\n\n        if visualizers and (i == 0 or not i % args.vis_and_log_interval):\n            results_orig, results = make_results(\n                outputs, targets, postprocessors, args.tracking, return_only_orig=False)\n\n            vis_results(\n                visualizers['example_results'],\n                samples.unmasked_tensor(0),\n                results[0],\n                targets[0],\n                args.tracking)\n        else:\n            results_orig, _ = make_results(outputs, targets, postprocessors, args.tracking)\n\n        # TODO. 
remove cocoDts from coco eval and change example results output\n        if coco_evaluator is not None:\n            results_orig = {\n                target['image_id'].item(): output\n                for target, output in zip(targets, results_orig)}\n\n            coco_evaluator.update(results_orig)\n\n        if panoptic_evaluator is not None:\n            target_sizes = torch.stack([t[\"size\"] for t in targets], dim=0)\n            orig_target_sizes = torch.stack([t[\"orig_size\"] for t in targets], dim=0)\n\n            res_pano = postprocessors[\"panoptic\"](outputs, target_sizes, orig_target_sizes)\n            for j, target in enumerate(targets):\n                image_id = target[\"image_id\"].item()\n                file_name = f\"{image_id:012d}.png\"\n                res_pano[j][\"image_id\"] = image_id\n                res_pano[j][\"file_name\"] = file_name\n\n            panoptic_evaluator.update(res_pano)\n\n    # gather the stats from all processes\n    metric_logger.synchronize_between_processes()\n    print(\"Averaged stats:\", metric_logger)\n    if coco_evaluator is not None:\n        coco_evaluator.synchronize_between_processes()\n    if panoptic_evaluator is not None:\n        panoptic_evaluator.synchronize_between_processes()\n\n    # accumulate predictions from all images\n    if coco_evaluator is not None:\n        coco_evaluator.accumulate()\n        coco_evaluator.summarize()\n    panoptic_res = None\n    if panoptic_evaluator is not None:\n        panoptic_res = panoptic_evaluator.summarize()\n    stats = {k: meter.global_avg for k, meter in metric_logger.meters.items()}\n    if coco_evaluator is not None:\n        if 'bbox' in coco_evaluator.coco_eval:\n            stats['coco_eval_bbox'] = coco_evaluator.coco_eval['bbox'].stats.tolist()\n        if 'segm' in coco_evaluator.coco_eval:\n            stats['coco_eval_masks'] = coco_evaluator.coco_eval['segm'].stats.tolist()\n    if panoptic_res is not None:\n        stats['PQ_all'] = panoptic_res[\"All\"]\n        stats['PQ_th'] = panoptic_res[\"Things\"]\n        stats['PQ_st'] = panoptic_res[\"Stuff\"]\n\n    # TRACK EVAL\n    if args.tracking and args.tracking_eval:\n        stats['track_bbox'] = []\n\n        ex.logger = logging.getLogger(\"submitit\")\n\n        # distribute evaluation of seqs to processes\n        seqs = data_loader.dataset.sequences\n        seqs_per_rank = {i: [] for i in range(utils.get_world_size())}\n        for i, seq in enumerate(seqs):\n            rank = i % utils.get_world_size()\n            seqs_per_rank[rank].append(seq)\n\n        # only evaluate one seq in debug mode\n        if args.debug:\n            seqs_per_rank = {k: v[:1] for k, v in seqs_per_rank.items()}\n            seqs = [s for ss in seqs_per_rank.values() for s in ss]\n\n        dataset_name = seqs_per_rank[utils.get_rank()]\n        if not dataset_name:\n            dataset_name = seqs_per_rank[0]\n\n        model_without_ddp = model\n        if args.distributed:\n            model_without_ddp = model.module\n\n        # mask prediction is too slow and consumes a lot of memory to\n        # run it during tracking training.\n        if isinstance(model, DETRSegm):\n            model_without_ddp = model_without_ddp.detr\n\n        obj_detector_model = {\n            'model': model_without_ddp,\n            'post': postprocessors,\n            'img_transform': args.img_transform}\n\n        config_updates = {\n            'seed': None,\n            'dataset_name': dataset_name,\n            'frame_range': 
data_loader.dataset.frame_range,\n            'obj_detector_model': obj_detector_model}\n        run = ex.run(config_updates=config_updates)\n\n        mot_accums = utils.all_gather(run.result)[:len(seqs)]\n        mot_accums = [item for sublist in mot_accums for item in sublist]\n\n        # we compute seq results on multiple nodes but evaluate the accumulated\n        # results because seqs are weighted differently (seq length)\n        eval_summary, eval_summary_str = evaluate_mot_accums(\n            mot_accums, seqs)\n        print(eval_summary_str)\n\n        for metric in ['mota', 'idf1']:\n            eval_m = eval_summary[metric]['OVERALL']\n            stats['track_bbox'].append(eval_m)\n\n    eval_stats = stats['coco_eval_bbox'][:3]\n    if 'coco_eval_masks' in stats:\n        eval_stats.extend(stats['coco_eval_masks'][:3])\n    if 'track_bbox' in stats:\n        eval_stats.extend(stats['track_bbox'])\n\n    # VIS\n    if visualizers:\n        vis_epoch = visualizers['epoch_metrics']\n        y_data = [stats[legend_name] for legend_name in vis_epoch.viz_opts['legend']]\n\n        vis_epoch.plot(y_data, epoch)\n\n        visualizers['epoch_eval'].plot(eval_stats, epoch)\n\n    if args.debug:\n        exit()\n\n    return eval_stats, coco_evaluator\n"
  },
  {
    "path": "src/trackformer/models/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport torch\n\nfrom .backbone import build_backbone\nfrom .deformable_detr import DeformableDETR, DeformablePostProcess\nfrom .deformable_transformer import build_deforamble_transformer\nfrom .detr import DETR, PostProcess, SetCriterion\nfrom .detr_segmentation import (DeformableDETRSegm, DeformableDETRSegmTracking,\n                                DETRSegm, DETRSegmTracking,\n                                PostProcessPanoptic, PostProcessSegm)\nfrom .detr_tracking import DeformableDETRTracking, DETRTracking\nfrom .matcher import build_matcher\nfrom .transformer import build_transformer\n\n\ndef build_model(args):\n    if args.dataset == 'coco':\n        num_classes = 91\n    elif args.dataset == 'coco_panoptic':\n        num_classes = 250\n    elif args.dataset in ['coco_person', 'mot', 'mot_crowdhuman', 'crowdhuman', 'mot_coco_person']:\n        # num_classes = 91\n        num_classes = 20\n        # num_classes = 1\n    else:\n        raise NotImplementedError\n\n    device = torch.device(args.device)\n    backbone = build_backbone(args)\n    matcher = build_matcher(args)\n\n    detr_kwargs = {\n        'backbone': backbone,\n        'num_classes': num_classes - 1 if args.focal_loss else num_classes,\n        'num_queries': args.num_queries,\n        'aux_loss': args.aux_loss,\n        'overflow_boxes': args.overflow_boxes}\n\n    tracking_kwargs = {\n        'track_query_false_positive_prob': args.track_query_false_positive_prob,\n        'track_query_false_negative_prob': args.track_query_false_negative_prob,\n        'matcher': matcher,\n        'backprop_prev_frame': args.track_backprop_prev_frame,}\n\n    mask_kwargs = {\n        'freeze_detr': args.freeze_detr}\n\n    if args.deformable:\n        transformer = build_deforamble_transformer(args)\n\n        detr_kwargs['transformer'] = transformer\n        detr_kwargs['num_feature_levels'] = args.num_feature_levels\n        detr_kwargs['with_box_refine'] = args.with_box_refine\n        detr_kwargs['two_stage'] = args.two_stage\n        detr_kwargs['multi_frame_attention'] = args.multi_frame_attention\n        detr_kwargs['multi_frame_encoding'] = args.multi_frame_encoding\n        detr_kwargs['merge_frame_features'] = args.merge_frame_features\n\n        if args.tracking:\n            if args.masks:\n                model = DeformableDETRSegmTracking(mask_kwargs, tracking_kwargs, detr_kwargs)\n            else:\n                model = DeformableDETRTracking(tracking_kwargs, detr_kwargs)\n        else:\n            if args.masks:\n                model = DeformableDETRSegm(mask_kwargs, detr_kwargs)\n            else:\n                model = DeformableDETR(**detr_kwargs)\n    else:\n        transformer = build_transformer(args)\n\n        detr_kwargs['transformer'] = transformer\n\n        if args.tracking:\n            if args.masks:\n                model = DETRSegmTracking(mask_kwargs, tracking_kwargs, detr_kwargs)\n            else:\n                model = DETRTracking(tracking_kwargs, detr_kwargs)\n        else:\n            if args.masks:\n                model = DETRSegm(mask_kwargs, detr_kwargs)\n            else:\n                model = DETR(**detr_kwargs)\n\n    weight_dict = {'loss_ce': args.cls_loss_coef,\n                   'loss_bbox': args.bbox_loss_coef,\n                   'loss_giou': args.giou_loss_coef,}\n\n    if args.masks:\n        weight_dict[\"loss_mask\"] = args.mask_loss_coef\n        weight_dict[\"loss_dice\"] = 
args.dice_loss_coef\n\n    # TODO this is a hack\n    if args.aux_loss:\n        aux_weight_dict = {}\n        for i in range(args.dec_layers - 1):\n            aux_weight_dict.update({k + f'_{i}': v for k, v in weight_dict.items()})\n\n        if args.two_stage:\n            aux_weight_dict.update({k + f'_enc': v for k, v in weight_dict.items()})\n        weight_dict.update(aux_weight_dict)\n\n    losses = ['labels', 'boxes', 'cardinality']\n    if args.masks:\n        losses.append('masks')\n\n    criterion = SetCriterion(\n        num_classes,\n        matcher=matcher,\n        weight_dict=weight_dict,\n        eos_coef=args.eos_coef,\n        losses=losses,\n        focal_loss=args.focal_loss,\n        focal_alpha=args.focal_alpha,\n        focal_gamma=args.focal_gamma,\n        tracking=args.tracking,\n        track_query_false_positive_eos_weight=args.track_query_false_positive_eos_weight,)\n    criterion.to(device)\n\n    if args.focal_loss:\n        postprocessors = {'bbox': DeformablePostProcess()}\n    else:\n        postprocessors = {'bbox': PostProcess()}\n    if args.masks:\n        postprocessors['segm'] = PostProcessSegm()\n        if args.dataset == \"coco_panoptic\":\n            is_thing_map = {i: i <= 90 for i in range(201)}\n            postprocessors[\"panoptic\"] = PostProcessPanoptic(is_thing_map, threshold=0.85)\n\n    return model, criterion, postprocessors\n"
  },
  {
    "path": "src/trackformer/models/backbone.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nBackbone modules.\n\"\"\"\nfrom typing import Dict, List\n\nimport torch\nimport torch.nn.functional as F\nimport torchvision\nfrom torch import nn\nfrom torchvision.models._utils import IntermediateLayerGetter\nfrom torchvision.ops.feature_pyramid_network import (FeaturePyramidNetwork,\n                                                     LastLevelMaxPool)\n\nfrom ..util.misc import NestedTensor, is_main_process\nfrom .position_encoding import build_position_encoding\n\n\nclass FrozenBatchNorm2d(torch.nn.Module):\n    \"\"\"\n    BatchNorm2d where the batch statistics and the affine parameters are fixed.\n\n    Copy-paste from torchvision.misc.ops with added eps before rqsrt,\n    without which any other models than torchvision.models.resnet[18,34,50,101]\n    produce nans.\n    \"\"\"\n\n    def __init__(self, n):\n        super(FrozenBatchNorm2d, self).__init__()\n        self.register_buffer(\"weight\", torch.ones(n))\n        self.register_buffer(\"bias\", torch.zeros(n))\n        self.register_buffer(\"running_mean\", torch.zeros(n))\n        self.register_buffer(\"running_var\", torch.ones(n))\n\n    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,\n                              missing_keys, unexpected_keys, error_msgs):\n        num_batches_tracked_key = prefix + 'num_batches_tracked'\n        if num_batches_tracked_key in state_dict:\n            del state_dict[num_batches_tracked_key]\n\n        super(FrozenBatchNorm2d, self)._load_from_state_dict(\n            state_dict, prefix, local_metadata, strict,\n            missing_keys, unexpected_keys, error_msgs)\n\n    def forward(self, x):\n        # move reshapes to the beginning\n        # to make it fuser-friendly\n        w = self.weight.reshape(1, -1, 1, 1)\n        b = self.bias.reshape(1, -1, 1, 1)\n        rv = self.running_var.reshape(1, -1, 1, 1)\n        rm = self.running_mean.reshape(1, -1, 1, 1)\n        eps = 1e-5\n        scale = w * (rv + eps).rsqrt()\n        bias = b - rm * scale\n        return x * scale + bias\n\n\nclass BackboneBase(nn.Module):\n\n    def __init__(self, backbone: nn.Module, train_backbone: bool,\n                 return_interm_layers: bool):\n        super().__init__()\n        for name, parameter in backbone.named_parameters():\n            if (not train_backbone\n                or 'layer2' not in name\n                and 'layer3' not in name\n                and 'layer4' not in name):\n                parameter.requires_grad_(False)\n        if return_interm_layers:\n            return_layers = {\"layer1\": \"0\", \"layer2\": \"1\", \"layer3\": \"2\", \"layer4\": \"3\"}\n            # return_layers = {\"layer2\": \"0\", \"layer3\": \"1\", \"layer4\": \"2\"}\n            self.strides = [4, 8, 16, 32]\n            self.num_channels = [256, 512, 1024, 2048]\n        else:\n            return_layers = {'layer4': \"0\"}\n            self.strides = [32]\n            self.num_channels = [2048]\n        self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n\n    def forward(self, tensor_list: NestedTensor):\n        xs = self.body(tensor_list.tensors)\n        out: Dict[str, NestedTensor] = {}\n        for name, x in xs.items():\n            m = tensor_list.mask\n            assert m is not None\n            mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0]\n            out[name] = NestedTensor(x, mask)\n        return 
out\n\n\nclass Backbone(BackboneBase):\n    \"\"\"ResNet backbone with frozen BatchNorm.\"\"\"\n    def __init__(self, name: str,\n                 train_backbone: bool,\n                 return_interm_layers: bool,\n                 dilation: bool):\n        norm_layer = FrozenBatchNorm2d\n        backbone = getattr(torchvision.models, name)(\n            replace_stride_with_dilation=[False, False, dilation],\n            pretrained=is_main_process(), norm_layer=norm_layer)\n        super().__init__(backbone, train_backbone,\n                         return_interm_layers)\n        if dilation:\n            self.strides[-1] = self.strides[-1] // 2\n\n\nclass Joiner(nn.Sequential):\n    def __init__(self, backbone, position_embedding):\n        super().__init__(backbone, position_embedding)\n        self.strides = backbone.strides\n        self.num_channels = backbone.num_channels\n\n    def forward(self, tensor_list: NestedTensor):\n        xs = self[0](tensor_list)\n        out: List[NestedTensor] = []\n        pos = []\n        for x in xs.values():\n            out.append(x)\n            # position encoding\n            pos.append(self[1](x).to(x.tensors.dtype))\n\n        return out, pos\n\n\ndef build_backbone(args):\n    position_embedding = build_position_encoding(args)\n    train_backbone = args.lr_backbone > 0\n    return_interm_layers = args.masks or (args.num_feature_levels > 1)\n    backbone = Backbone(args.backbone,\n                        train_backbone,\n                        return_interm_layers,\n                        args.dilation)\n    model = Joiner(backbone, position_embedding)\n    return model\n"
  },
  {
    "path": "src/trackformer/models/deformable_detr.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\n\"\"\"\nDeformable DETR model and criterion classes.\n\"\"\"\nimport copy\nimport math\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom ..util import box_ops\nfrom ..util.misc import NestedTensor, inverse_sigmoid, nested_tensor_from_tensor_list\nfrom .detr import DETR, PostProcess, SetCriterion\n\n\ndef _get_clones(module, N):\n    return nn.ModuleList([copy.deepcopy(module) for i in range(N)])\n\n\nclass DeformableDETR(DETR):\n    \"\"\" This is the Deformable DETR module that performs object detection \"\"\"\n    def __init__(self, backbone, transformer, num_classes, num_queries, num_feature_levels,\n                 aux_loss=True, with_box_refine=False, two_stage=False, overflow_boxes=False,\n                 multi_frame_attention=False, multi_frame_encoding=False, merge_frame_features=False):\n        \"\"\" Initializes the model.\n        Parameters:\n            backbone: torch module of the backbone to be used. See backbone.py\n            transformer: torch module of the transformer architecture. See transformer.py\n            num_classes: number of object classes\n            num_queries: number of object queries, ie detection slot. This is the maximal\n                         number of objects DETR can detect in a single image. 
For COCO,\n                         we recommend 100 queries.\n            aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.\n            with_box_refine: iterative bounding box refinement\n            two_stage: two-stage Deformable DETR\n        \"\"\"\n        super().__init__(backbone, transformer, num_classes, num_queries, aux_loss)\n\n        self.merge_frame_features = merge_frame_features\n        self.multi_frame_attention = multi_frame_attention\n        self.multi_frame_encoding = multi_frame_encoding\n        self.overflow_boxes = overflow_boxes\n        self.num_feature_levels = num_feature_levels\n        if not two_stage:\n            self.query_embed = nn.Embedding(num_queries, self.hidden_dim * 2)\n        num_channels = backbone.num_channels[-3:]\n        if num_feature_levels > 1:\n            # return_layers = {\"layer2\": \"0\", \"layer3\": \"1\", \"layer4\": \"2\"}\n            num_backbone_outs = len(backbone.strides) - 1\n\n            input_proj_list = []\n            for i in range(num_backbone_outs):\n                in_channels = num_channels[i]\n                input_proj_list.append(nn.Sequential(\n                    nn.Conv2d(in_channels, self.hidden_dim, kernel_size=1),\n                    nn.GroupNorm(32, self.hidden_dim),\n                ))\n            for _ in range(num_feature_levels - num_backbone_outs):\n                input_proj_list.append(nn.Sequential(\n                    nn.Conv2d(in_channels, self.hidden_dim, kernel_size=3, stride=2, padding=1),\n                    nn.GroupNorm(32, self.hidden_dim),\n                ))\n                in_channels = self.hidden_dim\n            self.input_proj = nn.ModuleList(input_proj_list)\n        else:\n            self.input_proj = nn.ModuleList([\n                nn.Sequential(\n                    nn.Conv2d(num_channels[0], self.hidden_dim, kernel_size=1),\n                    nn.GroupNorm(32, self.hidden_dim),\n                )])\n        self.with_box_refine = with_box_refine\n        self.two_stage = two_stage\n\n        prior_prob = 0.01\n        bias_value = -math.log((1 - prior_prob) / prior_prob)\n        self.class_embed.bias.data = torch.ones_like(self.class_embed.bias) * bias_value\n        nn.init.constant_(self.bbox_embed.layers[-1].weight.data, 0)\n        nn.init.constant_(self.bbox_embed.layers[-1].bias.data, 0)\n        for proj in self.input_proj:\n            nn.init.xavier_uniform_(proj[0].weight, gain=1)\n            nn.init.constant_(proj[0].bias, 0)\n\n        # if two-stage, the last class_embed and bbox_embed is for\n        # region proposal generation\n        num_pred = transformer.decoder.num_layers\n        if two_stage:\n            num_pred += 1\n\n        if with_box_refine:\n            self.class_embed = _get_clones(self.class_embed, num_pred)\n            self.bbox_embed = _get_clones(self.bbox_embed, num_pred)\n            nn.init.constant_(self.bbox_embed[0].layers[-1].bias.data[2:], -2.0)\n            # hack implementation for iterative bounding box refinement\n            self.transformer.decoder.bbox_embed = self.bbox_embed\n        else:\n            nn.init.constant_(self.bbox_embed.layers[-1].bias.data[2:], -2.0)\n            self.class_embed = nn.ModuleList([self.class_embed for _ in range(num_pred)])\n            self.bbox_embed = nn.ModuleList([self.bbox_embed for _ in range(num_pred)])\n            self.transformer.decoder.bbox_embed = None\n        if two_stage:\n            # hack implementation for two-stage\n 
           self.transformer.decoder.class_embed = self.class_embed\n            for box_embed in self.bbox_embed:\n                nn.init.constant_(box_embed.layers[-1].bias.data[2:], 0.0)\n\n        if self.merge_frame_features:\n            self.merge_features = nn.Conv2d(self.hidden_dim * 2, self.hidden_dim, kernel_size=1)\n            self.merge_features = _get_clones(self.merge_features, num_feature_levels)\n\n    # def fpn_channels(self):\n    #     \"\"\" Returns FPN channels. \"\"\"\n    #     num_backbone_outs = len(self.backbone.strides)\n    #     return [self.hidden_dim, ] * num_backbone_outs\n\n    def forward(self, samples: NestedTensor, targets: list = None, prev_features=None):\n        \"\"\" The forward expects a NestedTensor, which consists of:\n               - samples.tensors: batched images, of shape [batch_size x 3 x H x W]\n               - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels\n\n            It returns a dict with the following elements:\n               - \"pred_logits\": the classification logits (including no-object) for all queries.\n                                Shape= [batch_size x num_queries x (num_classes + 1)]\n               - \"pred_boxes\": The normalized boxes coordinates for all queries, represented as\n                               (center_x, center_y, height, width). These values are normalized in [0, 1],\n                               relative to the size of each individual image (disregarding possible padding).\n                               See PostProcess for information on how to retrieve the unnormalized bounding box.\n               - \"aux_outputs\": Optional, only returned when auxilary losses are activated. It is a list of\n                                dictionnaries containing the two above keys for each decoder layer.\n        \"\"\"\n        if not isinstance(samples, NestedTensor):\n            samples = nested_tensor_from_tensor_list(samples)\n        features, pos = self.backbone(samples)\n\n        features_all = features\n        # pos_all = pos\n        # return_layers = {\"layer2\": \"0\", \"layer3\": \"1\", \"layer4\": \"2\"}\n        features = features[-3:]\n        # pos = pos[-3:]\n\n        if prev_features is None:\n            prev_features = features\n        else:\n            prev_features = prev_features[-3:]\n\n        # srcs = []\n        # masks = []\n        src_list = []\n        mask_list = []\n        pos_list = []\n        # for l, (feat, prev_feat) in enumerate(zip(features, prev_features)):\n\n        frame_features = [prev_features, features]\n        if not self.multi_frame_attention:\n            frame_features = [features]\n\n        for frame, frame_feat in enumerate(frame_features):\n            if self.multi_frame_attention and self.multi_frame_encoding:\n                pos_list.extend([p[:, frame] for p in pos[-3:]])\n            else:\n                pos_list.extend(pos[-3:])\n\n            # src, mask = feat.decompose()\n\n            # prev_src, _ = prev_feat.decompose()\n\n            for l, feat in enumerate(frame_feat):\n                src, mask = feat.decompose()\n\n                if self.merge_frame_features:\n                    prev_src, _ = prev_features[l].decompose()\n                    src_list.append(self.merge_features[l](torch.cat([self.input_proj[l](src), self.input_proj[l](prev_src)], dim=1)))\n                else:\n                    src_list.append(self.input_proj[l](src))\n\n                
mask_list.append(mask)\n\n            # if hasattr(self, 'merge_features'):\n            #     srcs.append(self.merge_features[l](torch.cat([self.input_proj[l](src), self.input_proj[l](prev_src)], dim=1)))\n            # else:\n            #     srcs.append(self.input_proj[l](src))\n\n            # masks.append(mask)\n                assert mask is not None\n\n            if self.num_feature_levels > len(frame_feat):\n                _len_srcs = len(frame_feat)\n                for l in range(_len_srcs, self.num_feature_levels):\n                    if l == _len_srcs:\n                        # src = self.input_proj[l](frame_feat[-1].tensors)\n                        # if hasattr(self, 'merge_features'):\n                        #     src = self.merge_features[l](torch.cat([self.input_proj[l](features[-1].tensors), self.input_proj[l](prev_features[-1].tensors)], dim=1))\n                        # else:\n                        #     src = self.input_proj[l](features[-1].tensors)\n\n                        if self.merge_frame_features:\n                            src = self.merge_features[l](torch.cat([self.input_proj[l](frame_feat[-1].tensors), self.input_proj[l](prev_features[-1].tensors)], dim=1))\n                        else:\n                            src = self.input_proj[l](frame_feat[-1].tensors)\n                    else:\n                        src = self.input_proj[l](src_list[-1])\n                        # src = self.input_proj[l](srcs[-1])\n                    # m = samples.mask\n                    _, m = frame_feat[0].decompose()\n                    mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0]\n\n                    pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype)\n                    src_list.append(src)\n                    mask_list.append(mask)\n                    if self.multi_frame_attention and self.multi_frame_encoding:\n                        pos_list.append(pos_l[:, frame])\n                    else:\n                        pos_list.append(pos_l)\n\n        query_embeds = None\n        if not self.two_stage:\n            query_embeds = self.query_embed.weight\n        hs, memory, init_reference, inter_references, enc_outputs_class, enc_outputs_coord_unact = \\\n            self.transformer(src_list, mask_list, pos_list, query_embeds, targets)\n\n        outputs_classes = []\n        outputs_coords = []\n        for lvl in range(hs.shape[0]):\n            if lvl == 0:\n                reference = init_reference\n            else:\n                reference = inter_references[lvl - 1]\n            reference = inverse_sigmoid(reference)\n            outputs_class = self.class_embed[lvl](hs[lvl])\n            tmp = self.bbox_embed[lvl](hs[lvl])\n            if reference.shape[-1] == 4:\n                tmp += reference\n            else:\n                assert reference.shape[-1] == 2\n                tmp[..., :2] += reference\n            outputs_coord = tmp.sigmoid()\n            outputs_classes.append(outputs_class)\n            outputs_coords.append(outputs_coord)\n        outputs_class = torch.stack(outputs_classes)\n        outputs_coord = torch.stack(outputs_coords)\n\n        out = {'pred_logits': outputs_class[-1],\n               'pred_boxes': outputs_coord[-1],\n               'hs_embed': hs[-1]}\n\n        if self.aux_loss:\n            out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord)\n\n        if self.two_stage:\n            enc_outputs_coord = 
enc_outputs_coord_unact.sigmoid()\n            out['enc_outputs'] = {'pred_logits': enc_outputs_class, 'pred_boxes': enc_outputs_coord}\n\n        offset = 0\n        memory_slices = []\n        batch_size, _, channels = memory.shape\n        for src in src_list:\n            _, _, height, width = src.shape\n            memory_slice = memory[:, offset:offset + height * width].permute(0, 2, 1).view(\n                batch_size, channels, height, width)\n            memory_slices.append(memory_slice)\n            offset += height * width\n\n        memory = memory_slices\n        # memory = memory_slices[-1]\n        # features = [NestedTensor(memory_slide) for memory_slide in memory_slices]\n\n        return out, targets, features_all, memory, hs\n\n    @torch.jit.unused\n    def _set_aux_loss(self, outputs_class, outputs_coord):\n        # this is a workaround to make torchscript happy, as torchscript\n        # doesn't support dictionary with non-homogeneous values, such\n        # as a dict having both a Tensor and a list.\n        return [{'pred_logits': a, 'pred_boxes': b}\n                for a, b in zip(outputs_class[:-1], outputs_coord[:-1])]\n\n\nclass DeformablePostProcess(PostProcess):\n    \"\"\" This module converts the model's output into the format expected by the coco api\"\"\"\n\n    @torch.no_grad()\n    def forward(self, outputs, target_sizes, results_mask=None):\n        \"\"\" Perform the computation\n        Parameters:\n            outputs: raw outputs of the model\n            target_sizes: tensor of dimension [batch_size x 2] containing the size of each images of the batch\n                          For evaluation, this must be the original image size (before any data augmentation)\n                          For visualization, this should be the image size after data augment, but before padding\n        \"\"\"\n        out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']\n\n        assert len(out_logits) == len(target_sizes)\n        assert target_sizes.shape[1] == 2\n\n        prob = out_logits.sigmoid()\n\n        ###\n        # topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), 100, dim=1)\n        # scores = topk_values\n\n        # topk_boxes = topk_indexes // out_logits.shape[2]\n        # labels = topk_indexes % out_logits.shape[2]\n\n        # boxes = box_ops.box_cxcywh_to_xyxy(out_bbox)\n        # boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1,1,4))\n        ###\n\n        scores, labels = prob.max(-1)\n        # scores, labels = prob[..., 0:1].max(-1)\n        boxes = box_ops.box_cxcywh_to_xyxy(out_bbox)\n\n        # and from relative [0, 1] to absolute [0, height] coordinates\n        img_h, img_w = target_sizes.unbind(1)\n        scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)\n        boxes = boxes * scale_fct[:, None, :]\n\n        results = [\n            {'scores': s, 'scores_no_object': 1 - s, 'labels': l, 'boxes': b}\n            for s, l, b in zip(scores, labels, boxes)]\n\n        if results_mask is not None:\n            for i, mask in enumerate(results_mask):\n                for k, v in results[i].items():\n                    results[i][k] = v[mask]\n\n        return results\n"
  },
  {
    "path": "src/trackformer/models/deformable_transformer.py",
    "content": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseTime. All Rights Reserved.\n# Licensed under the Apache License, Version 2.0 [see LICENSE for details]\n# ------------------------------------------------------------------------\n# Modified from DETR (https://github.com/facebookresearch/detr)\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# ------------------------------------------------------------------------\n\nimport math\n\nimport torch\nfrom torch import nn\nfrom torch.nn.init import constant_, normal_, xavier_uniform_\n\nfrom ..util.misc import inverse_sigmoid\nfrom .ops.modules import MSDeformAttn\nfrom .transformer import _get_clones, _get_activation_fn\n\n\nclass DeformableTransformer(nn.Module):\n    def __init__(self, d_model=256, nhead=8,\n                 num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=1024,\n                 dropout=0.1, activation=\"relu\", return_intermediate_dec=False,\n                 num_feature_levels=4, dec_n_points=4,  enc_n_points=4,\n                 two_stage=False, two_stage_num_proposals=300,\n                 multi_frame_attention_separate_encoder=False):\n        super().__init__()\n\n        self.d_model = d_model\n        self.nhead = nhead\n        self.two_stage = two_stage\n        self.two_stage_num_proposals = two_stage_num_proposals\n        self.num_feature_levels = num_feature_levels\n        self.multi_frame_attention_separate_encoder = multi_frame_attention_separate_encoder\n\n        enc_num_feature_levels = num_feature_levels\n        if multi_frame_attention_separate_encoder:\n            enc_num_feature_levels = enc_num_feature_levels // 2\n        encoder_layer = DeformableTransformerEncoderLayer(d_model, dim_feedforward,\n                                                          dropout, activation,\n                                                          enc_num_feature_levels, nhead, enc_n_points)\n        self.encoder = DeformableTransformerEncoder(encoder_layer, num_encoder_layers)\n\n        decoder_layer = DeformableTransformerDecoderLayer(d_model, dim_feedforward,\n                                                          dropout, activation,\n                                                          num_feature_levels, nhead, dec_n_points)\n        self.decoder = DeformableTransformerDecoder(decoder_layer, num_decoder_layers, return_intermediate_dec)\n\n        self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))\n\n        if two_stage:\n            self.enc_output = nn.Linear(d_model, d_model)\n            self.enc_output_norm = nn.LayerNorm(d_model)\n            self.pos_trans = nn.Linear(d_model * 2, d_model * 2)\n            self.pos_trans_norm = nn.LayerNorm(d_model * 2)\n        else:\n            self.reference_points = nn.Linear(d_model, 2)\n            # self.hs_embed_to_query_embed = nn.Linear(d_model, d_model)\n            # self.hs_embed_to_tgt = nn.Linear(d_model, d_model)\n            # self.track_query_embed = nn.Embedding(1, d_model)\n\n        self._reset_parameters()\n\n    def _reset_parameters(self):\n        for p in self.parameters():\n            if p.dim() > 1:\n                nn.init.xavier_uniform_(p)\n        for m in self.modules():\n            if isinstance(m, MSDeformAttn):\n                m._reset_parameters()\n        if not self.two_stage:\n            xavier_uniform_(self.reference_points.weight.data, gain=1.0)\n            
constant_(self.reference_points.bias.data, 0.)\n        normal_(self.level_embed)\n\n    def get_proposal_pos_embed(self, proposals):\n        num_pos_feats = 128\n        temperature = 10000\n        scale = 2 * math.pi\n\n        dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=proposals.device)\n        dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats)\n        # N, L, 4\n        proposals = proposals.sigmoid() * scale\n        # N, L, 4, 128\n        pos = proposals[:, :, :, None] / dim_t\n        # N, L, 4, 64, 2\n        pos = torch.stack((pos[:, :, :, 0::2].sin(), pos[:, :, :, 1::2].cos()), dim=4).flatten(2)\n        return pos\n\n    def gen_encoder_output_proposals(self, memory, memory_padding_mask, spatial_shapes):\n        N_, S_, C_ = memory.shape\n        base_scale = 4.0\n        proposals = []\n        _cur = 0\n        for lvl, (H_, W_) in enumerate(spatial_shapes):\n            mask_flatten_ = memory_padding_mask[:, _cur:(_cur + H_ * W_)].view(N_, H_, W_, 1)\n            valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1)\n            valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1)\n\n            grid_y, grid_x = torch.meshgrid(torch.linspace(0, H_ - 1, H_, dtype=torch.float32, device=memory.device),\n                                            torch.linspace(0, W_ - 1, W_, dtype=torch.float32, device=memory.device))\n            grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1)\n\n            scale = torch.cat([valid_W.unsqueeze(-1), valid_H.unsqueeze(-1)], 1).view(N_, 1, 1, 2)\n            grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale\n            wh = torch.ones_like(grid) * 0.05 * (2.0 ** lvl)\n            proposal = torch.cat((grid, wh), -1).view(N_, -1, 4)\n            proposals.append(proposal)\n            _cur += (H_ * W_)\n        output_proposals = torch.cat(proposals, 1)\n        output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all(-1, keepdim=True)\n        output_proposals = torch.log(output_proposals / (1 - output_proposals))\n        output_proposals = output_proposals.masked_fill(memory_padding_mask.unsqueeze(-1), float('inf'))\n        output_proposals = output_proposals.masked_fill(~output_proposals_valid, float('inf'))\n\n        output_memory = memory\n        output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float(0))\n        output_memory = output_memory.masked_fill(~output_proposals_valid, float(0))\n        output_memory = self.enc_output_norm(self.enc_output(output_memory))\n        return output_memory, output_proposals\n\n    def get_valid_ratio(self, mask):\n        _, H, W = mask.shape\n        valid_H = torch.sum(~mask[:, :, 0], 1)\n        valid_W = torch.sum(~mask[:, 0, :], 1)\n        valid_ratio_h = valid_H.float() / H\n        valid_ratio_w = valid_W.float() / W\n        valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)\n        return valid_ratio\n\n    def forward(self, srcs, masks, pos_embeds, query_embed=None, targets=None):\n        assert self.two_stage or query_embed is not None\n\n        # prepare input for encoder\n        src_flatten = []\n        mask_flatten = []\n        lvl_pos_embed_flatten = []\n        spatial_shapes = []\n        for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):\n            bs, c, h, w = src.shape\n            spatial_shape = (h, w)\n            spatial_shapes.append(spatial_shape)\n            src = src.flatten(2).transpose(1, 2)\n            
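# the feature map was just flattened from (bs, c, h, w) to (bs, h*w, c); flatten the\n            # padding mask and positional embeddings to match before all levels are concatenated\n            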
mask = mask.flatten(1)\n            pos_embed = pos_embed.flatten(2).transpose(1, 2)\n            lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)\n            # lvl_pos_embed = pos_embed + self.level_embed[lvl % self.num_feature_levels].view(1, 1, -1)\n            lvl_pos_embed_flatten.append(lvl_pos_embed)\n            src_flatten.append(src)\n            mask_flatten.append(mask)\n        src_flatten = torch.cat(src_flatten, 1)\n        mask_flatten = torch.cat(mask_flatten, 1)\n        lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1)\n        spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)\n        valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)\n\n        # encoder\n        if self.multi_frame_attention_separate_encoder:\n            prev_memory = self.encoder(\n                src_flatten[:, :src_flatten.shape[1] // 2],\n                spatial_shapes[:self.num_feature_levels // 2],\n                valid_ratios[:, :self.num_feature_levels // 2],\n                lvl_pos_embed_flatten[:, :src_flatten.shape[1] // 2],\n                mask_flatten[:, :src_flatten.shape[1] // 2])\n            memory = self.encoder(\n                src_flatten[:, src_flatten.shape[1] // 2:],\n                spatial_shapes[self.num_feature_levels // 2:],\n                valid_ratios[:, self.num_feature_levels // 2:],\n                lvl_pos_embed_flatten[:, src_flatten.shape[1] // 2:],\n                mask_flatten[:, src_flatten.shape[1] // 2:])\n            memory = torch.cat([memory, prev_memory], 1)\n        else:\n            memory = self.encoder(src_flatten, spatial_shapes, valid_ratios, lvl_pos_embed_flatten, mask_flatten)\n\n        # prepare input for decoder\n        bs, _, c = memory.shape\n        query_attn_mask = None\n        if self.two_stage:\n            output_memory, output_proposals = self.gen_encoder_output_proposals(memory, mask_flatten, spatial_shapes)\n\n            # hack implementation for two-stage Deformable DETR\n            enc_outputs_class = self.decoder.class_embed[self.decoder.num_layers](output_memory)\n            enc_outputs_coord_unact = self.decoder.bbox_embed[self.decoder.num_layers](output_memory) + output_proposals\n\n            topk = self.two_stage_num_proposals\n            topk_proposals = torch.topk(enc_outputs_class[..., 0], topk, dim=1)[1]\n            topk_coords_unact = torch.gather(enc_outputs_coord_unact, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4))\n            topk_coords_unact = topk_coords_unact.detach()\n            reference_points = topk_coords_unact.sigmoid()\n            init_reference_out = reference_points\n            pos_trans_out = self.pos_trans_norm(self.pos_trans(self.get_proposal_pos_embed(topk_coords_unact)))\n            query_embed, tgt = torch.split(pos_trans_out, c, dim=2)\n        else:\n            query_embed, tgt = torch.split(query_embed, c, dim=1)\n            query_embed = query_embed.unsqueeze(0).expand(bs, -1, -1)\n            tgt = tgt.unsqueeze(0).expand(bs, -1, -1)\n\n            reference_points = self.reference_points(query_embed).sigmoid()\n\n            if targets is not None and 'track_query_hs_embeds' in targets[0]:\n\n                # print([t['track_query_hs_embeds'].shape for t in targets])\n                # prev_hs_embed = torch.nn.utils.rnn.pad_sequence([t['track_query_hs_embeds'] for t in targets], batch_first=True, padding_value=float('nan'))\n                # prev_boxes = 
torch.nn.utils.rnn.pad_sequence([t['track_query_boxes'] for t in targets], batch_first=True, padding_value=float('nan'))\n                # print(prev_hs_embed.shape)\n                # query_mask = torch.isnan(prev_hs_embed)\n                # print(query_mask)\n\n                prev_hs_embed = torch.stack([t['track_query_hs_embeds'] for t in targets])\n                prev_boxes = torch.stack([t['track_query_boxes'] for t in targets])\n\n                prev_query_embed = torch.zeros_like(prev_hs_embed)\n                # prev_query_embed = self.track_query_embed.weight.expand_as(prev_hs_embed)\n                # prev_query_embed = self.hs_embed_to_query_embed(prev_hs_embed)\n                # prev_query_embed = None\n\n                prev_tgt = prev_hs_embed\n                # prev_tgt = self.hs_embed_to_tgt(prev_hs_embed)\n\n                query_embed = torch.cat([prev_query_embed, query_embed], dim=1)\n                tgt = torch.cat([prev_tgt, tgt], dim=1)\n\n                reference_points = torch.cat([prev_boxes[..., :2], reference_points], dim=1)\n\n                # if 'track_queries_placeholder_mask' in targets[0]:\n                #     query_attn_mask = torch.stack([t['track_queries_placeholder_mask'] for t in targets])\n\n            init_reference_out = reference_points\n\n        # decoder\n        # query_embed = None\n        hs, inter_references = self.decoder(\n            tgt, reference_points, memory, spatial_shapes,\n            valid_ratios, query_embed, mask_flatten, query_attn_mask)\n\n        inter_references_out = inter_references\n\n        # offset = 0\n        # memory_slices = []\n        # for src in srcs:\n        #     _, _, height, width = src.shape\n        #     memory_slice = memory[:, offset:offset +h * w].permute(0, 2, 1).view(\n        #         bs, c, height, width)\n        #     memory_slices.append(memory_slice)\n        #     offset += h * w\n\n        # # memory = memory_slices[-1]\n        # print([m.shape for m in memory_slices])\n\n        if self.two_stage:\n            return (hs, memory, init_reference_out, inter_references_out,\n                    enc_outputs_class, enc_outputs_coord_unact)\n        return hs, memory, init_reference_out, inter_references_out, None, None\n\n\nclass DeformableTransformerEncoderLayer(nn.Module):\n    def __init__(self,\n                 d_model=256, d_ffn=1024,\n                 dropout=0.1, activation=\"relu\",\n                 n_levels=4, n_heads=8, n_points=4):\n        super().__init__()\n\n        # self attention\n        self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)\n        self.dropout1 = nn.Dropout(dropout)\n        self.norm1 = nn.LayerNorm(d_model)\n\n        # ffn\n        self.linear1 = nn.Linear(d_model, d_ffn)\n        self.activation = _get_activation_fn(activation)\n        self.dropout2 = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(d_ffn, d_model)\n        self.dropout3 = nn.Dropout(dropout)\n        self.norm2 = nn.LayerNorm(d_model)\n\n    @staticmethod\n    def with_pos_embed(tensor, pos):\n        return tensor if pos is None else tensor + pos\n\n    def forward_ffn(self, src):\n        src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))\n        src = src + self.dropout3(src2)\n        src = self.norm2(src)\n        return src\n\n    def forward(self, src, pos, reference_points, spatial_shapes, padding_mask=None):\n        # self attention\n        src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, 
spatial_shapes, padding_mask)\n        src = src + self.dropout1(src2)\n        src = self.norm1(src)\n\n        # ffn\n        src = self.forward_ffn(src)\n\n        return src\n\n\nclass DeformableTransformerEncoder(nn.Module):\n    def __init__(self, encoder_layer, num_layers):\n        super().__init__()\n        self.layers = _get_clones(encoder_layer, num_layers)\n        self.num_layers = num_layers\n\n    @staticmethod\n    def get_reference_points(spatial_shapes, valid_ratios, device):\n        reference_points_list = []\n        for lvl, (H_, W_) in enumerate(spatial_shapes):\n\n            ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),\n                                          torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device))\n            ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)\n            ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)\n            ref = torch.stack((ref_x, ref_y), -1)\n            reference_points_list.append(ref)\n        reference_points = torch.cat(reference_points_list, 1)\n        reference_points = reference_points[:, :, None] * valid_ratios[:, None]\n        return reference_points\n\n    def forward(self, src, spatial_shapes, valid_ratios, pos=None, padding_mask=None):\n        output = src\n        reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device)\n        for _, layer in enumerate(self.layers):\n            output = layer(output, pos, reference_points, spatial_shapes, padding_mask)\n\n        return output\n\n\nclass DeformableTransformerDecoderLayer(nn.Module):\n    def __init__(self, d_model=256, d_ffn=1024,\n                 dropout=0.1, activation=\"relu\",\n                 n_levels=4, n_heads=8, n_points=4):\n        super().__init__()\n\n        # cross attention\n        self.cross_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)\n        self.dropout1 = nn.Dropout(dropout)\n        self.norm1 = nn.LayerNorm(d_model)\n\n        # self attention\n        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)\n        self.dropout2 = nn.Dropout(dropout)\n        self.norm2 = nn.LayerNorm(d_model)\n\n        # ffn\n        self.linear1 = nn.Linear(d_model, d_ffn)\n        self.activation = _get_activation_fn(activation)\n        self.dropout3 = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(d_ffn, d_model)\n        self.dropout4 = nn.Dropout(dropout)\n        self.norm3 = nn.LayerNorm(d_model)\n\n    @staticmethod\n    def with_pos_embed(tensor, pos):\n        return tensor if pos is None else tensor + pos\n\n    def forward_ffn(self, tgt):\n        tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt))))\n        tgt = tgt + self.dropout4(tgt2)\n        tgt = self.norm3(tgt)\n        return tgt\n\n    def forward(self, tgt, query_pos, reference_points, src, src_spatial_shapes, src_padding_mask=None, query_attn_mask=None):\n        # self attention\n        q = k = self.with_pos_embed(tgt, query_pos)\n\n        tgt2 = self.self_attn(q.transpose(0, 1), k.transpose(0, 1), tgt.transpose(0, 1), key_padding_mask=query_attn_mask)[0].transpose(0, 1)\n\n        tgt = tgt + self.dropout2(tgt2)\n        tgt = self.norm2(tgt)\n\n        # cross attention\n        tgt2 = self.cross_attn(self.with_pos_embed(tgt, query_pos),\n                               reference_points,\n                               src, src_spatial_shapes, 
src_padding_mask, query_attn_mask)\n        tgt = tgt + self.dropout1(tgt2)\n        tgt = self.norm1(tgt)\n\n        # ffn\n        tgt = self.forward_ffn(tgt)\n\n        return tgt\n\n\nclass DeformableTransformerDecoder(nn.Module):\n    def __init__(self, decoder_layer, num_layers, return_intermediate=False):\n        super().__init__()\n        self.layers = _get_clones(decoder_layer, num_layers)\n        self.num_layers = num_layers\n        self.return_intermediate = return_intermediate\n        # hack implementation for iterative bounding box refinement and two-stage Deformable DETR\n        self.bbox_embed = None\n        self.class_embed = None\n\n    def forward(self, tgt, reference_points, src, src_spatial_shapes, src_valid_ratios,\n                query_pos=None, src_padding_mask=None, query_attn_mask=None):\n        output = tgt\n\n        intermediate = []\n        intermediate_reference_points = []\n        for lid, layer in enumerate(self.layers):\n            if reference_points.shape[-1] == 4:\n                reference_points_input = reference_points[:, :, None] \\\n                                         * torch.cat([src_valid_ratios, src_valid_ratios], -1)[:, None]\n            else:\n                assert reference_points.shape[-1] == 2\n                reference_points_input = reference_points[:, :, None] * src_valid_ratios[:, None]\n            output = layer(output, query_pos, reference_points_input, src, src_spatial_shapes, src_padding_mask, query_attn_mask)\n\n            # hack implementation for iterative bounding box refinement\n            if self.bbox_embed is not None:\n                tmp = self.bbox_embed[lid](output)\n                if reference_points.shape[-1] == 4:\n                    new_reference_points = tmp + inverse_sigmoid(reference_points)\n                    new_reference_points = new_reference_points.sigmoid()\n                else:\n                    assert reference_points.shape[-1] == 2\n                    new_reference_points = tmp\n                    new_reference_points[..., :2] = tmp[..., :2] + inverse_sigmoid(reference_points)\n                    new_reference_points = new_reference_points.sigmoid()\n                reference_points = new_reference_points.detach()\n\n            if self.return_intermediate:\n                intermediate.append(output)\n                intermediate_reference_points.append(reference_points)\n\n        if self.return_intermediate:\n            return torch.stack(intermediate), torch.stack(intermediate_reference_points)\n\n        return output, reference_points\n\n\ndef build_deforamble_transformer(args):\n\n    num_feature_levels = args.num_feature_levels\n    if args.multi_frame_attention:\n        num_feature_levels *= 2\n\n    return DeformableTransformer(\n        d_model=args.hidden_dim,\n        nhead=args.nheads,\n        num_encoder_layers=args.enc_layers,\n        num_decoder_layers=args.dec_layers,\n        dim_feedforward=args.dim_feedforward,\n        dropout=args.dropout,\n        activation=\"relu\",\n        return_intermediate_dec=True,\n        num_feature_levels=num_feature_levels,\n        dec_n_points=args.dec_n_points,\n        enc_n_points=args.enc_n_points,\n        two_stage=args.two_stage,\n        two_stage_num_proposals=args.num_queries,\n        multi_frame_attention_separate_encoder=args.multi_frame_attention and args.multi_frame_attention_separate_encoder)\n"
  },
  {
    "path": "src/trackformer/models/detr.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nDETR model and criterion classes.\n\"\"\"\nimport copy\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom ..util import box_ops\nfrom ..util.misc import (NestedTensor, accuracy, dice_loss, get_world_size,\n                         interpolate, is_dist_avail_and_initialized,\n                         nested_tensor_from_tensor_list, sigmoid_focal_loss)\n\n\nclass DETR(nn.Module):\n    \"\"\" This is the DETR module that performs object detection. \"\"\"\n\n    def __init__(self, backbone, transformer, num_classes, num_queries,\n                 aux_loss=False, overflow_boxes=False):\n        \"\"\" Initializes the model.\n        Parameters:\n            backbone: torch module of the backbone to be used. See backbone.py\n            transformer: torch module of the transformer architecture. See transformer.py\n            num_classes: number of object classes\n            num_queries: number of object queries, ie detection slot. This is the maximal\n                         number of objects DETR can detect in a single image. For COCO, we\n                         recommend 100 queries.\n            aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.\n        \"\"\"\n        super().__init__()\n\n        self.num_queries = num_queries\n        self.transformer = transformer\n        self.overflow_boxes = overflow_boxes\n        self.class_embed = nn.Linear(self.hidden_dim, num_classes + 1)\n        self.bbox_embed = MLP(self.hidden_dim, self.hidden_dim, 4, 3)\n        self.query_embed = nn.Embedding(num_queries, self.hidden_dim)\n\n        # match interface with deformable DETR\n        self.input_proj = nn.Conv2d(backbone.num_channels[-1], self.hidden_dim, kernel_size=1)\n        # self.input_proj = nn.ModuleList([\n        #     nn.Sequential(\n        #         nn.Conv2d(backbone.num_channels[-1], self.hidden_dim, kernel_size=1)\n        #     )])\n\n        self.backbone = backbone\n        self.aux_loss = aux_loss\n\n    @property\n    def hidden_dim(self):\n        \"\"\" Returns the hidden feature dimension size. \"\"\"\n        return self.transformer.d_model\n\n    @property\n    def fpn_channels(self):\n        \"\"\" Returns FPN channels. \"\"\"\n        return self.backbone.num_channels[:3][::-1]\n        # return [1024, 512, 256]\n\n    def forward(self, samples: NestedTensor, targets: list = None):\n        \"\"\" The forward expects a NestedTensor, which consists of:\n               - samples.tensor: batched images, of shape [batch_size x 3 x H x W]\n               - samples.mask: a binary mask of shape [batch_size x H x W],\n                               containing 1 on padded pixels\n\n            It returns a dict with the following elements:\n               - \"pred_logits\": the classification logits (including no-object) for all queries.\n                                Shape= [batch_size x num_queries x (num_classes + 1)]\n               - \"pred_boxes\": The normalized boxes coordinates for all queries, represented as\n                               (center_x, center_y, height, width). These values are normalized\n                               in [0, 1], relative to the size of each individual image\n                               (disregarding possible padding). 
See PostProcess for information\n                               on how to retrieve the unnormalized bounding box.\n               - \"aux_outputs\": Optional, only returned when auxilary losses are activated. It\n                                is a list of dictionnaries containing the two above keys for\n                                each decoder layer.\n        \"\"\"\n        if not isinstance(samples, NestedTensor):\n            samples = nested_tensor_from_tensor_list(samples)\n        features, pos = self.backbone(samples)\n\n        src, mask = features[-1].decompose()\n        # src = self.input_proj[-1](src)\n        src = self.input_proj(src)\n        pos = pos[-1]\n\n        batch_size, _, _, _ = src.shape\n\n        query_embed = self.query_embed.weight\n        query_embed = query_embed.unsqueeze(1).repeat(1, batch_size, 1)\n        tgt = None\n        if targets is not None and 'track_query_hs_embeds' in targets[0]:\n            # [BATCH_SIZE, NUM_PROBS, 4]\n            track_query_hs_embeds = torch.stack([t['track_query_hs_embeds'] for t in targets])\n\n            num_track_queries = track_query_hs_embeds.shape[1]\n\n            track_query_embed = torch.zeros(\n                num_track_queries,\n                batch_size,\n                self.hidden_dim).to(query_embed.device)\n            query_embed = torch.cat([\n                track_query_embed,\n                query_embed], dim=0)\n\n            tgt = torch.zeros_like(query_embed)\n            tgt[:num_track_queries] = track_query_hs_embeds.transpose(0, 1)\n\n            for i, target in enumerate(targets):\n                target['track_query_hs_embeds'] = tgt[:, i]\n\n        assert mask is not None\n        hs, hs_without_norm, memory = self.transformer(\n            src, mask, query_embed, pos, tgt)\n\n        outputs_class = self.class_embed(hs)\n        outputs_coord = self.bbox_embed(hs).sigmoid()\n        out = {'pred_logits': outputs_class[-1],\n               'pred_boxes': outputs_coord[-1],\n               'hs_embed': hs_without_norm[-1]}\n\n        if self.aux_loss:\n            out['aux_outputs'] = self._set_aux_loss(\n                outputs_class, outputs_coord)\n\n        return out, targets, features, memory, hs\n\n    @torch.jit.unused\n    def _set_aux_loss(self, outputs_class, outputs_coord):\n        # this is a workaround to make torchscript happy, as torchscript\n        # doesn't support dictionary with non-homogeneous values, such\n        # as a dict having both a Tensor and a list.\n        return [{'pred_logits': a, 'pred_boxes': b}\n                for a, b in zip(outputs_class[:-1], outputs_coord[:-1])]\n\n\nclass SetCriterion(nn.Module):\n    \"\"\" This class computes the loss for DETR.\n    The process happens in two steps:\n        1) we compute hungarian assignment between ground truth boxes and the outputs of the model\n        2) we supervise each pair of matched ground-truth / prediction (supervise class and box)\n    \"\"\"\n    def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses,\n                 focal_loss, focal_alpha, focal_gamma, tracking, track_query_false_positive_eos_weight):\n        \"\"\" Create the criterion.\n        Parameters:\n            num_classes: number of object categories, omitting the special no-object category\n            matcher: module able to compute a matching between targets and proposals\n            weight_dict: dict containing as key the names of the losses and as values their\n                         relative 
weight.\n            eos_coef: relative classification weight applied to the no-object category\n            losses: list of all the losses to be applied. See get_loss for list of\n                    available losses.\n        \"\"\"\n        super().__init__()\n        self.num_classes = num_classes\n        self.matcher = matcher\n        self.weight_dict = weight_dict\n        self.eos_coef = eos_coef\n        self.losses = losses\n        empty_weight = torch.ones(self.num_classes + 1)\n        empty_weight[-1] = self.eos_coef\n        self.register_buffer('empty_weight', empty_weight)\n        self.focal_loss = focal_loss\n        self.focal_alpha = focal_alpha\n        self.focal_gamma = focal_gamma\n        self.tracking = tracking\n        self.track_query_false_positive_eos_weight = track_query_false_positive_eos_weight\n\n    def loss_labels(self, outputs, targets, indices, _, log=True):\n        \"\"\"Classification loss (NLL)\n        targets dicts must contain the key \"labels\" containing a tensor of dim [nb_target_boxes]\n        \"\"\"\n        assert 'pred_logits' in outputs\n        src_logits = outputs['pred_logits']\n\n        idx = self._get_src_permutation_idx(indices)\n        target_classes_o = torch.cat([t[\"labels\"][J] for t, (_, J) in zip(targets, indices)])\n        target_classes = torch.full(src_logits.shape[:2], self.num_classes,\n                                    dtype=torch.int64, device=src_logits.device)\n        target_classes[idx] = target_classes_o\n\n        loss_ce = F.cross_entropy(src_logits.transpose(1, 2),\n                                  target_classes,\n                                  weight=self.empty_weight,\n                                  reduction='none')\n\n        if self.tracking and self.track_query_false_positive_eos_weight:\n            for i, target in enumerate(targets):\n                if 'track_query_boxes' in target:\n                    # remove no-object weighting for false track_queries\n                    loss_ce[i, target['track_queries_fal_pos_mask']] *= 1 / self.eos_coef\n                    # assign false track_queries to some object class for the final weighting\n                    target_classes = target_classes.clone()\n                    target_classes[i, target['track_queries_fal_pos_mask']] = 0\n\n        # weight = None\n        # if self.tracking:\n        #     weight = torch.stack([~t['track_queries_placeholder_mask'] for t in targets]).float()\n        #     loss_ce *= weight\n\n        loss_ce = loss_ce.sum() / self.empty_weight[target_classes].sum()\n\n        losses = {'loss_ce': loss_ce}\n\n        if log:\n            # TODO this should probably be a separate loss, not hacked in this one here\n            losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]\n        return losses\n\n    def loss_labels_focal(self, outputs, targets, indices, num_boxes, log=True):\n        \"\"\"Classification loss (NLL)\n        targets dicts must contain the key \"labels\" containing a tensor of dim [nb_target_boxes]\n        \"\"\"\n        assert 'pred_logits' in outputs\n        src_logits = outputs['pred_logits']\n\n        idx = self._get_src_permutation_idx(indices)\n        target_classes_o = torch.cat([t[\"labels\"][J] for t, (_, J) in zip(targets, indices)])\n        target_classes = torch.full(src_logits.shape[:2], self.num_classes,\n                                    dtype=torch.int64, device=src_logits.device)\n        target_classes[idx] = target_classes_o\n\n        
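# one-hot encode the targets with an extra slot for the no-object class, which is\n        # dropped again below before the sigmoid focal loss is computed\n        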
target_classes_onehot = torch.zeros([src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1],\n                                            dtype=src_logits.dtype, layout=src_logits.layout, device=src_logits.device)\n        target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1)\n\n        target_classes_onehot = target_classes_onehot[:,:,:-1]\n\n        # query_mask = None\n        # if self.tracking:\n        #     query_mask = torch.stack([~t['track_queries_placeholder_mask'] for t in targets])[..., None]\n        #     query_mask = query_mask.repeat(1, 1, self.num_classes)\n\n        loss_ce = sigmoid_focal_loss(\n            src_logits, target_classes_onehot, num_boxes,\n            alpha=self.focal_alpha, gamma=self.focal_gamma)\n            # , query_mask=query_mask)\n\n        # if self.tracking:\n        #     mean_num_queries = torch.tensor([len(m.nonzero()) for m in query_mask]).float().mean()\n        #     loss_ce *= mean_num_queries\n        # else:\n        #     loss_ce *= src_logits.shape[1]\n        loss_ce *= src_logits.shape[1]\n        losses = {'loss_ce': loss_ce}\n\n        if log:\n            # TODO this should probably be a separate loss, not hacked in this one here\n            losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]\n\n        # compute seperate track and object query losses\n        # loss_ce = sigmoid_focal_loss(\n        #     src_logits, target_classes_onehot, num_boxes,\n        #     alpha=self.focal_alpha, gamma=self.focal_gamma, query_mask=query_mask, reduction=False)\n        # loss_ce *= src_logits.shape[1]\n\n        # track_query_target_masks = []\n        # for t, ind in zip(targets, indices):\n        #     track_query_target_mask = torch.zeros_like(ind[1]).bool()\n\n        #     for i in t['track_query_match_ids']:\n        #         track_query_target_mask[ind[1].eq(i).nonzero()[0]] = True\n\n        #     track_query_target_masks.append(track_query_target_mask)\n        # track_query_target_masks = torch.cat(track_query_target_masks)\n\n        # losses['loss_ce_track_queries'] = loss_ce[idx][track_query_target_masks].mean(1).sum() / num_boxes\n        # losses['loss_ce_object_queries'] = loss_ce[idx][~track_query_target_masks].mean(1).sum() / num_boxes\n\n        return losses\n\n    @torch.no_grad()\n    def loss_cardinality(self, outputs, targets, indices, num_boxes):\n        \"\"\" Compute the cardinality error, ie the absolute error in the number of\n            predicted non-empty boxes. This is not really a loss, it is intended\n            for logging purposes only. It doesn't propagate gradients\n        \"\"\"\n        pred_logits = outputs['pred_logits']\n        device = pred_logits.device\n        tgt_lengths = torch.as_tensor([len(v[\"labels\"]) for v in targets], device=device)\n        # Count the number of predictions that are NOT \"no-object\" (which is the last class)\n        card_pred = (pred_logits.argmax(-1) != pred_logits.shape[-1] - 1).sum(1)\n        card_err = F.l1_loss(card_pred.float(), tgt_lengths.float())\n        losses = {'cardinality_error': card_err}\n        return losses\n\n    def loss_boxes(self, outputs, targets, indices, num_boxes):\n        \"\"\"Compute the losses related to the bounding boxes, the L1 regression loss\n           and the GIoU loss targets dicts must contain the key \"boxes\" containing\n           a tensor of dim [nb_target_boxes, 4]. 
The target boxes are expected in\n           format (center_x, center_y, h, w), normalized by the image size.\n        \"\"\"\n        assert 'pred_boxes' in outputs\n        idx = self._get_src_permutation_idx(indices)\n        src_boxes = outputs['pred_boxes'][idx]\n        target_boxes = torch.cat([t['boxes'][i] for t, (_, i) in zip(targets, indices)], dim=0)\n\n        loss_bbox = F.l1_loss(src_boxes, target_boxes, reduction='none')\n\n        losses = {}\n        losses['loss_bbox'] = loss_bbox.sum() / num_boxes\n\n        loss_giou = 1 - torch.diag(box_ops.generalized_box_iou(\n            box_ops.box_cxcywh_to_xyxy(src_boxes),\n            box_ops.box_cxcywh_to_xyxy(target_boxes)))\n        losses['loss_giou'] = loss_giou.sum() / num_boxes\n\n        # compute seperate track and object query losses\n        # track_query_target_masks = []\n        # for t, ind in zip(targets, indices):\n        #     track_query_target_mask = torch.zeros_like(ind[1]).bool()\n\n        #     for i in t['track_query_match_ids']:\n        #         track_query_target_mask[ind[1].eq(i).nonzero()[0]] = True\n\n        #     track_query_target_masks.append(track_query_target_mask)\n        # track_query_target_masks = torch.cat(track_query_target_masks)\n\n        # losses['loss_bbox_track_queries'] = loss_bbox[track_query_target_masks].sum() / num_boxes\n        # losses['loss_bbox_object_queries'] = loss_bbox[~track_query_target_masks].sum() / num_boxes\n\n        # losses['loss_giou_track_queries'] = loss_giou[track_query_target_masks].sum() / num_boxes\n        # losses['loss_giou_object_queries'] = loss_giou[~track_query_target_masks].sum() / num_boxes\n\n        return losses\n\n    def loss_masks(self, outputs, targets, indices, num_boxes):\n        \"\"\"Compute the losses related to the masks: the focal loss and the dice loss.\n           targets dicts must contain the key \"masks\" containing a tensor of\n           dim [nb_target_boxes, h, w]\n        \"\"\"\n        assert \"pred_masks\" in outputs\n\n        src_idx = self._get_src_permutation_idx(indices)\n        tgt_idx = self._get_tgt_permutation_idx(indices)\n\n        src_masks = outputs[\"pred_masks\"]\n\n        # TODO use valid to mask invalid areas due to padding in loss\n        target_masks, _ = nested_tensor_from_tensor_list([t[\"masks\"] for t in targets]).decompose()\n        target_masks = target_masks.to(src_masks)\n\n        src_masks = src_masks[src_idx]\n        # upsample predictions to the target size\n        src_masks = interpolate(src_masks[:, None], size=target_masks.shape[-2:],\n                                mode=\"bilinear\", align_corners=False)\n        src_masks = src_masks[:, 0].flatten(1)\n\n        target_masks = target_masks[tgt_idx].flatten(1)\n\n        losses = {\n            \"loss_mask\": sigmoid_focal_loss(src_masks, target_masks, num_boxes),\n            \"loss_dice\": dice_loss(src_masks, target_masks, num_boxes),\n        }\n        return losses\n\n    def _get_src_permutation_idx(self, indices):\n        # permute predictions following indices\n        batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])\n        src_idx = torch.cat([src for (src, _) in indices])\n        return batch_idx, src_idx\n\n    def _get_tgt_permutation_idx(self, indices):\n        # permute targets following indices\n        batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])\n        tgt_idx = torch.cat([tgt for (_, tgt) in indices])\n        return 
batch_idx, tgt_idx\n\n    def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs):\n        loss_map = {\n            'labels': self.loss_labels_focal if self.focal_loss else self.loss_labels,\n            'cardinality': self.loss_cardinality,\n            'boxes': self.loss_boxes,\n            'masks': self.loss_masks,\n        }\n        assert loss in loss_map, f'do you really want to compute {loss} loss?'\n        return loss_map[loss](outputs, targets, indices, num_boxes, **kwargs)\n\n    def forward(self, outputs, targets):\n        \"\"\" This performs the loss computation.\n        Parameters:\n             outputs: dict of tensors, see the output specification of the model for the format\n             targets: list of dicts, such that len(targets) == batch_size.\n                      The expected keys in each dict depends on the losses applied,\n                      see each loss' doc\n        \"\"\"\n        outputs_without_aux = {k: v for k, v in outputs.items() if k != 'aux_outputs'}\n\n        # Retrieve the matching between the outputs of the last layer and the targets\n        indices = self.matcher(outputs_without_aux, targets)\n\n        # Compute the average number of target boxes accross all nodes, for normalization purposes\n        num_boxes = sum(len(t[\"labels\"]) for t in targets)\n        num_boxes = torch.as_tensor(\n            [num_boxes], dtype=torch.float, device=next(iter(outputs.values())).device)\n        if is_dist_avail_and_initialized():\n            torch.distributed.all_reduce(num_boxes)\n        num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item()\n\n        # Compute all the requested losses\n        losses = {}\n        for loss in self.losses:\n            losses.update(self.get_loss(loss, outputs, targets, indices, num_boxes))\n\n        # In case of auxiliary losses, we repeat this process with the\n        # output of each intermediate layer.\n        if 'aux_outputs' in outputs:\n            for i, aux_outputs in enumerate(outputs['aux_outputs']):\n                indices = self.matcher(aux_outputs, targets)\n                for loss in self.losses:\n                    if loss == 'masks':\n                        # Intermediate masks losses are too costly to compute, we ignore them.\n                        continue\n                    kwargs = {}\n                    if loss == 'labels':\n                        # Logging is enabled only for the last layer\n                        kwargs = {'log': False}\n                    l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_boxes, **kwargs)\n                    l_dict = {k + f'_{i}': v for k, v in l_dict.items()}\n                    losses.update(l_dict)\n\n        if 'enc_outputs' in outputs:\n            enc_outputs = outputs['enc_outputs']\n            bin_targets = copy.deepcopy(targets)\n            for bt in bin_targets:\n                bt['labels'] = torch.zeros_like(bt['labels'])\n            indices = self.matcher(enc_outputs, bin_targets)\n            for loss in self.losses:\n                if loss == 'masks':\n                    # Intermediate masks losses are too costly to compute, we ignore them.\n                    continue\n                kwargs = {}\n                if loss == 'labels':\n                    # Logging is enabled only for the last layer\n                    kwargs['log'] = False\n                l_dict = self.get_loss(loss, enc_outputs, bin_targets, indices, num_boxes, **kwargs)\n                l_dict = {k 
+ f'_enc': v for k, v in l_dict.items()}\n                losses.update(l_dict)\n\n        return losses\n\n\nclass PostProcess(nn.Module):\n    \"\"\" This module converts the model's output into the format expected by the coco api\"\"\"\n\n    def process_boxes(self, boxes, target_sizes):\n        # convert to [x0, y0, x1, y1] format\n        boxes = box_ops.box_cxcywh_to_xyxy(boxes)\n        # and from relative [0, 1] to absolute [0, height] coordinates\n        img_h, img_w = target_sizes.unbind(1)\n        scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)\n        boxes = boxes * scale_fct[:, None, :]\n\n        return boxes\n\n    @torch.no_grad()\n    def forward(self, outputs, target_sizes, results_mask=None):\n        \"\"\" Perform the computation\n        Parameters:\n            outputs: raw outputs of the model\n            target_sizes: tensor of dimension [batch_size x 2] containing the size of\n                          each images of the batch For evaluation, this must be the\n                          original image size (before any data augmentation) For\n                          visualization, this should be the image size after data\n                          augment, but before padding\n        \"\"\"\n        out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']\n\n        assert len(out_logits) == len(target_sizes)\n        assert target_sizes.shape[1] == 2\n\n        prob = F.softmax(out_logits, -1)\n        scores, labels = prob[..., :-1].max(-1)\n\n        boxes = self.process_boxes(out_bbox, target_sizes)\n\n\n        results = [\n            {'scores': s, 'labels': l, 'boxes': b, 'scores_no_object': s_n_o}\n            for s, l, b, s_n_o in zip(scores, labels, boxes, prob[..., -1])]\n\n        if results_mask is not None:\n            for i, mask in enumerate(results_mask):\n                for k, v in results[i].items():\n                    results[i][k] = v[mask]\n\n        return results\n\n\nclass MLP(nn.Module):\n    \"\"\" Very simple multi-layer perceptron (also called FFN)\"\"\"\n\n    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):\n        super().__init__()\n        self.num_layers = num_layers\n        h = [hidden_dim] * (num_layers - 1)\n        self.layers = nn.ModuleList(\n            nn.Linear(n, k)\n            for n, k in zip([input_dim] + h, h + [output_dim]))\n\n    def forward(self, x):\n        for i, layer in enumerate(self.layers):\n            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)\n        return x\n"
  },
  {
    "path": "src/trackformer/models/detr_segmentation.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nThis file provides the definition of the convolutional heads used\nto predict masks, as well as the losses.\n\"\"\"\nimport io\nfrom collections import defaultdict\nfrom typing import List, Optional\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom PIL import Image\nfrom torch import Tensor\n\nfrom ..util import box_ops\nfrom ..util.misc import NestedTensor, interpolate\n\ntry:\n    from panopticapi.utils import id2rgb, rgb2id\nexcept ImportError:\n    pass\n\nfrom .deformable_detr import DeformableDETR\nfrom .detr import DETR\nfrom .detr_tracking import DETRTrackingBase\n\n\nclass DETRSegmBase(nn.Module):\n    def __init__(self, freeze_detr=False):\n        if freeze_detr:\n            for param in self.parameters():\n                param.requires_grad_(False)\n\n        nheads = self.transformer.nhead\n        self.bbox_attention = MHAttentionMap(self.hidden_dim, self.hidden_dim, nheads, dropout=0.0)\n\n        self.mask_head = MaskHeadSmallConv(\n            self.hidden_dim + nheads, self.fpn_channels, self.hidden_dim)\n\n    def forward(self, samples: NestedTensor, targets: list = None):\n        out, targets, features, memory, hs = super().forward(samples, targets)\n\n        if isinstance(memory, list):\n            src, mask = features[-2].decompose()\n            batch_size = src.shape[0]\n\n            src = self.input_proj[-3](src)\n            mask = F.interpolate(mask[None].float(), size=src.shape[-2:]).to(torch.bool)[0]\n\n            # fpns = [memory[2], memory[1], memory[0]]\n            fpns = [features[-2].tensors, features[-3].tensors, features[-4].tensors]\n            memory = memory[-3]\n        else:\n            src, mask = features[-1].decompose()\n            batch_size = src.shape[0]\n\n            src = self.input_proj(src)\n\n            fpns = [features[2].tensors, features[1].tensors, features[0].tensors]\n\n        # FIXME h_boxes takes the last one computed, keep this in mind\n        bbox_mask = self.bbox_attention(hs[-1], memory, mask=mask)\n\n        seg_masks = self.mask_head(src, bbox_mask, fpns)\n        outputs_seg_masks = seg_masks.view(\n            batch_size, hs.shape[2], seg_masks.shape[-2], seg_masks.shape[-1])\n\n        out[\"pred_masks\"] = outputs_seg_masks\n\n        return out, targets, features, memory, hs\n\n\n# TODO: with meta classes\nclass DETRSegm(DETRSegmBase, DETR):\n    def __init__(self, mask_kwargs, detr_kwargs):\n        DETR.__init__(self, **detr_kwargs)\n        DETRSegmBase.__init__(self, **mask_kwargs)\n\n\nclass DeformableDETRSegm(DETRSegmBase, DeformableDETR):\n    def __init__(self, mask_kwargs, detr_kwargs):\n        DeformableDETR.__init__(self, **detr_kwargs)\n        DETRSegmBase.__init__(self, **mask_kwargs)\n\n\nclass DETRSegmTracking(DETRSegmBase, DETRTrackingBase, DETR):\n    def __init__(self, mask_kwargs, tracking_kwargs, detr_kwargs):\n        DETR.__init__(self, **detr_kwargs)\n        DETRTrackingBase.__init__(self, **tracking_kwargs)\n        DETRSegmBase.__init__(self, **mask_kwargs)\n\n\nclass DeformableDETRSegmTracking(DETRSegmBase, DETRTrackingBase, DeformableDETR):\n    def __init__(self, mask_kwargs, tracking_kwargs, detr_kwargs):\n        DeformableDETR.__init__(self, **detr_kwargs)\n        DETRTrackingBase.__init__(self, **tracking_kwargs)\n        DETRSegmBase.__init__(self, **mask_kwargs)\n\n\ndef _expand(tensor, length: int):\n    return tensor.unsqueeze(1).repeat(1, 
int(length), 1, 1, 1).flatten(0, 1)\n\n\nclass MaskHeadSmallConv(nn.Module):\n    \"\"\"\n    Simple convolutional head, using group norm.\n    Upsampling is done using a FPN approach\n    \"\"\"\n\n    def __init__(self, dim, fpn_dims, context_dim):\n        super().__init__()\n        inter_dims = [\n            dim,\n            context_dim // 2,\n            context_dim // 4,\n            context_dim // 8,\n            context_dim // 16,\n            context_dim // 64]\n        self.lay1 = torch.nn.Conv2d(dim, dim, 3, padding=1)\n        self.gn1 = torch.nn.GroupNorm(8, dim)\n        self.lay2 = torch.nn.Conv2d(dim, inter_dims[1], 3, padding=1)\n        self.gn2 = torch.nn.GroupNorm(8, inter_dims[1])\n        self.lay3 = torch.nn.Conv2d(inter_dims[1], inter_dims[2], 3, padding=1)\n        self.gn3 = torch.nn.GroupNorm(8, inter_dims[2])\n        self.lay4 = torch.nn.Conv2d(inter_dims[2], inter_dims[3], 3, padding=1)\n        self.gn4 = torch.nn.GroupNorm(8, inter_dims[3])\n        self.lay5 = torch.nn.Conv2d(inter_dims[3], inter_dims[4], 3, padding=1)\n        self.gn5 = torch.nn.GroupNorm(8, inter_dims[4])\n        self.out_lay = torch.nn.Conv2d(inter_dims[4], 1, 3, padding=1)\n\n        self.dim = dim\n\n        self.adapter1 = torch.nn.Conv2d(fpn_dims[0], inter_dims[1], 1)\n        self.adapter2 = torch.nn.Conv2d(fpn_dims[1], inter_dims[2], 1)\n        self.adapter3 = torch.nn.Conv2d(fpn_dims[2], inter_dims[3], 1)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_uniform_(m.weight, a=1)\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, x: Tensor, bbox_mask: Tensor, fpns: List[Tensor]):\n        x = torch.cat([_expand(x, bbox_mask.shape[1]), bbox_mask.flatten(0, 1)], 1)\n\n        x = self.lay1(x)\n        x = self.gn1(x)\n        x = F.relu(x)\n        x = self.lay2(x)\n        x = self.gn2(x)\n        x = F.relu(x)\n\n        cur_fpn = self.adapter1(fpns[0])\n        if cur_fpn.size(0) != x.size(0):\n            cur_fpn = _expand(cur_fpn, x.size(0) // cur_fpn.size(0))\n        x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode=\"nearest\")\n        x = self.lay3(x)\n        x = self.gn3(x)\n        x = F.relu(x)\n\n        cur_fpn = self.adapter2(fpns[1])\n        if cur_fpn.size(0) != x.size(0):\n            cur_fpn = _expand(cur_fpn, x.size(0) // cur_fpn.size(0))\n        x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode=\"nearest\")\n        x = self.lay4(x)\n        x = self.gn4(x)\n        x = F.relu(x)\n\n        cur_fpn = self.adapter3(fpns[2])\n        if cur_fpn.size(0) != x.size(0):\n            cur_fpn = _expand(cur_fpn, x.size(0) // cur_fpn.size(0))\n        x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode=\"nearest\")\n        x = self.lay5(x)\n        x = self.gn5(x)\n        x = F.relu(x)\n\n        x = self.out_lay(x)\n        return x\n\n\nclass MHAttentionMap(nn.Module):\n    \"\"\"This is a 2D attention module, which only returns\n       the attention softmax (no multiplication by value)\"\"\"\n\n    def __init__(self, query_dim, hidden_dim, num_heads, dropout=0.0, bias=True):\n        super().__init__()\n        self.num_heads = num_heads\n        self.hidden_dim = hidden_dim\n        self.dropout = nn.Dropout(dropout)\n\n        self.q_linear = nn.Linear(query_dim, hidden_dim, bias=bias)\n        self.k_linear = nn.Linear(query_dim, hidden_dim, bias=bias)\n\n        nn.init.zeros_(self.k_linear.bias)\n        nn.init.zeros_(self.q_linear.bias)\n     
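   # Xavier-uniform init for the query and key projection weights\n     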
   nn.init.xavier_uniform_(self.k_linear.weight)\n        nn.init.xavier_uniform_(self.q_linear.weight)\n        self.normalize_fact = float(hidden_dim / self.num_heads) ** -0.5\n\n    def forward(self, q, k, mask: Optional[Tensor] = None):\n        q = self.q_linear(q)\n        k = F.conv2d(k, self.k_linear.weight.unsqueeze(-1).unsqueeze(-1), self.k_linear.bias)\n        qh = q.view(q.shape[0], q.shape[1], self.num_heads, self.hidden_dim // self.num_heads)\n        kh = k.view(\n            k.shape[0],\n            self.num_heads,\n            self.hidden_dim // self.num_heads,\n            k.shape[-2],\n            k.shape[-1])\n        weights = torch.einsum(\"bqnc,bnchw->bqnhw\", qh * self.normalize_fact, kh)\n\n        if mask is not None:\n            weights.masked_fill_(mask.unsqueeze(1).unsqueeze(1), float(\"-inf\"))\n        weights = F.softmax(weights.flatten(2), dim=-1).view_as(weights)\n        weights = self.dropout(weights)\n        return weights\n\n\nclass PostProcessSegm(nn.Module):\n    def __init__(self, threshold=0.5):\n        super().__init__()\n        self.threshold = threshold\n\n    @torch.no_grad()\n    def forward(self, results, outputs, orig_target_sizes, max_target_sizes, return_probs=False, results_mask=None):\n        assert len(orig_target_sizes) == len(max_target_sizes)\n        max_h, max_w = max_target_sizes.max(0)[0].tolist()\n        outputs_masks = outputs[\"pred_masks\"].squeeze(2)\n        outputs_masks = F.interpolate(\n            outputs_masks,\n            size=(max_h, max_w),\n            mode=\"bilinear\",\n            align_corners=False)\n\n        outputs_masks = outputs_masks.sigmoid().cpu()\n        if not return_probs:\n            outputs_masks = outputs_masks > self.threshold\n\n        zip_iter = zip(outputs_masks, max_target_sizes, orig_target_sizes)\n        for i, (cur_mask, t, tt) in enumerate(zip_iter):\n            img_h, img_w = t[0], t[1]\n            masks = cur_mask[:, :img_h, :img_w].unsqueeze(1)\n            masks = F.interpolate(masks.float(), size=tuple(tt.tolist()), mode=\"nearest\")\n\n            if not return_probs:\n                masks = masks.byte()\n\n            if results_mask is not None:\n                masks = masks[results_mask[i]]\n\n            results[i][\"masks\"] = masks\n\n        return results\n\n\nclass PostProcessPanoptic(nn.Module):\n    \"\"\"This class converts the output of the model to the final panoptic result,\n    in the format expected by the coco panoptic API \"\"\"\n\n    def __init__(self, is_thing_map, threshold=0.85):\n        \"\"\"\n        Parameters:\n           is_thing_map: This is a whose keys are the class ids, and the values\n                         a boolean indicating whether the class is  a thing (True)\n                         or a stuff (False) class\n           threshold: confidence threshold: segments with confidence lower than\n                      this will be deleted\n        \"\"\"\n        super().__init__()\n        self.threshold = threshold\n        self.is_thing_map = is_thing_map\n\n    def forward(self, outputs, processed_sizes, target_sizes=None):\n        \"\"\" This function computes the panoptic prediction from the model's predictions.\n        Parameters:\n            outputs: This is a dict coming directly from the model. 
See the model\n                     doc for the content.\n            processed_sizes: This is a list of tuples (or torch tensors) of sizes\n                             of the images that were passed to the model, ie the\n                             size after data augmentation but before batching.\n            target_sizes: This is a list of tuples (or torch tensors) corresponding\n                          to the requested final size of each prediction. If left to\n                          None, it will default to the processed_sizes\n            \"\"\"\n        if target_sizes is None:\n            target_sizes = processed_sizes\n        assert len(processed_sizes) == len(target_sizes)\n        out_logits, raw_masks, raw_boxes = \\\n            outputs[\"pred_logits\"], outputs[\"pred_masks\"], outputs[\"pred_boxes\"]\n        assert len(out_logits) == len(raw_masks) == len(target_sizes)\n        preds = []\n\n        def to_tuple(tup):\n            if isinstance(tup, tuple):\n                return tup\n            return tuple(tup.cpu().tolist())\n\n        for cur_logits, cur_masks, cur_boxes, size, target_size in zip(\n            out_logits, raw_masks, raw_boxes, processed_sizes, target_sizes\n        ):\n            # we filter empty queries and detection below threshold\n            scores, labels = cur_logits.softmax(-1).max(-1)\n            keep = labels.ne(outputs[\"pred_logits\"].shape[-1] - 1) & (scores > self.threshold)\n            cur_scores, cur_classes = cur_logits.softmax(-1).max(-1)\n            cur_scores = cur_scores[keep]\n            cur_classes = cur_classes[keep]\n            cur_masks = cur_masks[keep]\n            cur_masks = interpolate(cur_masks[None], to_tuple(size), mode=\"bilinear\").squeeze(0)\n            cur_boxes = box_ops.box_cxcywh_to_xyxy(cur_boxes[keep])\n\n            h, w = cur_masks.shape[-2:]\n            assert len(cur_boxes) == len(cur_classes)\n\n            # It may be that we have several predicted masks for the same stuff class.\n            # In the following, we track the list of masks ids for each stuff class\n            # (they are merged later on)\n            cur_masks = cur_masks.flatten(1)\n            stuff_equiv_classes = defaultdict(lambda: [])\n            for k, label in enumerate(cur_classes):\n                if not self.is_thing_map[label.item()]:\n                    stuff_equiv_classes[label.item()].append(k)\n\n            def get_ids_area(masks, scores, dedup=False):\n                # This helper function creates the final panoptic segmentation image\n                # It also returns the area of the masks that appears on the image\n\n                m_id = masks.transpose(0, 1).softmax(-1)\n\n                if m_id.shape[-1] == 0:\n                    # We didn't detect any mask :(\n                    m_id = torch.zeros((h, w), dtype=torch.long, device=m_id.device)\n                else:\n                    m_id = m_id.argmax(-1).view(h, w)\n\n                if dedup:\n                    # Merge the masks corresponding to the same stuff class\n                    for equiv in stuff_equiv_classes.values():\n                        if len(equiv) > 1:\n                            for eq_id in equiv:\n                                m_id.masked_fill_(m_id.eq(eq_id), equiv[0])\n\n                final_h, final_w = to_tuple(target_size)\n\n                seg_img = Image.fromarray(id2rgb(m_id.view(h, w).cpu().numpy()))\n                seg_img = seg_img.resize(size=(final_w, final_h), resample=Image.NEAREST)\n\n  
              np_seg_img = (torch.ByteTensor(\n                    torch.ByteStorage.from_buffer(seg_img.tobytes())).view(final_h, final_w, 3).numpy())\n                m_id = torch.from_numpy(rgb2id(np_seg_img))\n\n                area = []\n                for i in range(len(scores)):\n                    area.append(m_id.eq(i).sum().item())\n                return area, seg_img\n\n            area, seg_img = get_ids_area(cur_masks, cur_scores, dedup=True)\n            if cur_classes.numel() > 0:\n                # We now filter out empty masks, as long as some remain\n                while True:\n                    filtered_small = torch.as_tensor([\n                        area[i] <= 4\n                        for i, c in enumerate(cur_classes)], dtype=torch.bool, device=keep.device)\n                    if filtered_small.any().item():\n                        cur_scores = cur_scores[~filtered_small]\n                        cur_classes = cur_classes[~filtered_small]\n                        cur_masks = cur_masks[~filtered_small]\n                        area, seg_img = get_ids_area(cur_masks, cur_scores)\n                    else:\n                        break\n\n            else:\n                cur_classes = torch.ones(1, dtype=torch.long, device=cur_classes.device)\n\n            segments_info = []\n            for i, a in enumerate(area):\n                cat = cur_classes[i].item()\n                segments_info.append({\n                    \"id\": i,\n                    \"isthing\": self.is_thing_map[cat],\n                    \"category_id\": cat,\n                    \"area\": a})\n            del cur_classes\n\n            with io.BytesIO() as out:\n                seg_img.save(out, format=\"PNG\")\n                predictions = {\"png_string\": out.getvalue(), \"segments_info\": segments_info}\n            preds.append(predictions)\n        return preds\n"
  },
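The panoptic post-processing above round-trips the per-pixel mask-id map through a PNG using `id2rgb`/`rgb2id`. As a reference, here is a minimal sketch of that encoding under the COCO panopticapi convention assumed by those helpers (a segment id packed into 24 bits across the R, G, B channels); it is an illustrative re-implementation, not the repository's code.

```python
# Sketch of the id <-> RGB round trip assumed by get_ids_area (panopticapi
# convention): segment ids are packed little-endian into the R, G, B channels.
import numpy as np

def id2rgb(id_map: np.ndarray) -> np.ndarray:
    """Pack integer segment ids into an (H, W, 3) uint8 image."""
    rgb = np.zeros(id_map.shape + (3,), dtype=np.uint8)
    for i in range(3):
        rgb[..., i] = ((id_map >> (8 * i)) & 255).astype(np.uint8)
    return rgb

def rgb2id(rgb: np.ndarray) -> np.ndarray:
    """Inverse of id2rgb: recover integer segment ids from an RGB image."""
    rgb = rgb.astype(np.uint32)
    return rgb[..., 0] + 256 * rgb[..., 1] + 256 * 256 * rgb[..., 2]

ids = np.random.randint(0, 1000, size=(4, 4))
assert (rgb2id(id2rgb(ids)) == ids).all()
```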
  {
    "path": "src/trackformer/models/detr_tracking.py",
    "content": "import math\nimport random\nfrom contextlib import nullcontext\n\nimport torch\nimport torch.nn as nn\n\nfrom ..util import box_ops\nfrom ..util.misc import NestedTensor, get_rank\nfrom .deformable_detr import DeformableDETR\nfrom .detr import DETR\nfrom .matcher import HungarianMatcher\n\n\nclass DETRTrackingBase(nn.Module):\n\n    def __init__(self,\n                 track_query_false_positive_prob: float = 0.0,\n                 track_query_false_negative_prob: float = 0.0,\n                 matcher: HungarianMatcher = None,\n                 backprop_prev_frame=False):\n        self._matcher = matcher\n        self._track_query_false_positive_prob = track_query_false_positive_prob\n        self._track_query_false_negative_prob = track_query_false_negative_prob\n        self._backprop_prev_frame = backprop_prev_frame\n\n        self._tracking = False\n\n    def train(self, mode: bool = True):\n        \"\"\"Sets the module in train mode.\"\"\"\n        self._tracking = False\n        return super().train(mode)\n\n    def tracking(self):\n        \"\"\"Sets the module in tracking mode.\"\"\"\n        self.eval()\n        self._tracking = True\n\n    def add_track_queries_to_targets(self, targets, prev_indices, prev_out, add_false_pos=True):\n        device = prev_out['pred_boxes'].device\n\n        # for i, (target, prev_ind) in enumerate(zip(targets, prev_indices)):\n        min_prev_target_ind = min([len(prev_ind[1]) for prev_ind in prev_indices])\n        num_prev_target_ind = 0\n        if min_prev_target_ind:\n            num_prev_target_ind = torch.randint(0, min_prev_target_ind + 1, (1,)).item()\n\n        num_prev_target_ind_for_fps = 0\n        if num_prev_target_ind:\n            num_prev_target_ind_for_fps = \\\n                torch.randint(int(math.ceil(self._track_query_false_positive_prob * num_prev_target_ind)) + 1, (1,)).item()\n\n        for i, (target, prev_ind) in enumerate(zip(targets, prev_indices)):\n            prev_out_ind, prev_target_ind = prev_ind\n\n            # random subset\n            if self._track_query_false_negative_prob: # and len(prev_target_ind):\n                # random_subset_mask = torch.empty(len(prev_target_ind)).uniform_()\n                # random_subset_mask = random_subset_mask.ge(\n                #     self._track_query_false_negative_prob)\n\n                # random_subset_mask = torch.randperm(len(prev_target_ind))[:torch.randint(0, len(prev_target_ind) + 1, (1,))]\n                random_subset_mask = torch.randperm(len(prev_target_ind))[:num_prev_target_ind]\n\n                # if not len(random_subset_mask):\n                #     target['track_query_hs_embeds'] = torch.zeros(0, self.hidden_dim).float().to(device)\n                #     target['track_queries_placeholder_mask'] = torch.zeros(self.num_queries).bool().to(device)\n                #     target['track_queries_mask'] = torch.zeros(self.num_queries).bool().to(device)\n                #     target['track_queries_fal_pos_mask'] = torch.zeros(self.num_queries).bool().to(device)\n                #     target['track_query_boxes'] = torch.zeros(0, 4).to(device)\n                #     target['track_query_match_ids'] = torch.tensor([]).long().to(device)\n\n                #     continue\n\n                prev_out_ind = prev_out_ind[random_subset_mask]\n                prev_target_ind = prev_target_ind[random_subset_mask]\n\n            # detected prev frame tracks\n            prev_track_ids = target['prev_target']['track_ids'][prev_target_ind]\n\n            # 
match track ids between frames\n            target_ind_match_matrix = prev_track_ids.unsqueeze(dim=1).eq(target['track_ids'])\n            target_ind_matching = target_ind_match_matrix.any(dim=1)\n            target_ind_matched_idx = target_ind_match_matrix.nonzero()[:, 1]\n\n            # current frame track ids detected in the prev frame\n            # track_ids = target['track_ids'][target_ind_matched_idx]\n\n            # index of prev frame detection in current frame box list\n            target['track_query_match_ids'] = target_ind_matched_idx\n\n            # random false positives\n            if add_false_pos:\n                prev_boxes_matched = prev_out['pred_boxes'][i, prev_out_ind[target_ind_matching]]\n\n                not_prev_out_ind = torch.arange(prev_out['pred_boxes'].shape[1])\n                not_prev_out_ind = [\n                    ind.item()\n                    for ind in not_prev_out_ind\n                    if ind not in prev_out_ind]\n\n                random_false_out_ind = []\n\n                prev_target_ind_for_fps = torch.randperm(num_prev_target_ind)[:num_prev_target_ind_for_fps]\n\n                # for j, prev_box_matched in enumerate(prev_boxes_matched):\n                #     if j not in prev_target_ind_for_fps:\n                #         continue\n\n                for j in prev_target_ind_for_fps:\n                    # if random.uniform(0, 1) < self._track_query_false_positive_prob:\n                    prev_boxes_unmatched = prev_out['pred_boxes'][i, not_prev_out_ind]\n\n                    # only cxcy\n                    # box_dists = prev_box_matched[:2].sub(prev_boxes_unmatched[:, :2]).abs()\n                    # box_dists = box_dists.pow(2).sum(dim=-1).sqrt()\n                    # box_weights = 1.0 / box_dists.add(1e-8)\n\n                    # prev_box_ious, _ = box_ops.box_iou(\n                    #     box_ops.box_cxcywh_to_xyxy(prev_box_matched.unsqueeze(dim=0)),\n                    #     box_ops.box_cxcywh_to_xyxy(prev_boxes_unmatched))\n                    # box_weights = prev_box_ious[0]\n\n                    # dist = sqrt( (x2 - x1)**2 + (y2 - y1)**2 )\n\n                    if len(prev_boxes_matched) > j:\n                        prev_box_matched = prev_boxes_matched[j]\n                        box_weights = \\\n                            prev_box_matched.unsqueeze(dim=0)[:, :2] - \\\n                            prev_boxes_unmatched[:, :2]\n                        box_weights = box_weights[:, 0] ** 2 + box_weights[:, 1] ** 2\n                        box_weights = torch.sqrt(box_weights)\n\n                        # if box_weights.gt(0.0).any():\n                        random_false_out_idx = not_prev_out_ind.pop(\n                            torch.multinomial(box_weights.cpu(), 1).item())\n                    else:\n                        random_false_out_idx = not_prev_out_ind.pop(torch.randperm(len(not_prev_out_ind))[0])\n\n                    random_false_out_ind.append(random_false_out_idx)\n\n                prev_out_ind = torch.tensor(prev_out_ind.tolist() + random_false_out_ind).long()\n\n                target_ind_matching = torch.cat([\n                    target_ind_matching,\n                    torch.tensor([False, ] * len(random_false_out_ind)).bool().to(device)\n                ])\n\n            # MSDeformAttn cannot deal with empty inputs therefore we\n            # add single false pos to have at least one track query per sample\n            # 
not_prev_out_ind = torch.tensor([\n            #     ind\n            #     for ind in torch.arange(prev_out['pred_boxes'].shape[1])\n            #     if ind not in prev_out_ind])\n            # false_samples_inds = torch.randperm(not_prev_out_ind.size(0))[:1]\n            # false_samples = not_prev_out_ind[false_samples_inds]\n            # prev_out_ind = torch.cat([prev_out_ind, false_samples])\n            # target_ind_matching = torch.tensor(\n            #     target_ind_matching.tolist() + [False, ]).bool().to(target_ind_matching.device)\n\n            # track query masks\n            track_queries_mask = torch.ones_like(target_ind_matching).bool()\n            track_queries_fal_pos_mask = torch.zeros_like(target_ind_matching).bool()\n            track_queries_fal_pos_mask[~target_ind_matching] = True\n\n            # track_queries_match_mask = torch.ones_like(target_ind_matching).float()\n            # matches indices with 1.0 and not matched -1.0\n            # track_queries_mask[~target_ind_matching] = -1.0\n\n            # set prev frame info\n            target['track_query_hs_embeds'] = prev_out['hs_embed'][i, prev_out_ind]\n            target['track_query_boxes'] = prev_out['pred_boxes'][i, prev_out_ind].detach()\n\n            target['track_queries_mask'] = torch.cat([\n                track_queries_mask,\n                torch.tensor([False, ] * self.num_queries).to(device)\n            ]).bool()\n\n            target['track_queries_fal_pos_mask'] = torch.cat([\n                track_queries_fal_pos_mask,\n                torch.tensor([False, ] * self.num_queries).to(device)\n            ]).bool()\n\n        # add placeholder track queries to allow for batch sizes > 1\n        # max_track_query_hs_embeds = max([len(t['track_query_hs_embeds']) for t in targets])\n        # for i, target in enumerate(targets):\n\n        #     num_add = max_track_query_hs_embeds - len(target['track_query_hs_embeds'])\n\n        #     if not num_add:\n        #         target['track_queries_placeholder_mask'] = torch.zeros_like(target['track_queries_mask']).bool()\n        #         continue\n\n        #     raise NotImplementedError\n\n        #     target['track_query_hs_embeds'] = torch.cat(\n        #         [torch.zeros(num_add, self.hidden_dim).to(device),\n        #          target['track_query_hs_embeds']\n        #     ])\n        #     target['track_query_boxes'] = torch.cat(\n        #         [torch.zeros(num_add, 4).to(device),\n        #          target['track_query_boxes']\n        #     ])\n\n        #     target['track_queries_mask'] = torch.cat([\n        #         torch.tensor([True, ] * num_add).to(device),\n        #         target['track_queries_mask']\n        #     ]).bool()\n\n        #     target['track_queries_fal_pos_mask'] = torch.cat([\n        #         torch.tensor([False, ] * num_add).to(device),\n        #         target['track_queries_fal_pos_mask']\n        #     ]).bool()\n\n        #     target['track_queries_placeholder_mask'] = torch.zeros_like(target['track_queries_mask']).bool()\n        #     target['track_queries_placeholder_mask'][:num_add] = True\n\n    def forward(self, samples: NestedTensor, targets: list = None, prev_features=None):\n        if targets is not None and not self._tracking:\n            prev_targets = [target['prev_target'] for target in targets]\n\n            # if self.training and random.uniform(0, 1) < 0.5:\n            if self.training:\n            # if True:\n                backprop_context = torch.no_grad\n                
if self._backprop_prev_frame:\n                    backprop_context = nullcontext\n\n                with backprop_context():\n                    if 'prev_prev_image' in targets[0]:\n                        for target, prev_target in zip(targets, prev_targets):\n                            prev_target['prev_target'] = target['prev_prev_target']\n\n                        prev_prev_targets = [target['prev_prev_target'] for target in targets]\n\n                        # PREV PREV\n                        prev_prev_out, _, prev_prev_features, _, _ = super().forward([t['prev_prev_image'] for t in targets])\n\n                        prev_prev_outputs_without_aux = {\n                            k: v for k, v in prev_prev_out.items() if 'aux_outputs' not in k}\n                        prev_prev_indices = self._matcher(prev_prev_outputs_without_aux, prev_prev_targets)\n\n                        self.add_track_queries_to_targets(\n                            prev_targets, prev_prev_indices, prev_prev_out, add_false_pos=False)\n\n                        # PREV\n                        prev_out, _, prev_features, _, _ = super().forward(\n                            [t['prev_image'] for t in targets],\n                            prev_targets,\n                            prev_prev_features)\n                    else:\n                        prev_out, _, prev_features, _, _ = super().forward([t['prev_image'] for t in targets])\n\n                    # prev_out = {k: v.detach() for k, v in prev_out.items() if torch.is_tensor(v)}\n\n                    prev_outputs_without_aux = {\n                        k: v for k, v in prev_out.items() if 'aux_outputs' not in k}\n                    prev_indices = self._matcher(prev_outputs_without_aux, prev_targets)\n\n                    self.add_track_queries_to_targets(targets, prev_indices, prev_out)\n            else:\n                # if not training we do not add track queries and evaluate detection performance only.\n                # tracking performance is evaluated by the actual tracking evaluation.\n                for target in targets:\n                    device = target['boxes'].device\n\n                    target['track_query_hs_embeds'] = torch.zeros(0, self.hidden_dim).float().to(device)\n                    # target['track_queries_placeholder_mask'] = torch.zeros(self.num_queries).bool().to(device)\n                    target['track_queries_mask'] = torch.zeros(self.num_queries).bool().to(device)\n                    target['track_queries_fal_pos_mask'] = torch.zeros(self.num_queries).bool().to(device)\n                    target['track_query_boxes'] = torch.zeros(0, 4).to(device)\n                    target['track_query_match_ids'] = torch.tensor([]).long().to(device)\n\n        out, targets, features, memory, hs  = super().forward(samples, targets, prev_features)\n\n        return out, targets, features, memory, hs\n\n\n# TODO: with meta classes\nclass DETRTracking(DETRTrackingBase, DETR):\n    def __init__(self, tracking_kwargs, detr_kwargs):\n        DETR.__init__(self, **detr_kwargs)\n        DETRTrackingBase.__init__(self, **tracking_kwargs)\n\n\nclass DeformableDETRTracking(DETRTrackingBase, DeformableDETR):\n    def __init__(self, tracking_kwargs, detr_kwargs):\n        DeformableDETR.__init__(self, **detr_kwargs)\n        DETRTrackingBase.__init__(self, **tracking_kwargs)\n"
  },
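To make the bookkeeping in `add_track_queries_to_targets` easier to follow, the sketch below shows how the boolean masks line up with the query dimension once track query embeddings are prepended to the learned object queries. All sizes are toy values and the names only stand in for the module attributes (`self.num_queries`, `self.hidden_dim`); this is an illustration, not code from the repository.

```python
# Illustrative sketch: how track_queries_mask / track_queries_fal_pos_mask align
# with the query dimension when track queries are prepended to object queries.
import torch

num_object_queries = 5          # stands in for self.num_queries
hidden_dim = 8                  # stands in for self.hidden_dim
num_track_queries = 3           # detections propagated from the previous frame
num_false_positives = 1         # extra track queries sampled as false positives

track_query_hs_embeds = torch.randn(num_track_queries + num_false_positives, hidden_dim)
object_queries = torch.randn(num_object_queries, hidden_dim)

# decoder input: track queries first, then the ordinary object queries
queries = torch.cat([track_query_hs_embeds, object_queries], dim=0)

track_queries_mask = torch.cat([
    torch.ones(num_track_queries + num_false_positives, dtype=torch.bool),
    torch.zeros(num_object_queries, dtype=torch.bool),
])
track_queries_fal_pos_mask = torch.cat([
    torch.zeros(num_track_queries, dtype=torch.bool),
    torch.ones(num_false_positives, dtype=torch.bool),
    torch.zeros(num_object_queries, dtype=torch.bool),
])

assert queries.shape[0] == track_queries_mask.shape[0] == track_queries_fal_pos_mask.shape[0]
```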
  {
    "path": "src/trackformer/models/matcher.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nModules to compute the matching cost and solve the corresponding LSAP.\n\"\"\"\nimport numpy as np\nimport torch\nfrom scipy.optimize import linear_sum_assignment\nfrom torch import nn\n\nfrom ..util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou\n\n\nclass HungarianMatcher(nn.Module):\n    \"\"\"This class computes an assignment between the targets and the predictions of the network\n\n    For efficiency reasons, the targets don't include the no_object. Because of this, in general,\n    there are more predictions than targets. In this case, we do a 1-to-1 matching of the best\n    predictions, while the others are un-matched (and thus treated as non-objects).\n    \"\"\"\n\n    def __init__(self, cost_class: float = 1, cost_bbox: float = 1, cost_giou: float = 1,\n                 focal_loss: bool = False, focal_alpha: float = 0.25, focal_gamma: float = 2.0):\n        \"\"\"Creates the matcher\n\n        Params:\n            cost_class: This is the relative weight of the classification error in the matching cost\n            cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates\n                       in the matching cost\n            cost_giou: This is the relative weight of the giou loss of the bounding box in the\n                       matching cost\n        \"\"\"\n        super().__init__()\n        self.cost_class = cost_class\n        self.cost_bbox = cost_bbox\n        self.cost_giou = cost_giou\n        self.focal_loss = focal_loss\n        self.focal_alpha = focal_alpha\n        self.focal_gamma = focal_gamma\n        assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, \"all costs cant be 0\"\n\n    @torch.no_grad()\n    def forward(self, outputs, targets):\n        \"\"\" Performs the matching\n\n        Params:\n            outputs: This is a dict that contains at least these entries:\n                 \"pred_logits\": Tensor of dim [batch_size, num_queries, num_classes] with the\n                                classification logits\n                 \"pred_boxes\": Tensor of dim [batch_size, num_queries, 4] with the predicted\n                               box coordinates\n\n            targets: This is a list of targets (len(targets) = batch_size), where each target\n                     is a dict containing:\n                 \"labels\": Tensor of dim [num_target_boxes] (where num_target_boxes is the number\n                           of ground-truth objects in the target) containing the class labels\n                 \"boxes\": Tensor of dim [num_target_boxes, 4] containing the target box coordinates\n\n        Returns:\n            A list of size batch_size, containing tuples of (index_i, index_j) where:\n                - index_i is the indices of the selected predictions (in order)\n                - index_j is the indices of the corresponding selected targets (in order)\n            For each batch element, it holds:\n                len(index_i) = len(index_j) = min(num_queries, num_target_boxes)\n        \"\"\"\n        batch_size, num_queries = outputs[\"pred_logits\"].shape[:2]\n\n        # We flatten to compute the cost matrices in a batch\n        #\n        # [batch_size * num_queries, num_classes]\n        if self.focal_loss:\n            out_prob = outputs[\"pred_logits\"].flatten(0, 1).sigmoid()\n        else:\n            out_prob = outputs[\"pred_logits\"].flatten(0, 1).softmax(-1)\n\n        # [batch_size * 
num_queries, 4]\n        out_bbox = outputs[\"pred_boxes\"].flatten(0, 1)\n\n        # Also concat the target labels and boxes\n        tgt_ids = torch.cat([v[\"labels\"] for v in targets])\n        tgt_bbox = torch.cat([v[\"boxes\"] for v in targets])\n\n        # Compute the classification cost.\n        if self.focal_loss:\n            neg_cost_class = (1 - self.focal_alpha) * (out_prob ** self.focal_gamma) * (-(1 - out_prob + 1e-8).log())\n            pos_cost_class = self.focal_alpha * ((1 - out_prob) ** self.focal_gamma) * (-(out_prob + 1e-8).log())\n            cost_class = pos_cost_class[:, tgt_ids] - neg_cost_class[:, tgt_ids]\n        else:\n            # Contrary to the loss, we don't use the NLL, but approximate it by 1 - proba[target class].\n            # The 1 is a constant that doesn't change the matching, it can be omitted.\n            cost_class = -out_prob[:, tgt_ids]\n\n        # Compute the L1 cost between boxes\n        cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)\n\n        # Compute the giou cost between boxes\n        cost_giou = -generalized_box_iou(\n            box_cxcywh_to_xyxy(out_bbox),\n            box_cxcywh_to_xyxy(tgt_bbox))\n\n        # Final cost matrix\n        cost_matrix = self.cost_bbox * cost_bbox \\\n            + self.cost_class * cost_class \\\n            + self.cost_giou * cost_giou\n        cost_matrix = cost_matrix.view(batch_size, num_queries, -1).cpu()\n\n        sizes = [len(v[\"boxes\"]) for v in targets]\n\n        for i, target in enumerate(targets):\n            if 'track_query_match_ids' not in target:\n                continue\n\n            prop_i = 0\n            for j in range(cost_matrix.shape[1]):\n                # if target['track_queries_fal_pos_mask'][j] or target['track_queries_placeholder_mask'][j]:\n                if target['track_queries_fal_pos_mask'][j]:\n                    # false positive and placeholder track queries should not\n                    # be matched to any target\n                    cost_matrix[i, j] = np.inf\n                elif target['track_queries_mask'][j]:\n                    track_query_id = target['track_query_match_ids'][prop_i]\n                    prop_i += 1\n\n                    cost_matrix[i, j] = np.inf\n                    cost_matrix[i, :, track_query_id + sum(sizes[:i])] = np.inf\n                    cost_matrix[i, j, track_query_id + sum(sizes[:i])] = -1\n\n        indices = [linear_sum_assignment(c[i])\n                   for i, c in enumerate(cost_matrix.split(sizes, -1))]\n\n        return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64))\n                for i, j in indices]\n\n\ndef build_matcher(args):\n    return HungarianMatcher(\n        cost_class=args.set_cost_class,\n        cost_bbox=args.set_cost_bbox,\n        cost_giou=args.set_cost_giou,\n        focal_loss=args.focal_loss,\n        focal_alpha=args.focal_alpha,\n        focal_gamma=args.focal_gamma,)\n"
  },
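The core of `HungarianMatcher.forward` is a `[num_queries, num_targets]` cost matrix solved per image with `scipy.optimize.linear_sum_assignment`. The toy example below reproduces just the classification and L1 box terms with made-up numbers (no focal or giou cost) to show the shape of the inputs and the assignment it returns.

```python
# Toy example of the assignment step: a [num_queries, num_targets] cost matrix
# built from classification and L1 box terms, solved per image with scipy.
import torch
from scipy.optimize import linear_sum_assignment

out_prob = torch.tensor([[0.7, 0.2, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.3, 0.3, 0.4]])          # 3 queries, 3 classes (already softmax'd)
out_bbox = torch.rand(3, 4)                          # cxcywh, normalized
tgt_ids = torch.tensor([1, 0])                       # 2 ground-truth boxes
tgt_bbox = torch.rand(2, 4)

cost_class = -out_prob[:, tgt_ids]                   # 1 - p, with the constant 1 dropped
cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)
cost_matrix = 1.0 * cost_class + 1.0 * cost_bbox     # cost_class / cost_bbox weights set to 1

row_ind, col_ind = linear_sum_assignment(cost_matrix.numpy())
print(list(zip(row_ind.tolist(), col_ind.tolist())))  # e.g. [(0, 1), (1, 0)]
```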
  {
    "path": "src/trackformer/models/ops/.gitignore",
    "content": "build\ndist\n*egg-info\n*.linux*\n"
  },
  {
    "path": "src/trackformer/models/ops/functions/__init__.py",
    "content": "from .ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch, ms_deform_attn_core_pytorch_mot\r\n\r\n"
  },
  {
    "path": "src/trackformer/models/ops/functions/ms_deform_attn_func.py",
    "content": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ import division\r\n\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom torch.autograd import Function\r\nfrom torch.autograd.function import once_differentiable\r\n\r\nimport MultiScaleDeformableAttention as MSDA\r\n\r\n\r\nclass MSDeformAttnFunction(Function):\r\n    @staticmethod\r\n    def forward(ctx, value, value_spatial_shapes, sampling_locations, attention_weights, im2col_step):\r\n        ctx.im2col_step = im2col_step\r\n        output = MSDA.ms_deform_attn_forward(\r\n            value, value_spatial_shapes, sampling_locations, attention_weights, ctx.im2col_step)\r\n        ctx.save_for_backward(value, value_spatial_shapes, sampling_locations, attention_weights)\r\n        return output\r\n\r\n    @staticmethod\r\n    @once_differentiable\r\n    def backward(ctx, grad_output):\r\n        value, value_spatial_shapes, sampling_locations, attention_weights = ctx.saved_tensors\r\n        grad_value, grad_sampling_loc, grad_attn_weight = \\\r\n            MSDA.ms_deform_attn_backward(\r\n                value, value_spatial_shapes, sampling_locations, attention_weights, grad_output, ctx.im2col_step)\r\n\r\n        return grad_value, None, grad_sampling_loc, grad_attn_weight, None\r\n\r\n\r\ndef ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):\r\n    # for debug and test only,\r\n    # need to use cuda version instead\r\n    N_, S_, M_, D_ = value.shape\r\n    _, Lq_, M_, L_, P_, _ = sampling_locations.shape\r\n    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)\r\n    sampling_grids = 2 * sampling_locations - 1\r\n    sampling_value_list = []\r\n    for lid_, (H_, W_) in enumerate(value_spatial_shapes):\r\n        # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_\r\n        value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, D_, H_, W_)\r\n        # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2\r\n        sampling_grid_l_ = sampling_grids[:, :, :, lid_].transpose(1, 2).flatten(0, 1)\r\n        # N_*M_, D_, Lq_, P_\r\n        sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_,\r\n                                          mode='bilinear', padding_mode='zeros', align_corners=False)\r\n        sampling_value_list.append(sampling_value_l_)\r\n    # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_, M_, 1, Lq_, L_*P_)\r\n    attention_weights = attention_weights.transpose(1, 2).reshape(N_*M_, 1, Lq_, L_*P_)\r\n    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights).sum(-1).view(N_, M_*D_, Lq_)\r\n    return output.transpose(1, 2).contiguous()\r\n\r\ndef ms_deform_attn_core_pytorch_mot(query, value, value_spatial_shapes, sampling_locations, key_proj, attention_weights=None):\r\n    # for debug and test only,\r\n    # need to use cuda version instead\r\n    N_, S_, M_, D_ = value.shape\r\n    _, Lq_, M_, L_, P_, _ = sampling_locations.shape\r\n    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)\r\n    sampling_grids = 2 * sampling_locations - 1\r\n    sampling_value_list = []\r\n    for lid_, (H_, W_) in enumerate(value_spatial_shapes):\r\n        # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_\r\n        value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, D_, H_, W_)\r\n        # 
N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2\r\n        sampling_grid_l_ = sampling_grids[:, :, :, lid_].transpose(1, 2).flatten(0, 1)\r\n        # N_*M_, D_, Lq_, P_\r\n        sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_,\r\n                                          mode='bilinear', padding_mode='zeros', align_corners=False)\r\n        sampling_value_list.append(sampling_value_l_)\r\n    # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_, M_, 1, Lq_, L_*P_)\r\n    q = query.transpose(1, 2).reshape(N_*M_, D_, Lq_, 1)\r\n    v = torch.stack(sampling_value_list, dim=-2).flatten(-2) # (N_*M_, D_, Lq_, L_*P_)\r\n    k = key_proj(v.reshape(N_, M_*D_, Lq_, L_*P_).permute(0, 2, 3, 1)).permute(0, 3, 1, 2).reshape(N_*M_, D_, Lq_, L_*P_)\r\n\r\n    sim = (q * k).sum(1).reshape(N_*M_, 1, Lq_, L_*P_)\r\n    attention_weights = F.softmax(sim, -1)\r\n    output = (v * attention_weights).sum(-1).view(N_, M_*D_, Lq_)\r\n\r\n    return output.transpose(1, 2).contiguous()"
  },
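For a quick sanity check of the tensor layout documented in the comments of `ms_deform_attn_core_pytorch`, the snippet below runs the pure-PyTorch reference on random CPU inputs. Note that importing the `functions` package pulls in the compiled `MultiScaleDeformableAttention` extension at module level, so the extension still needs to be built even though this reference path itself does not use it; the shapes are otherwise arbitrary toy values.

```python
# Shape sanity check for the pure-PyTorch reference implementation.
# Layouts follow the comments in ms_deform_attn_core_pytorch.
import torch
from trackformer.models.ops.functions import ms_deform_attn_core_pytorch

N, M, D = 2, 8, 32                       # batch, heads, channels per head
Lq, L, P = 10, 2, 4                      # queries, feature levels, sampling points
spatial_shapes = torch.tensor([[16, 16], [8, 8]], dtype=torch.long)
S = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())

value = torch.rand(N, S, M, D)
sampling_locations = torch.rand(N, Lq, M, L, P, 2)
attention_weights = torch.rand(N, Lq, M, L, P)
# normalize so the weights of each query sum to 1 over levels and points
attention_weights = attention_weights / attention_weights.flatten(-2).sum(-1)[..., None, None]

out = ms_deform_attn_core_pytorch(value, spatial_shapes, sampling_locations, attention_weights)
assert out.shape == (N, Lq, M * D)
```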
  {
    "path": "src/trackformer/models/ops/make.sh",
    "content": "python setup.py build install\r\n"
  },
  {
    "path": "src/trackformer/models/ops/modules/__init__.py",
    "content": "from .ms_deform_attn import MSDeformAttn\r\n"
  },
  {
    "path": "src/trackformer/models/ops/modules/ms_deform_attn.py",
    "content": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ import division\r\n\r\nimport torch\r\nfrom torch import nn\r\nimport torch.nn.functional as F\r\nfrom torch.nn.init import xavier_uniform_, constant_\r\n\r\nfrom ..functions import MSDeformAttnFunction, ms_deform_attn_core_pytorch\r\nfrom ..functions import ms_deform_attn_core_pytorch_mot\r\n\r\n\r\nclass MSDeformAttn(nn.Module):\r\n    def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4, im2col_step=64):\r\n        super().__init__()\r\n        assert d_model % n_heads == 0\r\n\r\n        self.im2col_step = im2col_step\r\n\r\n        self.d_model = d_model\r\n        self.n_levels = n_levels\r\n        self.n_heads = n_heads\r\n        self.n_points = n_points\r\n\r\n        self.sampling_offsets = nn.Linear(d_model, n_heads * n_levels * n_points * 2)\r\n        self.attention_weights = nn.Linear(d_model, n_heads * n_levels * n_points)\r\n        self.value_proj = nn.Linear(d_model, d_model)\r\n        self.output_proj = nn.Linear(d_model, d_model)\r\n\r\n        self._reset_parameters()\r\n\r\n    def _reset_parameters(self):\r\n        constant_(self.sampling_offsets.weight.data, 0.)\r\n        grid_init = torch.tensor([-1, -1, -1, 0, -1, 1, 0, -1, 0, 1, 1, -1, 1, 0, 1, 1], dtype=torch.float32) \\\r\n            .view(self.n_heads, 1, 1, 2).repeat(1, self.n_levels, self.n_points, 1)\r\n        for i in range(self.n_points):\r\n            grid_init[:, :, i, :] *= i + 1\r\n        with torch.no_grad():\r\n            self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1))\r\n        constant_(self.attention_weights.weight.data, 0.)\r\n        constant_(self.attention_weights.bias.data, 0.)\r\n        xavier_uniform_(self.value_proj.weight.data)\r\n        constant_(self.value_proj.bias.data, 0.)\r\n        xavier_uniform_(self.output_proj.weight.data)\r\n        constant_(self.output_proj.bias.data, 0.)\r\n\r\n    def forward(self, query, reference_points, input_flatten, input_spatial_shapes, input_padding_mask=None, query_attn_mask=None):\r\n        \"\"\"\r\n        :param query                       (N, Length_{query}, C)\r\n        :param reference_points            (N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), bottom-right (1, 1), including padding area\r\n                                        or (N, Length_{query}, n_levels, 4), add additional (w, h) to form reference boxes\r\n        :param input_flatten               (N, \\sum_{l=0}^{L-1} H_l \\cdot W_l, C)\r\n        :param input_spatial_shapes        (n_levels, 2), [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})]\r\n        :param input_padding_mask          (N, \\sum_{l=0}^{L-1} H_l \\cdot W_l), True for padding elements, False for non-padding elements\r\n\r\n        :return output                     (N, Length_{query}, C)\r\n        \"\"\"\r\n        N, Len_q, _ = query.shape\r\n        N, Len_in, _ = input_flatten.shape\r\n        assert (input_spatial_shapes[:, 0] * input_spatial_shapes[:, 1]).sum() == Len_in\r\n\r\n        value = self.value_proj(input_flatten)\r\n        if input_padding_mask is not None:\r\n            value = value.masked_fill(input_padding_mask[..., None], float(0))\r\n        value = value.view(N, Len_in, self.n_heads, self.d_model // self.n_heads)\r\n\r\n        sampling_offsets = self.sampling_offsets(query).view(N, Len_q, self.n_heads, self.n_levels, self.n_points, 2)\r\n        attention_weights = 
self.attention_weights(query).view(N, Len_q, self.n_heads, self.n_levels * self.n_points)\r\n        attention_weights = F.softmax(attention_weights, -1).view(N, Len_q, self.n_heads, self.n_levels, self.n_points)\r\n\r\n        if query_attn_mask is not None:\r\n            attention_weights = attention_weights.masked_fill(query_attn_mask[..., None, None, None], float(0))\r\n\r\n        # N, Len_q, n_heads, n_levels, n_points, 2\r\n        if reference_points.shape[-1] == 2:\r\n            sampling_locations = reference_points[:, :, None, :, None, :] \\\r\n                                 + sampling_offsets / input_spatial_shapes[None, None, None, :, None, :]\r\n        elif reference_points.shape[-1] == 4:\r\n            sampling_locations = reference_points[:, :, None, :, None, :2] \\\r\n                                 + sampling_offsets / self.n_points * reference_points[:, :, None, :, None, 2:] * 0.5\r\n        else:\r\n            raise ValueError(\r\n                'Last dim of reference_points must be 2 or 4, but get {} instead.'.format(reference_points.shape[-1]))\r\n        output = MSDeformAttnFunction.apply(\r\n            value, input_spatial_shapes, sampling_locations, attention_weights, self.im2col_step)\r\n        output = self.output_proj(output)\r\n        return output\r\n"
  },
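The 2-d branch of `MSDeformAttn.forward` turns normalized reference points and predicted offsets into sampling locations by dividing the offsets by the spatial shape of the corresponding level. Below is a standalone sketch of just that arithmetic, with toy shapes and names mirroring the `forward()` signature; it is not a replacement for the module itself.

```python
# Standalone sketch of the sampling-location arithmetic used for 2-d reference
# points in MSDeformAttn.forward: reference point + offset / level spatial shape.
import torch

N, Len_q, n_heads, n_levels, n_points = 1, 3, 2, 2, 4
reference_points = torch.rand(N, Len_q, n_levels, 2)               # normalized to [0, 1]
sampling_offsets = torch.randn(N, Len_q, n_heads, n_levels, n_points, 2)
input_spatial_shapes = torch.tensor([[32, 32], [16, 16]], dtype=torch.float)

sampling_locations = reference_points[:, :, None, :, None, :] \
    + sampling_offsets / input_spatial_shapes[None, None, None, :, None, :]

# one (x, y) location per head, level and sampling point for every query
assert sampling_locations.shape == (N, Len_q, n_heads, n_levels, n_points, 2)
```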
  {
    "path": "src/trackformer/models/ops/setup.py",
    "content": "#!/usr/bin/env python\n\nimport os\nimport glob\n\nimport torch\n\nfrom torch.utils.cpp_extension import CUDA_HOME\nfrom torch.utils.cpp_extension import CppExtension\nfrom torch.utils.cpp_extension import CUDAExtension\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nrequirements = [\"torch\", \"torchvision\"]\n\ndef get_extensions():\n    this_dir = os.path.dirname(os.path.abspath(__file__))\n    extensions_dir = os.path.join(this_dir, \"src\")\n\n    main_file = glob.glob(os.path.join(extensions_dir, \"*.cpp\"))\n    source_cpu = glob.glob(os.path.join(extensions_dir, \"cpu\", \"*.cpp\"))\n    source_cuda = glob.glob(os.path.join(extensions_dir, \"cuda\", \"*.cu\"))\n\n    sources = main_file + source_cpu\n    extension = CppExtension\n    extra_compile_args = {\"cxx\": []}\n    define_macros = []\n\n    if torch.cuda.is_available() and CUDA_HOME is not None:\n        extension = CUDAExtension\n        sources += source_cuda\n        define_macros += [(\"WITH_CUDA\", None)]\n        extra_compile_args[\"nvcc\"] = [\n            \"-DCUDA_HAS_FP16=1\",\n            \"-D__CUDA_NO_HALF_OPERATORS__\",\n            \"-D__CUDA_NO_HALF_CONVERSIONS__\",\n            \"-D__CUDA_NO_HALF2_OPERATORS__\",\n        ]\n    else:\n        raise NotImplementedError('Cuda is not available')\n\n    sources = [os.path.join(extensions_dir, s) for s in sources]\n    include_dirs = [extensions_dir]\n    ext_modules = [\n        extension(\n            \"MultiScaleDeformableAttention\",\n            sources,\n            include_dirs=include_dirs,\n            define_macros=define_macros,\n            extra_compile_args=extra_compile_args,\n        )\n    ]\n    return ext_modules\n\nsetup(\n    name=\"MultiScaleDeformableAttention\",\n    version=\"1.0\",\n    author=\"Weijie Su\",\n    url=\"xxx\",\n    description=\"Multi-Scale Deformable Attention Module in Deformable DETR\",\n    packages=find_packages(exclude=(\"configs\", \"tests\",)),\n    # install_requires=requirements,\n    ext_modules=get_extensions(),\n    cmdclass={\"build_ext\": torch.utils.cpp_extension.BuildExtension},\n)\n"
  },
  {
    "path": "src/trackformer/models/ops/src/cpu/ms_deform_attn_cpu.cpp",
    "content": "#include <vector>\r\n\r\n#include <ATen/ATen.h>\r\n#include <ATen/cuda/CUDAContext.h>\r\n\r\n\r\nat::Tensor\r\nms_deform_attn_cpu_forward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const int im2col_step)\r\n{\r\n    AT_ERROR(\"Not implement on cpu\");\r\n}\r\n\r\nstd::vector<at::Tensor>\r\nms_deform_attn_cpu_backward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const at::Tensor &grad_output,\r\n    const int im2col_step)\r\n{\r\n    AT_ERROR(\"Not implement on cpu\");\r\n}\r\n\r\n"
  },
  {
    "path": "src/trackformer/models/ops/src/cpu/ms_deform_attn_cpu.h",
    "content": "#pragma once\r\n#include <torch/extension.h>\r\n\r\nat::Tensor\r\nms_deform_attn_cpu_forward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const int im2col_step);\r\n\r\nstd::vector<at::Tensor>\r\nms_deform_attn_cpu_backward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const at::Tensor &grad_output,\r\n    const int im2col_step);\r\n\r\n\r\n"
  },
  {
    "path": "src/trackformer/models/ops/src/cuda/ms_deform_attn_cuda.cu",
    "content": "#include <vector>\r\n#include \"cuda/ms_deform_im2col_cuda.cuh\"\r\n\r\n#include <ATen/ATen.h>\r\n#include <ATen/cuda/CUDAContext.h>\r\n#include <cuda.h>\r\n#include <cuda_runtime.h>\r\n\r\n// #include <THC/THC.h>\r\n// #include <THC/THCAtomics.cuh>\r\n// #include <THC/THCDeviceUtils.cuh>\r\n\r\n// extern THCState *state;\r\n\r\n// author: Charles Shang\r\n// https://github.com/torch/cunn/blob/master/lib/THCUNN/generic/SpatialConvolutionMM.cu\r\n\r\n\r\nat::Tensor ms_deform_attn_cuda_forward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const int im2col_step)\r\n    // value: N_, S_, M_, D_\r\n    // spatial_shapes: L_, 2\r\n    // sampling_loc: N_, Lq_, M_, L_, P_, 2\r\n{\r\n    AT_ASSERTM(value.is_contiguous(), \"value tensor has to be contiguous\");\r\n\r\n    AT_ASSERTM(value.type().is_cuda(), \"value must be a CUDA tensor\");\r\n    AT_ASSERTM(spatial_shapes.type().is_cuda(), \"spatial_shapes must be a CUDA tensor\");\r\n    AT_ASSERTM(sampling_loc.type().is_cuda(), \"sampling_loc must be a CUDA tensor\");\r\n    AT_ASSERTM(attn_weight.type().is_cuda(), \"attn_weight must be a CUDA tensor\");\r\n\r\n    const int batch = value.size(0);\r\n    const int spatial_size = value.size(1);\r\n    const int num_heads = value.size(2);\r\n    const int channels = value.size(3);\r\n\r\n    const int num_levels = spatial_shapes.size(0);\r\n\r\n    const int num_query = sampling_loc.size(1);\r\n    const int num_point = sampling_loc.size(4);\r\n\r\n    const int im2col_step_ = std::min(batch, im2col_step);\r\n\r\n    AT_ASSERTM(batch % im2col_step_ == 0, \"batch(%d) must divide im2col_step(%d)\", batch, im2col_step_);\r\n    \r\n    auto output = at::empty({batch, num_query, num_heads, channels}, value.options());\r\n\r\n    auto level_start_index = at::zeros({num_levels}, spatial_shapes.options());\r\n    for (int lvl = 1; lvl < num_levels; ++lvl)\r\n    {\r\n        auto shape_prev = spatial_shapes.select(0, lvl-1);\r\n        auto size_prev =  at::mul(shape_prev.select(0, 0), shape_prev.select(0, 1));\r\n        level_start_index.select(0, lvl) = at::add(level_start_index.select(0, lvl-1), size_prev);\r\n    }\r\n\r\n    // define alias for easy use\r\n    const int batch_n = im2col_step_;\r\n    auto output_n = output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});\r\n    auto per_value_size = spatial_size * num_heads * channels;\r\n    auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;\r\n    auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;\r\n    for (int n = 0; n < batch/im2col_step_; ++n)\r\n    {\r\n        auto columns = at::empty({num_levels*num_point, batch_n, num_query, num_heads, channels}, value.options());\r\n        AT_DISPATCH_FLOATING_TYPES(value.type(), \"ms_deform_attn_forward_cuda\", ([&] {\r\n            ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(),\r\n                value.data<scalar_t>() + n * im2col_step_ * per_value_size,\r\n                spatial_shapes.data<int64_t>(),\r\n                level_start_index.data<int64_t>(),\r\n                sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\r\n                attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,\r\n                batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,\r\n                columns.data<scalar_t>());\r\n\r\n      
  }));\r\n        output_n.select(0, n) = at::sum(columns, 0);\r\n    }\r\n\r\n    output = output.view({batch, num_query, num_heads*channels});\r\n\r\n    return output;\r\n}\r\n\r\n\r\nstd::vector<at::Tensor> ms_deform_attn_cuda_backward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const at::Tensor &grad_output,\r\n    const int im2col_step)\r\n{\r\n\r\n    AT_ASSERTM(value.is_contiguous(), \"value tensor has to be contiguous\");\r\n\r\n    AT_ASSERTM(value.type().is_cuda(), \"value must be a CUDA tensor\");\r\n    AT_ASSERTM(spatial_shapes.type().is_cuda(), \"spatial_shapes must be a CUDA tensor\");\r\n    AT_ASSERTM(sampling_loc.type().is_cuda(), \"sampling_loc must be a CUDA tensor\");\r\n    AT_ASSERTM(attn_weight.type().is_cuda(), \"attn_weight must be a CUDA tensor\");\r\n\r\n    const int batch = value.size(0);\r\n    const int spatial_size = value.size(1);\r\n    const int num_heads = value.size(2);\r\n    const int channels = value.size(3);\r\n\r\n    const int num_levels = spatial_shapes.size(0);\r\n\r\n    const int num_query = sampling_loc.size(1);\r\n    const int num_point = sampling_loc.size(4);\r\n\r\n    const int im2col_step_ = std::min(batch, im2col_step);\r\n\r\n    AT_ASSERTM(batch % im2col_step_ == 0, \"batch(%d) must divide im2col_step(%d)\", batch, im2col_step_);\r\n\r\n    auto grad_value = at::zeros_like(value);\r\n    auto grad_sampling_loc = at::zeros_like(sampling_loc);\r\n    auto grad_attn_weight = at::zeros_like(attn_weight);\r\n\r\n    auto level_start_index = at::zeros({num_levels}, spatial_shapes.options());\r\n    for (int lvl = 1; lvl < num_levels; ++lvl)\r\n    {\r\n        auto shape_prev = spatial_shapes.select(0, lvl-1);\r\n        auto size_prev =  at::mul(shape_prev.select(0, 0), shape_prev.select(0, 1));\r\n        level_start_index.select(0, lvl) = at::add(level_start_index.select(0, lvl-1), size_prev);\r\n    }\r\n\r\n    const int batch_n = im2col_step_;\r\n    auto per_value_size = spatial_size * num_heads * channels;\r\n    auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;\r\n    auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;\r\n    auto grad_output_n = grad_output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});\r\n    for (int n = 0; n < batch/im2col_step_; ++n)\r\n    {\r\n        auto grad_output_g = grad_output_n.select(0, n);\r\n        AT_DISPATCH_FLOATING_TYPES(value.type(), \"deform_conv_backward_cuda\", ([&] {\r\n\r\n            // gradient w.r.t. 
sampling location & attention weight\r\n            ms_deformable_col2im_coord_cuda(at::cuda::getCurrentCUDAStream(),\r\n                                            grad_output_g.data<scalar_t>(),\r\n                                            value.data<scalar_t>() + n * im2col_step_ * per_value_size,\r\n                                            spatial_shapes.data<int64_t>(),\r\n                                            level_start_index.data<int64_t>(),\r\n                                            sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\r\n                                            attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,\r\n                                            batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,\r\n                                            grad_sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\r\n                                            grad_attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size);\r\n            // gradient w.r.t. value\r\n            ms_deformable_col2im_cuda(at::cuda::getCurrentCUDAStream(),\r\n                                    grad_output_g.data<scalar_t>(),\r\n                                    spatial_shapes.data<int64_t>(),\r\n                                    level_start_index.data<int64_t>(),\r\n                                    sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,\r\n                                    attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,\r\n                                    batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,\r\n                                    grad_value.data<scalar_t>() +  n * im2col_step_ * per_value_size);\r\n\r\n        }));\r\n    }\r\n\r\n    return {\r\n        grad_value, grad_sampling_loc, grad_attn_weight\r\n    };\r\n}"
  },
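A simple way to validate the CUDA kernel above is to compare `MSDeformAttnFunction` against the pure-PyTorch reference on random inputs, in the spirit of the consistency checks that usually accompany this kind of extension. The sketch below assumes a CUDA device and a successfully built `MultiScaleDeformableAttention` module; the tolerances are illustrative only.

```python
# Hedged consistency check: compiled CUDA forward vs. the PyTorch reference.
# Requires a CUDA device and the built MultiScaleDeformableAttention extension.
import torch
from trackformer.models.ops.functions import (
    MSDeformAttnFunction, ms_deform_attn_core_pytorch)

N, M, D, Lq, L, P = 1, 2, 16, 5, 2, 4
spatial_shapes = torch.tensor([[8, 8], [4, 4]], dtype=torch.long).cuda()
S = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())

value = torch.rand(N, S, M, D).cuda()
sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
attention_weights = torch.rand(N, Lq, M, L, P).cuda()
attention_weights = attention_weights / attention_weights.flatten(-2).sum(-1)[..., None, None]

out_cuda = MSDeformAttnFunction.apply(
    value, spatial_shapes, sampling_locations, attention_weights, 64)
out_ref = ms_deform_attn_core_pytorch(
    value, spatial_shapes, sampling_locations, attention_weights)
print(torch.allclose(out_cuda, out_ref, rtol=1e-2, atol=1e-3))
```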
  {
    "path": "src/trackformer/models/ops/src/cuda/ms_deform_attn_cuda.h",
    "content": "#pragma once\r\n#include <torch/extension.h>\r\n\r\nat::Tensor ms_deform_attn_cuda_forward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const int im2col_step);\r\n\r\nstd::vector<at::Tensor> ms_deform_attn_cuda_backward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const at::Tensor &grad_output,\r\n    const int im2col_step);\r\n\r\n"
  },
  {
    "path": "src/trackformer/models/ops/src/cuda/ms_deform_im2col_cuda.cuh",
    "content": "#include <cstdio>\r\n#include <algorithm>\r\n#include <cstring>\r\n\r\n#include <ATen/ATen.h>\r\n#include <ATen/cuda/CUDAContext.h>\r\n\r\n// #include <THC/THC.h>\r\n#include <THC/THCAtomics.cuh>\r\n// #include <THC/THCDeviceUtils.cuh>\r\n\r\n#define CUDA_KERNEL_LOOP(i, n)                          \\\r\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x;   \\\r\n      i < (n);                                          \\\r\n      i += blockDim.x * gridDim.x)\r\n\r\nconst int CUDA_NUM_THREADS = 1024;\r\ninline int GET_BLOCKS(const int N)\r\n{\r\n  return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS;\r\n}\r\n\r\n\r\ntemplate <typename scalar_t>\r\n__device__ scalar_t ms_deform_attn_im2col_bilinear(const scalar_t *bottom_data, \r\n                                                   const int height, const int width, const int nheads, const int channels, \r\n                                                   scalar_t h, scalar_t w, const int m, const int c)\r\n{\r\n  int h_low = floor(h);\r\n  int w_low = floor(w);\r\n  int h_high = h_low + 1;\r\n  int w_high = w_low + 1;\r\n\r\n  scalar_t lh = h - h_low;\r\n  scalar_t lw = w - w_low;\r\n  scalar_t hh = 1 - lh, hw = 1 - lw;\r\n\r\n  scalar_t v1 = 0;\r\n  if (h_low >= 0 && w_low >= 0)\r\n  {\r\n    int ptr1 = h_low * width * nheads * channels + w_low * nheads * channels + m * channels + c;\r\n    v1 = bottom_data[ptr1];\r\n  }\r\n  scalar_t v2 = 0;\r\n  if (h_low >= 0 && w_high <= width - 1)\r\n  {\r\n    int ptr2 = h_low * width * nheads * channels + w_high * nheads * channels + m * channels + c;\r\n    v2 = bottom_data[ptr2];\r\n  }\r\n  scalar_t v3 = 0;\r\n  if (h_high <= height - 1 && w_low >= 0)\r\n  {\r\n    int ptr3 = h_high * width * nheads * channels + w_low * nheads * channels + m * channels + c;\r\n    v3 = bottom_data[ptr3];\r\n  }\r\n  scalar_t v4 = 0;\r\n  if (h_high <= height - 1 && w_high <= width - 1)\r\n  {\r\n    int ptr4 = h_high * width * nheads * channels + w_high * nheads * channels + m * channels + c;\r\n    v4 = bottom_data[ptr4];\r\n  }\r\n\r\n  scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\r\n\r\n  scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\r\n  return val;\r\n}\r\n\r\ntemplate <typename scalar_t>\r\n__device__ scalar_t ms_deform_attn_get_gradient_weight(scalar_t h, scalar_t w,\r\n                                                       const int gh, const int gw, const int height, const int width)\r\n{\r\n  if (h <= -1 || h >= height || w <= -1 || w >= width)\r\n  {\r\n    //empty\r\n    return 0;\r\n  }\r\n\r\n  int h_low = floor(h);\r\n  int w_low = floor(w);\r\n  int h_high = h_low + 1;\r\n  int w_high = w_low + 1;\r\n\r\n  scalar_t weight = 0;\r\n  if (gh == h_low && gw == w_low)\r\n    weight = (gh + 1 - h) * (gw + 1 - w);\r\n  if (gh == h_low && gw == w_high)\r\n    weight = (gh + 1 - h) * (w + 1 - gw);\r\n  if (gh == h_high && gw == w_low)\r\n    weight = (h + 1 - gh) * (gw + 1 - w);\r\n  if (gh == h_high && gw == w_high)\r\n    weight = (h + 1 - gh) * (w + 1 - gw);\r\n  return weight;\r\n}\r\n\r\ntemplate <typename scalar_t>\r\n__device__ scalar_t ms_deform_attn_get_coordinate_weight(scalar_t h, scalar_t w, const int m, const int c,\r\n                                            const int height, const int width, const int nheads, const int channels, \r\n                                            const scalar_t *bottom_data, const int bp_dir)\r\n{\r\n  if (h <= -1 || h >= height || w <= -1 || w >= width)\r\n  {\r\n    //empty\r\n    return 0;\r\n  }\r\n\r\n  int 
h_low = floor(h);\r\n  int w_low = floor(w);\r\n  int h_high = h_low + 1;\r\n  int w_high = w_low + 1;\r\n\r\n  scalar_t weight = 0;\r\n\r\n  scalar_t v1 = 0;\r\n  if (h_low >= 0 && w_low >= 0)\r\n  {\r\n    int ptr1 = h_low * width * nheads * channels + w_low * nheads * channels + m * channels + c;\r\n    v1 = bottom_data[ptr1];\r\n  }\r\n  scalar_t v2 = 0;\r\n  if (h_low >= 0 && w_high <= width - 1)\r\n  {\r\n    int ptr2 = h_low * width * nheads * channels + w_high * nheads * channels + m * channels + c;\r\n    v2 = bottom_data[ptr2];\r\n  }\r\n  scalar_t v3 = 0;\r\n  if (h_high <= height - 1 && w_low >= 0)\r\n  {\r\n    int ptr3 = h_high * width * nheads * channels + w_low * nheads * channels + m * channels + c;\r\n    v3 = bottom_data[ptr3];\r\n  }\r\n  scalar_t v4 = 0;\r\n  if (h_high <= height - 1 && w_high <= width - 1)\r\n  {\r\n    int ptr4 = h_high * width * nheads * channels + w_high * nheads * channels + m * channels + c;\r\n    v4 = bottom_data[ptr4];\r\n  }\r\n\r\n  if (bp_dir == 1)\r\n  {\r\n    if (h_low >= 0 && w_low >= 0)\r\n      weight += -1 * (w_low + 1 - w) * v1;\r\n    if (h_low >= 0 && w_high <= width - 1)\r\n      weight += -1 * (w - w_low) * v2;\r\n    if (h_high <= height - 1 && w_low >= 0)\r\n      weight += (w_low + 1 - w) * v3;\r\n    if (h_high <= height - 1 && w_high <= width - 1)\r\n      weight += (w - w_low) * v4;\r\n  }\r\n  else if (bp_dir == 0)\r\n  {\r\n    if (h_low >= 0 && w_low >= 0)\r\n      weight += -1 * (h_low + 1 - h) * v1;\r\n    if (h_low >= 0 && w_high <= width - 1)\r\n      weight += (h_low + 1 - h) * v2;\r\n    if (h_high <= height - 1 && w_low >= 0)\r\n      weight += -1 * (h - h_low) * v3;\r\n    if (h_high <= height - 1 && w_high <= width - 1)\r\n      weight += (h - h_low) * v4;\r\n  }\r\n\r\n  return weight;\r\n}\r\n\r\ntemplate <typename scalar_t>\r\n__global__ void ms_deformable_im2col_gpu_kernel(const int n,\r\n                                                const scalar_t *data_value, \r\n                                                const int64_t *data_spatial_shapes,\r\n                                                const int64_t *data_level_start_index, \r\n                                                const scalar_t *data_sampling_loc,\r\n                                                const scalar_t *data_attn_weight,\r\n                                                const int batch_size, \r\n                                                const int spatial_size, \r\n                                                const int num_heads,\r\n                                                const int channels, \r\n                                                const int num_levels,\r\n                                                const int num_query,\r\n                                                const int num_point,\r\n                                                scalar_t *data_col)\r\n{\r\n  // launch batch_size * num_levels * num_query * num_point * channels cores\r\n  // data_value: batch_size, spatial_size, num_heads, channels\r\n  // data_sampling_loc: batch_size, num_query, num_heads, num_levels, num_point, 2\r\n  // data_attn_weight: batch_size, num_query, num_heads, num_levels, num_point\r\n  // data_col: num_levels*num_point, batch_size, num_query, num_heads, channels\r\n  CUDA_KERNEL_LOOP(index, n)\r\n  {\r\n    // index index of output matrix\r\n    const int c_col = index % channels;\r\n    const int p_col = (index / channels) % num_point;\r\n    const int q_col = (index / channels / num_point) % 
num_query;\r\n    const int l_col = (index / channels / num_point / num_query) % num_levels;\r\n    const int b_col = index / channels / num_point / num_query / num_levels;\r\n    const int level_start_id = data_level_start_index[l_col];\r\n    const int spatial_h = data_spatial_shapes[l_col * 2];\r\n    const int spatial_w = data_spatial_shapes[l_col * 2 + 1];\r\n\r\n    // num_heads, channels\r\n    scalar_t *data_col_ptr = data_col \r\n                           + (  c_col \r\n                              + channels * 0\r\n                              + channels * num_heads * q_col\r\n                              + channels * num_heads * num_query * b_col\r\n                              + channels * num_heads * num_query * batch_size * p_col\r\n                              + channels * num_heads * num_query * batch_size * num_point * l_col);\r\n    // spatial_h, spatial_w, num_heads, channels\r\n    const scalar_t *data_value_ptr = data_value \r\n                                   + (b_col * spatial_size * num_heads * channels + level_start_id * num_heads * channels);  \r\n    // num_heads, num_levels, num_point, 2\r\n    const scalar_t *data_sampling_loc_ptr = data_sampling_loc \r\n                                          + (  b_col * num_query * num_heads * num_levels * num_point * 2\r\n                                             + q_col * num_heads * num_levels * num_point * 2);\r\n    // num_heads, num_levels, num_point\r\n    const scalar_t *data_attn_weight_ptr = data_attn_weight \r\n                                         + (  b_col * num_query * num_heads * num_levels * num_point\r\n                                            + q_col * num_heads * num_levels * num_point);\r\n\r\n    for (int i = 0; i < num_heads; ++i)\r\n    {      \r\n      const int data_loc_h_ptr = i * num_levels * num_point * 2 + l_col * num_point * 2 + p_col * 2 + 1;\r\n      const int data_loc_w_ptr = i * num_levels * num_point * 2 + l_col * num_point * 2 + p_col * 2;\r\n      const int data_weight_ptr = i * num_levels * num_point + l_col * num_point + p_col;\r\n      const scalar_t loc_h = data_sampling_loc_ptr[data_loc_h_ptr];\r\n      const scalar_t loc_w = data_sampling_loc_ptr[data_loc_w_ptr];\r\n      const scalar_t weight = data_attn_weight_ptr[data_weight_ptr];\r\n      scalar_t val = static_cast<scalar_t>(0);\r\n      const scalar_t h_im = loc_h * spatial_h - 0.5;\r\n      const scalar_t w_im = loc_w * spatial_w - 0.5;\r\n      if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)\r\n      {\r\n        val = ms_deform_attn_im2col_bilinear(data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, i, c_col);\r\n      }\r\n      *data_col_ptr = val * weight;\r\n      data_col_ptr += channels;\r\n    }\r\n  }\r\n}\r\n\r\ntemplate <typename scalar_t>\r\n__global__ void ms_deformable_col2im_gpu_kernel(const int n,\r\n                                                const scalar_t *data_col,\r\n                                                const int64_t *data_spatial_shapes,\r\n                                                const int64_t *data_level_start_index, \r\n                                                const scalar_t *data_sampling_loc,\r\n                                                const scalar_t *data_attn_weight,\r\n                                                const int batch_size, \r\n                                                const int spatial_size, \r\n                                                const int num_heads,\r\n                 
                               const int channels, \r\n                                                const int num_levels,\r\n                                                const int num_query,\r\n                                                const int num_point,\r\n                                                scalar_t *grad_value)\r\n{\r\n  // launch batch_size * num_levels * num_query * num_point * num_heads * channels cores\r\n  // grad_value: batch_size, spatial_size, num_heads, channels\r\n  // data_sampling_loc: batch_size, num_query, num_heads, num_levels, num_point, 2\r\n  // data_attn_weight: batch_size, num_query, num_heads, num_levels, num_point\r\n  // data_col: batch_size, num_query, num_heads, channels\r\n  CUDA_KERNEL_LOOP(index, n)\r\n  {\r\n    const int c_col = index % channels;\r\n    const int m_col = (index / channels) % num_heads;\r\n    const int p_col = (index / channels / num_heads) % num_point;\r\n    const int q_col = (index / channels / num_heads / num_point) % num_query;\r\n    const int l_col = (index / channels / num_heads / num_point / num_query) % num_levels;\r\n    const int b_col = index / channels / num_heads / num_point / num_query / num_levels;\r\n    const int level_start_id = data_level_start_index[l_col];\r\n    const int spatial_h = data_spatial_shapes[l_col * 2];\r\n    const int spatial_w = data_spatial_shapes[l_col * 2 + 1];\r\n\r\n    const scalar_t col = data_col[  c_col\r\n                                  + channels * m_col\r\n                                  + channels * num_heads * q_col\r\n                                  + channels * num_heads * num_query * b_col];\r\n    int sampling_ptr = b_col * num_query * num_heads * num_levels * num_point\r\n                    + q_col * num_heads * num_levels * num_point\r\n                    + m_col * num_levels * num_point\r\n                    + l_col * num_point\r\n                    + p_col;\r\n    const scalar_t sampling_x = data_sampling_loc[2 * sampling_ptr] * spatial_w - 0.5;\r\n    const scalar_t sampling_y = data_sampling_loc[2 * sampling_ptr + 1] * spatial_h - 0.5;\r\n    const scalar_t attn_weight = data_attn_weight[sampling_ptr];\r\n    const scalar_t cur_top_grad = col * attn_weight;\r\n    const int cur_h = (int)sampling_y;\r\n    const int cur_w = (int)sampling_x;\r\n    for (int dy = -2; dy <= 2; dy++)\r\n    {\r\n      for (int dx = -2; dx <= 2; dx++)\r\n      {\r\n        if (cur_h + dy >= 0 && cur_h + dy < spatial_h &&\r\n            cur_w + dx >= 0 && cur_w + dx < spatial_w &&\r\n            abs(sampling_y - (cur_h + dy)) < 1 &&\r\n            abs(sampling_x - (cur_w + dx)) < 1)\r\n        {\r\n          int cur_bottom_grad_pos = b_col * spatial_size * num_heads * channels \r\n                                  + (level_start_id + (cur_h+dy)*spatial_w + (cur_w+dx)) * num_heads * channels \r\n                                  + m_col * channels\r\n                                  + c_col;\r\n          scalar_t weight = ms_deform_attn_get_gradient_weight(sampling_y, sampling_x, cur_h + dy, cur_w + dx, spatial_h, spatial_w);\r\n          atomicAdd(grad_value + cur_bottom_grad_pos, weight * cur_top_grad);\r\n        }\r\n      }\r\n    }\r\n  }\r\n}\r\n\r\ntemplate <typename scalar_t>\r\n__global__ void ms_deformable_col2im_coord_gpu_kernel(const int n,\r\n                                                      const scalar_t *data_col,   \r\n                                                      const scalar_t *data_value, \r\n                                           
           const int64_t *data_spatial_shapes,\r\n                                                      const int64_t *data_level_start_index, \r\n                                                      const scalar_t *data_sampling_loc,\r\n                                                      const scalar_t *data_attn_weight,\r\n                                                      const int batch_size, \r\n                                                      const int spatial_size, \r\n                                                      const int num_heads,\r\n                                                      const int channels, \r\n                                                      const int num_levels,\r\n                                                      const int num_query,\r\n                                                      const int num_point,\r\n                                                      scalar_t *grad_sampling_loc, scalar_t *grad_attn_weight)\r\n{\r\n  // sampling_loc: batch_size, num_query, num_heads, num_levels, num_point, 2\r\n  // attn_weight:  batch_size, num_query, num_heads, num_levels, num_point\r\n  // column: batch_size, num_query, num_heads, channels\r\n  // value: batch_size, spatial_size, num_heads, channels\r\n  // num_kernels = batch_size * num_query * num_heads * num_levels * num_point * 2\r\n  CUDA_KERNEL_LOOP(index, n)\r\n  {\r\n    scalar_t val = 0, wval = 0;\r\n\r\n    const int loc_c = index % 2;\r\n    const int k = (index / 2) % num_point;\r\n    const int l = (index / 2 / num_point) % num_levels;\r\n    const int m = (index / 2 / num_point / num_levels) % num_heads;\r\n    const int q = (index / 2 / num_point / num_levels / num_heads) % num_query;\r\n    const int b = index / 2 / num_point / num_levels / num_heads / num_query;\r\n    const int level_start_id = data_level_start_index[l];\r\n    const int spatial_h = data_spatial_shapes[l * 2];\r\n    const int spatial_w = data_spatial_shapes[l * 2 + 1];\r\n    \r\n    const scalar_t *data_col_ptr = data_col \r\n                                 +( m * channels\r\n                                  + q * channels * num_heads\r\n                                  + b * channels * num_heads * num_query);\r\n    const scalar_t *data_value_ptr = data_value \r\n                                   + (  0 * channels  \r\n                                      + level_start_id * channels * num_heads\r\n                                      + b * channels * num_heads * spatial_size);\r\n    scalar_t sampling_x = data_sampling_loc[(index / 2) * 2] * spatial_w - 0.5;\r\n    scalar_t sampling_y = data_sampling_loc[(index / 2) * 2 + 1] * spatial_h - 0.5;\r\n    const scalar_t attn_weight = data_attn_weight[index / 2];\r\n\r\n    for (int col_c = 0; col_c < channels; col_c += 1)\r\n    {\r\n      const scalar_t col = data_col_ptr[col_c];\r\n      if (sampling_x <= -1 || sampling_y <= -1 || sampling_x >= spatial_w || sampling_y >= spatial_h)\r\n      {\r\n        sampling_x = sampling_y = -2;\r\n      }\r\n      else\r\n      {\r\n        wval += col * ms_deform_attn_im2col_bilinear(data_value_ptr, spatial_h, spatial_w, num_heads, channels, sampling_y, sampling_x, m, col_c);\r\n      }\r\n      const scalar_t weight = ms_deform_attn_get_coordinate_weight(\r\n          sampling_y, sampling_x, m, col_c,\r\n          spatial_h, spatial_w, num_heads, channels, \r\n          data_value_ptr, loc_c);\r\n      val += weight * col * attn_weight;\r\n    }\r\n    if (loc_c == 0) val *= spatial_w;\r\n    else if 
(loc_c == 1) val *= spatial_h;\r\n    grad_sampling_loc[index] = val;\r\n    if (loc_c % 2 == 0) grad_attn_weight[index / 2] = wval;\r\n  }\r\n}\r\n\r\ntemplate <typename scalar_t>\r\nvoid ms_deformable_im2col_cuda(cudaStream_t stream,\r\n                              const scalar_t* data_value,\r\n                              const int64_t* data_spatial_shapes, \r\n                              const int64_t* data_level_start_index, \r\n                              const scalar_t* data_sampling_loc,\r\n                              const scalar_t* data_attn_weight,\r\n                              const int batch_size,\r\n                              const int spatial_size, \r\n                              const int num_heads, \r\n                              const int channels, \r\n                              const int num_levels, \r\n                              const int num_query,\r\n                              const int num_point,\r\n                              scalar_t* data_col)\r\n{\r\n  // num_axes should be smaller than block size\r\n  const int num_kernels = batch_size * num_levels * num_query * num_point * channels;\r\n  ms_deformable_im2col_gpu_kernel<scalar_t>\r\n      <<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS,\r\n          0, stream>>>(\r\n      num_kernels, data_value, data_spatial_shapes, data_level_start_index, data_sampling_loc, data_attn_weight, \r\n      batch_size, spatial_size, num_heads, channels, num_levels, num_query, num_point, data_col);\r\n  \r\n  cudaError_t err = cudaGetLastError();\r\n  if (err != cudaSuccess)\r\n  {\r\n    printf(\"error in ms_deformable_im2col_cuda: %s\\n\", cudaGetErrorString(err));\r\n  }\r\n\r\n}\r\n\r\ntemplate <typename scalar_t>\r\nvoid ms_deformable_col2im_cuda(cudaStream_t stream,\r\n                              const scalar_t* data_col, \r\n                              const int64_t *data_spatial_shapes,\r\n                              const int64_t *data_level_start_index, \r\n                              const scalar_t *data_sampling_loc,\r\n                              const scalar_t *data_attn_weight,\r\n                              const int batch_size, \r\n                              const int spatial_size, \r\n                              const int num_heads,\r\n                              const int channels, \r\n                              const int num_levels,\r\n                              const int num_query,\r\n                              const int num_point, \r\n                              scalar_t* grad_value)\r\n{\r\n  const int num_kernels = batch_size * num_levels * num_query * num_point * num_heads *  channels;\r\n  ms_deformable_col2im_gpu_kernel<scalar_t>\r\n      <<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS,\r\n          0, stream>>>(\r\n                    num_kernels, \r\n                    data_col, \r\n                    data_spatial_shapes,\r\n                    data_level_start_index, \r\n                    data_sampling_loc,\r\n                    data_attn_weight,\r\n                    batch_size, \r\n                    spatial_size, \r\n                    num_heads,\r\n                    channels, \r\n                    num_levels,\r\n                    num_query,\r\n                    num_point,\r\n                    grad_value);\r\n  cudaError_t err = cudaGetLastError();\r\n  if (err != cudaSuccess)\r\n  {\r\n    printf(\"error in ms_deformable_col2im_cuda: %s\\n\", cudaGetErrorString(err));\r\n  }\r\n\r\n}\r\n\r\ntemplate <typename scalar_t>\r\nvoid 
ms_deformable_col2im_coord_cuda(cudaStream_t stream,\r\n                                    const scalar_t* data_col, \r\n                                    const scalar_t *data_value, \r\n                                    const int64_t *data_spatial_shapes,\r\n                                    const int64_t *data_level_start_index, \r\n                                    const scalar_t *data_sampling_loc,\r\n                                    const scalar_t *data_attn_weight,\r\n                                    const int batch_size, \r\n                                    const int spatial_size, \r\n                                    const int num_heads,\r\n                                    const int channels, \r\n                                    const int num_levels,\r\n                                    const int num_query,\r\n                                    const int num_point,\r\n                                    scalar_t *grad_sampling_loc, scalar_t *grad_attn_weight)\r\n{\r\n  // data_sampling_loc: batch_size, num_query, num_heads, num_levels, num_point, 2\r\n  // data_attn_weight: batch_size, num_query, num_heads, num_levels, num_point\r\n  const int num_kernels = batch_size * num_query * num_heads * num_levels * num_point * 2;\r\n  ms_deformable_col2im_coord_gpu_kernel<scalar_t>\r\n      <<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS,\r\n        0, stream>>>(num_kernels, \r\n                    data_col,\r\n                    data_value, \r\n                    data_spatial_shapes,\r\n                    data_level_start_index, \r\n                    data_sampling_loc,\r\n                    data_attn_weight,\r\n                    batch_size, \r\n                    spatial_size, \r\n                    num_heads,\r\n                    channels, \r\n                    num_levels,\r\n                    num_query,\r\n                    num_point,\r\n                    grad_sampling_loc, grad_attn_weight);\r\n  cudaError_t err = cudaGetLastError();\r\n  if (err != cudaSuccess)\r\n  {\r\n    printf(\"error in ms_deformable_col2im_coord_cuda: %s\\n\", cudaGetErrorString(err));\r\n  }\r\n}"
  },
  {
    "path": "src/trackformer/models/ops/src/ms_deform_attn.h",
    "content": "#pragma once\r\n\r\n#include \"cpu/ms_deform_attn_cpu.h\"\r\n\r\n#ifdef WITH_CUDA\r\n#include \"cuda/ms_deform_attn_cuda.h\"\r\n#endif\r\n\r\n\r\nat::Tensor\r\nms_deform_attn_forward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const int im2col_step)\r\n{\r\n    if (value.type().is_cuda())\r\n    {\r\n#ifdef WITH_CUDA\r\n        return ms_deform_attn_cuda_forward(\r\n            value, spatial_shapes, sampling_loc, attn_weight, im2col_step);\r\n#else\r\n        AT_ERROR(\"Not compiled with GPU support\");\r\n#endif\r\n    }\r\n    AT_ERROR(\"Not implemented on the CPU\");\r\n}\r\n\r\nstd::vector<at::Tensor>\r\nms_deform_attn_backward(\r\n    const at::Tensor &value, \r\n    const at::Tensor &spatial_shapes,\r\n    const at::Tensor &sampling_loc,\r\n    const at::Tensor &attn_weight,\r\n    const at::Tensor &grad_output,\r\n    const int im2col_step)\r\n{\r\n    if (value.type().is_cuda())\r\n    {\r\n#ifdef WITH_CUDA\r\n        return ms_deform_attn_cuda_backward(\r\n            value, spatial_shapes, sampling_loc, attn_weight, grad_output, im2col_step);\r\n#else\r\n        AT_ERROR(\"Not compiled with GPU support\");\r\n#endif\r\n    }\r\n    AT_ERROR(\"Not implemented on the CPU\");\r\n}\r\n\r\n"
  },
  {
    "path": "src/trackformer/models/ops/src/vision.cpp",
    "content": "\r\n#include \"ms_deform_attn.h\"\r\n\r\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\r\n  m.def(\"ms_deform_attn_forward\", &ms_deform_attn_forward, \"ms_deform_attn_forward\");\r\n  m.def(\"ms_deform_attn_backward\", &ms_deform_attn_backward, \"ms_deform_attn_backward\");\r\n}\r\n"
  },
  {
    "path": "src/trackformer/models/ops/test.py",
    "content": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ import division\r\n\r\nimport time\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.autograd import gradcheck\r\n\r\nfrom functions.ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch\r\n\r\n\r\nN, M, D = 2, 2, 4\r\nLq, L, P = 3, 3, 2\r\nshapes = torch.as_tensor([(8, 8), (4, 4), (2, 2)], dtype=torch.long).cuda()\r\nS = sum([(H*W).item() for H, W in shapes])\r\n\r\n\r\ntorch.manual_seed(3)\r\n\r\n\r\ndef check_forward_equal_with_pytorch():\r\n    value = torch.rand(N, S, M, D).cuda() * 0.01\r\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\r\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\r\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\r\n    im2col_step = 2\r\n    output_pytorch = ms_deform_attn_core_pytorch(value, shapes, sampling_locations, attention_weights)\r\n    output_cuda = MSDeformAttnFunction.apply(value, shapes, sampling_locations, attention_weights, im2col_step)\r\n    fwdok = torch.allclose(output_cuda, output_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (output_cuda - output_pytorch).abs().max()\r\n    max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()\r\n\r\n    print(f'* {fwdok} check_forward_equal_with_pytorch: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')\r\n\r\n\r\ndef check_backward_equal_with_pytorch():\r\n    value = torch.rand(N, S, M, D).cuda() * 0.01\r\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\r\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\r\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\r\n    im2col_step = 2\r\n    value.requires_grad = True\r\n    sampling_locations.requires_grad = True\r\n    attention_weights.requires_grad = True\r\n    output_pytorch = ms_deform_attn_core_pytorch(value, shapes, sampling_locations, attention_weights)\r\n    output_cuda = MSDeformAttnFunction.apply(value, shapes, sampling_locations, attention_weights, im2col_step)\r\n    loss_pytorch = output_pytorch.abs().sum()\r\n    loss_cuda = output_cuda.abs().sum()\r\n\r\n    grad_value_pytorch = torch.autograd.grad(loss_pytorch, value, retain_graph=True)[0]\r\n    grad_value_cuda = torch.autograd.grad(loss_cuda, value, retain_graph=True)[0]\r\n    bwdok = torch.allclose(grad_value_cuda, grad_value_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (grad_value_cuda - grad_value_pytorch).abs().max()\r\n    zero_grad_mask = grad_value_pytorch == 0\r\n    max_rel_err = ((grad_value_cuda - grad_value_pytorch).abs() / grad_value_pytorch.abs())[~zero_grad_mask].max()\r\n    if zero_grad_mask.sum() == 0:\r\n        max_abs_err_0 = 0\r\n    else:\r\n        max_abs_err_0 = (grad_value_cuda - grad_value_pytorch).abs()[zero_grad_mask].max()\r\n    print(f'* {bwdok} check_backward_equal_with_pytorch - input1: '\r\n          f'max_abs_err {max_abs_err:.2e} '\r\n          f'max_rel_err {max_rel_err:.2e} '\r\n          f'max_abs_err_0 {max_abs_err_0:.2e}')\r\n\r\n    grad_sampling_loc_pytorch = torch.autograd.grad(loss_pytorch, sampling_locations, retain_graph=True)[0]\r\n    grad_sampling_loc_cuda = torch.autograd.grad(loss_cuda, sampling_locations, retain_graph=True)[0]\r\n    bwdok = torch.allclose(grad_sampling_loc_cuda, grad_sampling_loc_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (grad_sampling_loc_cuda - 
grad_sampling_loc_pytorch).abs().max()\r\n    zero_grad_mask = grad_sampling_loc_pytorch == 0\r\n    max_rel_err = ((grad_sampling_loc_cuda - grad_sampling_loc_pytorch).abs() / grad_sampling_loc_pytorch.abs())[~zero_grad_mask].max()\r\n    if zero_grad_mask.sum() == 0:\r\n        max_abs_err_0 = 0\r\n    else:\r\n        max_abs_err_0 = (grad_sampling_loc_cuda - grad_sampling_loc_pytorch).abs()[zero_grad_mask].max()\r\n    print(f'* {bwdok} check_backward_equal_with_pytorch - input2: '\r\n          f'max_abs_err {max_abs_err:.2e} '\r\n          f'max_rel_err {max_rel_err:.2e} '\r\n          f'max_abs_err_0 {max_abs_err_0:.2e}')\r\n\r\n    grad_attn_weight_pytorch = torch.autograd.grad(loss_pytorch, attention_weights, retain_graph=True)[0]\r\n    grad_attn_weight_cuda = torch.autograd.grad(loss_cuda, attention_weights, retain_graph=True)[0]\r\n    bwdok = torch.allclose(grad_attn_weight_cuda, grad_attn_weight_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (grad_attn_weight_cuda - grad_attn_weight_pytorch).abs().max()\r\n    zero_grad_mask = grad_attn_weight_pytorch == 0\r\n    max_rel_err = ((grad_attn_weight_cuda - grad_attn_weight_pytorch).abs() / grad_attn_weight_pytorch.abs())[~zero_grad_mask].max()\r\n    if zero_grad_mask.sum() == 0:\r\n        max_abs_err_0 = 0\r\n    else:\r\n        max_abs_err_0 = (grad_attn_weight_cuda - grad_attn_weight_pytorch).abs()[zero_grad_mask].max()\r\n    print(f'* {bwdok} check_backward_equal_with_pytorch - input3: '\r\n          f'max_abs_err {max_abs_err:.2e} '\r\n          f'max_rel_err {max_rel_err:.2e} '\r\n          f'max_abs_err_0 {max_abs_err_0:.2e}')\r\n\r\n\r\ndef check_gradient_ms_deform_attn(\r\n        use_pytorch=False,\r\n        grad_value=True, grad_sampling_loc=True, grad_attn_weight=True):\r\n\r\n    value = torch.rand(N, S, M, D).cuda() * 0.01\r\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\r\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\r\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\r\n    im2col_step = 2\r\n    if use_pytorch:\r\n        func = ms_deform_attn_core_pytorch\r\n    else:\r\n        func = MSDeformAttnFunction.apply\r\n\r\n    value.requires_grad = grad_value\r\n    sampling_locations.requires_grad = grad_sampling_loc\r\n    attention_weights.requires_grad = grad_attn_weight\r\n\r\n    eps = 1e-3 if not grad_sampling_loc else 2e-4\r\n    if use_pytorch:\r\n        gradok = gradcheck(func, (value, shapes, sampling_locations, attention_weights),\r\n                           eps=eps, atol=1e-3, rtol=1e-2, raise_exception=True)\r\n    else:\r\n        gradok = gradcheck(func, (value, shapes, sampling_locations, attention_weights, im2col_step),\r\n                           eps=eps, atol=1e-3, rtol=1e-2, raise_exception=True)\r\n\r\n    print(f'* {gradok} '\r\n          f'check_gradient_ms_deform_attn('\r\n          f'{use_pytorch}, {grad_value}, {grad_sampling_loc}, {grad_attn_weight})')\r\n\r\n\r\nif __name__ == '__main__':\r\n    print('checking forward')\r\n    check_forward_equal_with_pytorch()\r\n\r\n    print('checking backward')\r\n    check_backward_equal_with_pytorch()\r\n\r\n    print('checking gradient of pytorch version')\r\n    check_gradient_ms_deform_attn(True, True, False, False)\r\n    check_gradient_ms_deform_attn(True, False, True, False)\r\n    check_gradient_ms_deform_attn(True, False, False, True)\r\n    check_gradient_ms_deform_attn(True, True, True, True)\r\n\r\n    print('checking gradient of cuda version')\r\n    
check_gradient_ms_deform_attn(False, True, False, False)\r\n    check_gradient_ms_deform_attn(False, False, True, False)\r\n    check_gradient_ms_deform_attn(False, False, False, True)\r\n    check_gradient_ms_deform_attn(False, True, True, True)\r\n"
  },
  {
    "path": "src/trackformer/models/ops/test_double_precision.py",
    "content": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ import division\r\n\r\nimport time\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.autograd import gradcheck\r\n\r\nfrom functions.ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch\r\n\r\n\r\nN, M, D = 2, 2, 4\r\nLq, L, P = 3, 3, 2\r\nshapes = torch.as_tensor([(12, 8), (6, 4), (3, 2)], dtype=torch.long).cuda()\r\nS = sum([(H*W).item() for H, W in shapes])\r\n\r\ntorch.manual_seed(3)\r\n\r\n@torch.no_grad()\r\ndef check_forward_equal_with_pytorch():\r\n    value = torch.rand(N, S, M, D).cuda() * 0.01\r\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\r\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\r\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\r\n    im2col_step = 2\r\n    output_pytorch = ms_deform_attn_core_pytorch(value.double(), shapes, sampling_locations.double(), attention_weights.double()).detach().cpu()\r\n    output_cuda = MSDeformAttnFunction.apply(value.double(), shapes, sampling_locations.double(), attention_weights.double(), im2col_step).detach().cpu()\r\n    fwdok = torch.allclose(output_cuda, output_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (output_cuda - output_pytorch).abs().max()\r\n    max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()\r\n\r\n    print(f'* {fwdok} check_forward_equal_with_pytorch: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')\r\n\r\n\r\ndef check_backward_equal_with_pytorch():\r\n    value = torch.rand(N, S, M, D).cuda() * 0.01\r\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\r\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\r\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\r\n    im2col_step = 2\r\n    value.requires_grad = True\r\n    sampling_locations.requires_grad = True\r\n    attention_weights.requires_grad = True\r\n    output_pytorch = ms_deform_attn_core_pytorch(value.double(), shapes, sampling_locations.double(), attention_weights.double())\r\n    output_cuda = MSDeformAttnFunction.apply(value.double(), shapes, sampling_locations.double(), attention_weights.double(), im2col_step)\r\n    loss_pytorch = output_pytorch.abs().sum()\r\n    loss_cuda = output_cuda.abs().sum()\r\n\r\n    grad_value_pytorch = torch.autograd.grad(loss_pytorch, value, retain_graph=True)[0].detach().cpu()\r\n    grad_value_cuda = torch.autograd.grad(loss_cuda, value, retain_graph=True)[0].detach().cpu()\r\n    bwdok = torch.allclose(grad_value_cuda, grad_value_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (grad_value_cuda - grad_value_pytorch).abs().max()\r\n    zero_grad_mask = grad_value_pytorch == 0\r\n    max_rel_err = ((grad_value_cuda - grad_value_pytorch).abs() / grad_value_pytorch.abs())[~zero_grad_mask].max()\r\n    if zero_grad_mask.sum() == 0:\r\n        max_abs_err_0 = 0\r\n    else:\r\n        max_abs_err_0 = (grad_value_cuda - grad_value_pytorch).abs()[zero_grad_mask].max()\r\n    print(f'* {bwdok} check_backward_equal_with_pytorch - input1: '\r\n          f'max_abs_err {max_abs_err:.2e} '\r\n          f'max_rel_err {max_rel_err:.2e} '\r\n          f'max_abs_err_0 {max_abs_err_0:.2e}')\r\n\r\n    grad_sampling_loc_pytorch = torch.autograd.grad(loss_pytorch, sampling_locations, retain_graph=True)[0].detach().cpu()\r\n    grad_sampling_loc_cuda = torch.autograd.grad(loss_cuda, 
sampling_locations, retain_graph=True)[0].detach().cpu()\r\n    bwdok = torch.allclose(grad_sampling_loc_cuda, grad_sampling_loc_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (grad_sampling_loc_cuda - grad_sampling_loc_pytorch).abs().max()\r\n    zero_grad_mask = grad_sampling_loc_pytorch == 0\r\n    max_rel_err = ((grad_sampling_loc_cuda - grad_sampling_loc_pytorch).abs() / grad_sampling_loc_pytorch.abs())[~zero_grad_mask].max()\r\n    if zero_grad_mask.sum() == 0:\r\n        max_abs_err_0 = 0\r\n    else:\r\n        max_abs_err_0 = (grad_sampling_loc_cuda - grad_sampling_loc_pytorch).abs()[zero_grad_mask].max()\r\n    print(f'* {bwdok} check_backward_equal_with_pytorch - input2: '\r\n          f'max_abs_err {max_abs_err:.2e} '\r\n          f'max_rel_err {max_rel_err:.2e} '\r\n          f'max_abs_err_0 {max_abs_err_0:.2e}')\r\n\r\n    grad_attn_weight_pytorch = torch.autograd.grad(loss_pytorch, attention_weights, retain_graph=True)[0].detach().cpu()\r\n    grad_attn_weight_cuda = torch.autograd.grad(loss_cuda, attention_weights, retain_graph=True)[0].detach().cpu()\r\n    bwdok = torch.allclose(grad_attn_weight_cuda, grad_attn_weight_pytorch, rtol=1e-2, atol=1e-3)\r\n    max_abs_err = (grad_attn_weight_cuda - grad_attn_weight_pytorch).abs().max()\r\n    zero_grad_mask = grad_attn_weight_pytorch == 0\r\n    max_rel_err = ((grad_attn_weight_cuda - grad_attn_weight_pytorch).abs() / grad_attn_weight_pytorch.abs())[~zero_grad_mask].max()\r\n    if zero_grad_mask.sum() == 0:\r\n        max_abs_err_0 = 0\r\n    else:\r\n        max_abs_err_0 = (grad_attn_weight_cuda - grad_attn_weight_pytorch).abs()[zero_grad_mask].max()\r\n    print(f'* {bwdok} check_backward_equal_with_pytorch - input3: '\r\n          f'max_abs_err {max_abs_err:.2e} '\r\n          f'max_rel_err {max_rel_err:.2e} '\r\n          f'max_abs_err_0 {max_abs_err_0:.2e}')\r\n\r\n\r\ndef check_gradient_ms_deform_attn(\r\n        use_pytorch=False,\r\n        grad_value=True, grad_sampling_loc=True, grad_attn_weight=True):\r\n\r\n    value = torch.rand(N, S, M, D).cuda() * 0.01\r\n    sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()\r\n    attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5\r\n    attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)\r\n    im2col_step = 2\r\n    if use_pytorch:\r\n        func = ms_deform_attn_core_pytorch\r\n    else:\r\n        func = MSDeformAttnFunction.apply\r\n\r\n    value.requires_grad = grad_value\r\n    sampling_locations.requires_grad = grad_sampling_loc\r\n    attention_weights.requires_grad = grad_attn_weight\r\n\r\n    eps = 1e-3 if not grad_sampling_loc else 2e-4\r\n    if use_pytorch:\r\n        gradok = gradcheck(func, (value.double(), shapes, sampling_locations.double(), attention_weights.double()))\r\n    else:\r\n        gradok = gradcheck(func, (value.double(), shapes, sampling_locations.double(), attention_weights.double(), im2col_step))\r\n\r\n    print(f'* {gradok} '\r\n          f'check_gradient_ms_deform_attn('\r\n          f'{use_pytorch}, {grad_value}, {grad_sampling_loc}, {grad_attn_weight})')\r\n\r\n\r\nif __name__ == '__main__':\r\n    print('checking forward')\r\n    check_forward_equal_with_pytorch()\r\n\r\n    print('checking backward')\r\n    check_backward_equal_with_pytorch()\r\n\r\n    print('checking gradient of pytorch version')\r\n    check_gradient_ms_deform_attn(True, True, False, False)\r\n    check_gradient_ms_deform_attn(True, False, True, False)\r\n    check_gradient_ms_deform_attn(True, False, False, 
True)\r\n    check_gradient_ms_deform_attn(True, True, True, True)\r\n\r\n    print('checking gradient of cuda version')\r\n    check_gradient_ms_deform_attn(False, True, False, False)\r\n    check_gradient_ms_deform_attn(False, False, True, False)\r\n    check_gradient_ms_deform_attn(False, False, False, True)\r\n    check_gradient_ms_deform_attn(False, True, True, True)\r\n"
  },
  {
    "path": "src/trackformer/models/position_encoding.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nVarious positional encodings for the transformer.\n\"\"\"\nimport math\nimport torch\nfrom torch import nn\n\nfrom ..util.misc import NestedTensor\n\n\nclass PositionEmbeddingSine3D(nn.Module):\n    \"\"\"\n    This is a more standard version of the position embedding, very similar to the one\n    used by the Attention is all you need paper, generalized to work on images.\n    \"\"\"\n    # def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):\n    def __init__(self, num_pos_feats=64, num_frames=2, temperature=10000, normalize=False, scale=None):\n        super().__init__()\n        self.num_pos_feats = num_pos_feats\n        self.temperature = temperature\n        self.normalize = normalize\n        self.frames = num_frames\n\n        if scale is not None and normalize is False:\n            raise ValueError(\"normalize should be True if scale is passed\")\n        if scale is None:\n            scale = 2 * math.pi\n        self.scale = scale\n\n    def forward(self, tensor_list: NestedTensor):\n        x = tensor_list.tensors\n        mask = tensor_list.mask\n        n, h, w = mask.shape\n        # assert n == 1\n        # mask = mask.reshape(1, 1, h, w)\n        mask = mask.view(n, 1, h, w)\n        mask = mask.expand(n, self.frames, h, w)\n\n        assert mask is not None\n        not_mask = ~mask\n        # y_embed = not_mask.cumsum(1, dtype=torch.float32)\n        # x_embed = not_mask.cumsum(2, dtype=torch.float32)\n\n        z_embed = not_mask.cumsum(1, dtype=torch.float32)\n        y_embed = not_mask.cumsum(2, dtype=torch.float32)\n        x_embed = not_mask.cumsum(3, dtype=torch.float32)\n\n        if self.normalize:\n            eps = 1e-6\n            # y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale\n            # x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale\n\n            z_embed = z_embed / (z_embed[:, -1:, :, :] + eps) * self.scale\n            y_embed = y_embed / (y_embed[:, :, -1:, :] + eps) * self.scale\n            x_embed = x_embed / (x_embed[:, :, :, -1:] + eps) * self.scale\n\n        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)\n        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)\n\n        # pos_x = x_embed[:, :, :, None] / dim_t\n        # pos_y = y_embed[:, :, :, None] / dim_t\n        # pos_x = torch.stack((\n        #     pos_x[:, :, :, 0::2].sin(),\n        #     pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)\n        # pos_y = torch.stack((\n        #     pos_y[:, :, :, 0::2].sin(),\n        #     pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)\n        # pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)\n\n        pos_x = x_embed[:, :, :, :, None] / dim_t\n        pos_y = y_embed[:, :, :, :, None] / dim_t\n        pos_z = z_embed[:, :, :, :, None] / dim_t\n        pos_x = torch.stack((pos_x[:, :, :, :, 0::2].sin(), pos_x[:, :, :, :, 1::2].cos()), dim=5).flatten(4)\n        pos_y = torch.stack((pos_y[:, :, :, :, 0::2].sin(), pos_y[:, :, :, :, 1::2].cos()), dim=5).flatten(4)\n        pos_z = torch.stack((pos_z[:, :, :, :, 0::2].sin(), pos_z[:, :, :, :, 1::2].cos()), dim=5).flatten(4)\n        # pos_w = torch.zeros_like(pos_z)\n        # pos = torch.cat((pos_w, pos_z, pos_y, pos_x), dim=4).permute(0, 1, 4, 2, 3)\n        pos = torch.cat((pos_z, pos_y, pos_x), dim=4).permute(0, 1, 4, 2, 3)\n\n        return pos\n\n\nclass 
PositionEmbeddingSine(nn.Module):\n    \"\"\"\n    This is a more standard version of the position embedding, very similar to the one\n    used by the Attention is all you need paper, generalized to work on images.\n    \"\"\"\n    def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):\n        super().__init__()\n        self.num_pos_feats = num_pos_feats\n        self.temperature = temperature\n        self.normalize = normalize\n        if scale is not None and normalize is False:\n            raise ValueError(\"normalize should be True if scale is passed\")\n        if scale is None:\n            scale = 2 * math.pi\n        self.scale = scale\n\n    def forward(self, tensor_list: NestedTensor):\n        x = tensor_list.tensors\n        mask = tensor_list.mask\n        assert mask is not None\n        not_mask = ~mask\n        y_embed = not_mask.cumsum(1, dtype=torch.float32)\n        x_embed = not_mask.cumsum(2, dtype=torch.float32)\n        if self.normalize:\n            eps = 1e-6\n            y_embed = (y_embed - 0.5) / (y_embed[:, -1:, :] + eps) * self.scale\n            x_embed = (x_embed - 0.5) / (x_embed[:, :, -1:] + eps) * self.scale\n\n        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)\n        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)\n\n        pos_x = x_embed[:, :, :, None] / dim_t\n        pos_y = y_embed[:, :, :, None] / dim_t\n        pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)\n        pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)\n        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)\n        return pos\n\n\nclass PositionEmbeddingLearned(nn.Module):\n    \"\"\"\n    Absolute pos embedding, learned.\n    \"\"\"\n    def __init__(self, num_pos_feats=256):\n        super().__init__()\n        self.row_embed = nn.Embedding(50, num_pos_feats)\n        self.col_embed = nn.Embedding(50, num_pos_feats)\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        nn.init.uniform_(self.row_embed.weight)\n        nn.init.uniform_(self.col_embed.weight)\n\n    def forward(self, tensor_list: NestedTensor):\n        x = tensor_list.tensors\n        h, w = x.shape[-2:]\n        i = torch.arange(w, device=x.device)\n        j = torch.arange(h, device=x.device)\n        x_emb = self.col_embed(i)\n        y_emb = self.row_embed(j)\n        pos = torch.cat([\n            x_emb.unsqueeze(0).repeat(h, 1, 1),\n            y_emb.unsqueeze(1).repeat(1, w, 1),\n        ], dim=-1).permute(2, 0, 1).unsqueeze(0).repeat(x.shape[0], 1, 1, 1)\n        return pos\n\n\ndef build_position_encoding(args):\n    # n_steps = args.hidden_dim // 2\n    # n_steps = args.hidden_dim // 4\n    if args.multi_frame_attention and args.multi_frame_encoding:\n        n_steps = args.hidden_dim // 3\n        sine_emedding_func = PositionEmbeddingSine3D\n    else:\n        n_steps = args.hidden_dim // 2\n        sine_emedding_func = PositionEmbeddingSine\n\n    if args.position_embedding in ('v2', 'sine'):\n        # TODO find a better way of exposing other arguments\n        position_embedding = sine_emedding_func(n_steps, normalize=True)\n    elif args.position_embedding in ('v3', 'learned'):\n        position_embedding = PositionEmbeddingLearned(n_steps)\n    else:\n        raise ValueError(f\"not supported {args.position_embedding}\")\n\n    return position_embedding\n"
  },
  {
    "path": "src/trackformer/models/tracker.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nTracker which achieves MOT with the provided object detector.\n\"\"\"\nfrom collections import deque\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom scipy.optimize import linear_sum_assignment\nfrom torchvision.ops.boxes import clip_boxes_to_image, nms, box_iou\n\nfrom ..util.box_ops import box_xyxy_to_cxcywh\n\n\nclass Tracker:\n    \"\"\"The main tracking file, here is where magic happens.\"\"\"\n\n    def __init__(self, obj_detector, obj_detector_post, tracker_cfg,\n                 generate_attention_maps, logger=None, verbose=False):\n        self.obj_detector = obj_detector\n        self.obj_detector_post = obj_detector_post\n        self.detection_obj_score_thresh = tracker_cfg['detection_obj_score_thresh']\n        self.track_obj_score_thresh = tracker_cfg['track_obj_score_thresh']\n        self.detection_nms_thresh = tracker_cfg['detection_nms_thresh']\n        self.track_nms_thresh = tracker_cfg['track_nms_thresh']\n        self.public_detections = tracker_cfg['public_detections']\n        self.inactive_patience = float(tracker_cfg['inactive_patience'])\n        self.reid_sim_threshold = tracker_cfg['reid_sim_threshold']\n        self.reid_sim_only = tracker_cfg['reid_sim_only']\n        self.generate_attention_maps = generate_attention_maps\n        self.reid_score_thresh = tracker_cfg['reid_score_thresh']\n        self.reid_greedy_matching = tracker_cfg['reid_greedy_matching']\n        self.prev_frame_dist = tracker_cfg['prev_frame_dist']\n        self.steps_termination = tracker_cfg['steps_termination']\n\n        if self.generate_attention_maps:\n            assert hasattr(self.obj_detector.transformer.decoder.layers[-1], 'multihead_attn'), 'Generation of attention maps not possible for deformable DETR.'\n\n            attention_data = {\n                'maps': None,\n                'conv_features': {},\n                'hooks': []}\n\n            hook = self.obj_detector.backbone[-2].register_forward_hook(\n                lambda self, input, output: attention_data.update({'conv_features': output}))\n            attention_data['hooks'].append(hook)\n\n            def add_attention_map_to_data(self, input, output):\n                height, width = attention_data['conv_features']['3'].tensors.shape[-2:]\n                attention_maps = output[1].view(-1, height, width)\n\n                attention_data.update({'maps': attention_maps})\n\n            multihead_attn = self.obj_detector.transformer.decoder.layers[-1].multihead_attn\n            hook = multihead_attn.register_forward_hook(\n                add_attention_map_to_data)\n            attention_data['hooks'].append(hook)\n\n            self.attention_data = attention_data\n\n        self._logger = logger\n        if self._logger is None:\n            self._logger = lambda *log_strs: None\n        self._verbose = verbose\n\n    @property\n    def num_object_queries(self):\n        return self.obj_detector.num_queries\n\n    def reset(self, hard=True):\n        self.tracks = []\n        self.inactive_tracks = []\n        self._prev_features = deque([None], maxlen=self.prev_frame_dist)\n\n        if hard:\n            self.track_num = 0\n            self.results = {}\n            self.frame_index = 0\n            self.num_reids = 0\n\n    @property\n    def device(self):\n        return next(self.obj_detector.parameters()).device\n\n    def tracks_to_inactive(self, tracks):\n        self.tracks = [t 
for t in self.tracks if t not in tracks]\n\n        for track in tracks:\n            track.pos = track.last_pos[-1]\n        self.inactive_tracks += tracks\n\n    def add_tracks(self, pos, scores, hs_embeds, indices, masks=None, attention_maps=None, aux_results=None):\n        \"\"\"Initializes new Track objects and saves them.\"\"\"\n        new_track_ids = []\n        for i in range(len(pos)):\n            self.tracks.append(Track(\n                pos[i],\n                scores[i],\n                self.track_num + i,\n                hs_embeds[i],\n                indices[i],\n                None if masks is None else masks[i],\n                None if attention_maps is None else attention_maps[i],\n            ))\n            new_track_ids.append(self.track_num + i)\n        self.track_num += len(new_track_ids)\n\n        if new_track_ids:\n            self._logger(\n                f'INIT TRACK IDS (detection_obj_score_thresh={self.detection_obj_score_thresh}): '\n                f'{new_track_ids}')\n\n            if aux_results is not None:\n                aux_scores = torch.cat([\n                    a['scores'][-self.num_object_queries:][indices]\n                    for a in aux_results] + [scores[..., None], ], dim=-1)\n\n                for new_track_id, aux_score in zip(new_track_ids, aux_scores):\n                    self._logger(f\"AUX SCORES ID {new_track_id}: {[f'{s:.2f}' for s in aux_score]}\")\n\n        return new_track_ids\n\n    def public_detections_mask(self, new_det_boxes, public_det_boxes):\n        \"\"\"Returns mask to filter current frame detections with provided set of\n           public detections.\"\"\"\n\n        if not self.public_detections:\n            return torch.ones(new_det_boxes.size(0)).bool().to(self.device)\n\n        if not len(public_det_boxes) or not len(new_det_boxes):\n            return torch.zeros(new_det_boxes.size(0)).bool().to(self.device)\n\n        public_detections_mask = torch.zeros(new_det_boxes.size(0)).bool().to(self.device)\n\n        if self.public_detections == 'center_distance':\n            item_size = [((box[2] - box[0]) * (box[3] - box[1]))\n                         for box in new_det_boxes]\n            item_size = np.array(item_size, np.float32)\n\n            new_det_boxes_cxcy = box_xyxy_to_cxcywh(new_det_boxes).cpu().numpy()[:,:2]\n            public_det_boxes_cxcy = box_xyxy_to_cxcywh(public_det_boxes).cpu().numpy()[:,:2]\n\n            dist3 = new_det_boxes_cxcy.reshape(-1, 1, 2) - public_det_boxes_cxcy.reshape(1, -1, 2)\n            dist3 = (dist3 ** 2).sum(axis=2)\n\n            for j in range(len(public_det_boxes)):\n                i = dist3[:, j].argmin()\n\n                if dist3[i, j] < item_size[i]:\n                    dist3[i, :] = 1e18\n                    public_detections_mask[i] = True\n        elif self.public_detections == 'min_iou_0_5':\n            iou_matrix = box_iou(new_det_boxes, public_det_boxes.to(self.device))\n\n            for j in range(len(public_det_boxes)):\n                i = iou_matrix[:, j].argmax()\n\n                if iou_matrix[i, j] >= 0.5:\n                    iou_matrix[i, :] = 0\n                    public_detections_mask[i] = True\n        else:\n            raise NotImplementedError\n\n        return public_detections_mask\n\n    def reid(self, new_det_boxes, new_det_scores, new_det_hs_embeds,\n             new_det_masks=None, new_det_attention_maps=None):\n        \"\"\"Tries to ReID inactive tracks with provided detections.\"\"\"\n\n        self.inactive_tracks = 
[\n            t for t in self.inactive_tracks\n            if t.has_positive_area() and t.count_inactive <= self.inactive_patience\n        ]\n\n        if not self.inactive_tracks or not len(new_det_boxes):\n            return torch.ones(new_det_boxes.size(0)).bool().to(self.device)\n\n        # calculate distances\n        dist_mat = []\n        if self.reid_greedy_matching:\n            new_det_boxes_cxcyhw = box_xyxy_to_cxcywh(new_det_boxes).cpu().numpy()\n            inactive_boxes_cxcyhw = box_xyxy_to_cxcywh(torch.stack([\n                track.pos for track in self.inactive_tracks])).cpu().numpy()\n\n            dist_mat = inactive_boxes_cxcyhw[:, :2].reshape(-1, 1, 2) - \\\n                new_det_boxes_cxcyhw[:, :2].reshape(1, -1, 2)\n            dist_mat = (dist_mat ** 2).sum(axis=2)\n\n            track_size = inactive_boxes_cxcyhw[:, 2] * inactive_boxes_cxcyhw[:, 3]\n            item_size = new_det_boxes_cxcyhw[:, 2] * new_det_boxes_cxcyhw[:, 3]\n\n            invalid = ((dist_mat > track_size.reshape(len(track_size), 1)) + \\\n                       (dist_mat > item_size.reshape(1, len(item_size))))\n            dist_mat = dist_mat + invalid * 1e18\n\n            def greedy_assignment(dist):\n                matched_indices = []\n                if dist.shape[1] == 0:\n                    return np.array(matched_indices, np.int32).reshape(-1, 2)\n                for i in range(dist.shape[0]):\n                    j = dist[i].argmin()\n                    if dist[i][j] < 1e16:\n                        dist[:, j] = 1e18\n                        dist[i, j] = 0.0\n                        matched_indices.append([i, j])\n                return np.array(matched_indices, np.int32).reshape(-1, 2)\n\n            matched_indices = greedy_assignment(dist_mat)\n            row_indices, col_indices = matched_indices[:, 0], matched_indices[:, 1]\n\n        else:\n            for track in self.inactive_tracks:\n                track_sim = track.hs_embed[-1]\n\n                track_sim_dists = torch.cat([\n                    F.pairwise_distance(track_sim, sim.unsqueeze(0))\n                    for sim in new_det_hs_embeds])\n\n                dist_mat.append(track_sim_dists)\n\n            dist_mat = torch.stack(dist_mat)\n\n            dist_mat = dist_mat.cpu().numpy()\n            row_indices, col_indices = linear_sum_assignment(dist_mat)\n\n        assigned_indices = []\n        remove_inactive = []\n        for row_ind, col_ind in zip(row_indices, col_indices):\n            if dist_mat[row_ind, col_ind] <= self.reid_sim_threshold:\n                track = self.inactive_tracks[row_ind]\n\n                self._logger(\n                    f'REID: track.id={track.id} - '\n                    f'count_inactive={track.count_inactive} - '\n                    f'to_inactive_frame={self.frame_index - track.count_inactive}')\n\n                track.count_inactive = 0\n                track.pos = new_det_boxes[col_ind]\n                track.score = new_det_scores[col_ind]\n                track.hs_embed.append(new_det_hs_embeds[col_ind])\n                track.reset_last_pos()\n\n                if new_det_masks is not None:\n                    track.mask = new_det_masks[col_ind]\n                if new_det_attention_maps is not None:\n                    track.attention_map = new_det_attention_maps[col_ind]\n\n                assigned_indices.append(col_ind)\n                remove_inactive.append(track)\n\n                self.tracks.append(track)\n\n                self.num_reids += 1\n\n   
     for track in remove_inactive:\n            self.inactive_tracks.remove(track)\n\n        reid_mask = torch.ones(new_det_boxes.size(0)).bool().to(self.device)\n\n        for ind in assigned_indices:\n            reid_mask[ind] = False\n\n        return reid_mask\n\n    def step(self, blob):\n        \"\"\"This function should be called every timestep to perform tracking with a blob\n        containing the image information.\n        \"\"\"\n        self.inactive_tracks = [\n            t for t in self.inactive_tracks\n            if t.has_positive_area() and t.count_inactive <= self.inactive_patience\n        ]\n\n        self._logger(f'FRAME: {self.frame_index + 1}')\n        if self.inactive_tracks:\n            self._logger(f'INACTIVE TRACK IDS: {[t.id for t in self.inactive_tracks]}')\n\n        # add current position to last_pos list\n        for track in self.tracks:\n            track.last_pos.append(track.pos.clone())\n\n        img = blob['img'].to(self.device)\n        orig_size = blob['orig_size'].to(self.device)\n\n        target = None\n        num_prev_track = len(self.tracks + self.inactive_tracks)\n        if num_prev_track:\n            track_query_boxes = torch.stack([\n                t.pos for t in self.tracks + self.inactive_tracks], dim=0).cpu()\n\n            track_query_boxes = box_xyxy_to_cxcywh(track_query_boxes)\n            track_query_boxes = track_query_boxes / torch.tensor([\n                orig_size[0, 1], orig_size[0, 0],\n                orig_size[0, 1], orig_size[0, 0]], dtype=torch.float32)\n\n            target = {'track_query_boxes': track_query_boxes}\n\n            target['image_id'] = torch.tensor([1]).to(self.device)\n            target['track_query_hs_embeds'] = torch.stack([\n                t.hs_embed[-1] for t in self.tracks + self.inactive_tracks], dim=0)\n\n            target = {k: v.to(self.device) for k, v in target.items()}\n            target = [target]\n\n        outputs, _, features, _, _ = self.obj_detector(img, target, self._prev_features[0])\n\n        hs_embeds = outputs['hs_embed'][0]\n\n        results = self.obj_detector_post['bbox'](outputs, orig_size)\n        if \"segm\" in self.obj_detector_post:\n            results = self.obj_detector_post['segm'](\n                results,\n                outputs,\n                orig_size,\n                blob[\"size\"].to(self.device),\n                return_probs=True)\n        result = results[0]\n\n        if 'masks' in result:\n            result['masks'] = result['masks'].squeeze(dim=1)\n\n        if self.obj_detector.overflow_boxes:\n            boxes = result['boxes']\n        else:\n            boxes = clip_boxes_to_image(result['boxes'], orig_size[0])\n\n        # TRACKS\n        if num_prev_track:\n            track_scores = result['scores'][:-self.num_object_queries]\n            track_boxes = boxes[:-self.num_object_queries]\n\n            if 'masks' in result:\n                track_masks = result['masks'][:-self.num_object_queries]\n            if self.generate_attention_maps:\n                track_attention_maps = self.attention_data['maps'][:-self.num_object_queries]\n\n            track_keep = torch.logical_and(\n                track_scores > self.track_obj_score_thresh,\n                result['labels'][:-self.num_object_queries] == 0)\n\n            tracks_to_inactive = []\n            tracks_from_inactive = []\n\n            for i, track in enumerate(self.tracks):\n                if track_keep[i]:\n                    track.score = track_scores[i]\n        
            track.hs_embed.append(hs_embeds[i])\n                    track.pos = track_boxes[i]\n                    track.count_termination = 0\n\n                    if 'masks' in result:\n                        track.mask = track_masks[i]\n                    if self.generate_attention_maps:\n                        track.attention_map = track_attention_maps[i]\n                else:\n                    track.count_termination += 1\n                    if track.count_termination >= self.steps_termination:\n                        tracks_to_inactive.append(track)\n\n            track_keep = torch.logical_and(\n                track_scores > self.reid_score_thresh,\n                result['labels'][:-self.num_object_queries] == 0)\n\n            # reid queries\n            for i, track in enumerate(self.inactive_tracks, start=len(self.tracks)):\n                if track_keep[i]:\n                    track.score = track_scores[i]\n                    track.hs_embed.append(hs_embeds[i])\n                    track.pos = track_boxes[i]\n\n                    if 'masks' in result:\n                        track.mask = track_masks[i]\n                    if self.generate_attention_maps:\n                        track.attention_map = track_attention_maps[i]\n\n                    tracks_from_inactive.append(track)\n\n            if tracks_to_inactive:\n                self._logger(\n                    f'NEW INACTIVE TRACK IDS '\n                    f'(track_obj_score_thresh={self.track_obj_score_thresh}): '\n                    f'{[t.id for t in tracks_to_inactive]}')\n\n            self.num_reids += len(tracks_from_inactive)\n            for track in tracks_from_inactive:\n                self.inactive_tracks.remove(track)\n                self.tracks.append(track)\n\n            self.tracks_to_inactive(tracks_to_inactive)\n            # self.tracks = [\n            #         track for track in self.tracks\n            #         if track not in tracks_to_inactive]\n\n            if self.track_nms_thresh and self.tracks:\n                track_boxes = torch.stack([t.pos for t in self.tracks])\n                track_scores = torch.stack([t.score for t in self.tracks])\n\n                keep = nms(track_boxes, track_scores, self.track_nms_thresh)\n                remove_tracks = [\n                    track for i, track in enumerate(self.tracks)\n                    if i not in keep]\n\n                if remove_tracks:\n                    self._logger(\n                        f'REMOVE TRACK IDS (track_nms_thresh={self.track_nms_thresh}): '\n                        f'{[track.id for track in remove_tracks]}')\n\n                # self.tracks_to_inactive(remove_tracks)\n                self.tracks = [\n                    track for track in self.tracks\n                    if track not in remove_tracks]\n\n        # NEW DETS\n        new_det_scores = result['scores'][-self.num_object_queries:]\n        new_det_boxes = boxes[-self.num_object_queries:]\n        new_det_hs_embeds = hs_embeds[-self.num_object_queries:]\n\n        if 'masks' in result:\n            new_det_masks = result['masks'][-self.num_object_queries:]\n        if self.generate_attention_maps:\n            new_det_attention_maps = self.attention_data['maps'][-self.num_object_queries:]\n\n        new_det_keep = torch.logical_and(\n            new_det_scores > self.detection_obj_score_thresh,\n            result['labels'][-self.num_object_queries:] == 0)\n\n        new_det_boxes = new_det_boxes[new_det_keep]\n        
new_det_scores = new_det_scores[new_det_keep]\n        new_det_hs_embeds = new_det_hs_embeds[new_det_keep]\n        new_det_indices = new_det_keep.float().nonzero()\n\n        if 'masks' in result:\n            new_det_masks = new_det_masks[new_det_keep]\n        if self.generate_attention_maps:\n            new_det_attention_maps = new_det_attention_maps[new_det_keep]\n\n        # public detection\n        public_detections_mask = self.public_detections_mask(\n            new_det_boxes, blob['dets'][0])\n\n        new_det_boxes = new_det_boxes[public_detections_mask]\n        new_det_scores = new_det_scores[public_detections_mask]\n        new_det_hs_embeds = new_det_hs_embeds[public_detections_mask]\n        new_det_indices = new_det_indices[public_detections_mask]\n        if 'masks' in result:\n            new_det_masks = new_det_masks[public_detections_mask]\n        if self.generate_attention_maps:\n            new_det_attention_maps = new_det_attention_maps[public_detections_mask]\n\n        # reid\n        reid_mask = self.reid(\n            new_det_boxes,\n            new_det_scores,\n            new_det_hs_embeds,\n            new_det_masks if 'masks' in result else None,\n            new_det_attention_maps if self.generate_attention_maps else None)\n\n        new_det_boxes = new_det_boxes[reid_mask]\n        new_det_scores = new_det_scores[reid_mask]\n        new_det_hs_embeds = new_det_hs_embeds[reid_mask]\n        new_det_indices = new_det_indices[reid_mask]\n        if 'masks' in result:\n            new_det_masks = new_det_masks[reid_mask]\n        if self.generate_attention_maps:\n            new_det_attention_maps = new_det_attention_maps[reid_mask]\n\n        # final add track\n        aux_results = None\n        if self._verbose:\n            aux_results = [\n                self.obj_detector_post['bbox'](out, orig_size)[0]\n                for out in outputs['aux_outputs']]\n\n        new_track_ids = self.add_tracks(\n            new_det_boxes,\n            new_det_scores,\n            new_det_hs_embeds,\n            new_det_indices,\n            new_det_masks if 'masks' in result else None,\n            new_det_attention_maps if self.generate_attention_maps else None,\n            aux_results)\n\n        # NMS\n        if self.detection_nms_thresh and self.tracks:\n            track_boxes = torch.stack([t.pos for t in self.tracks])\n            track_scores = torch.stack([t.score for t in self.tracks])\n\n            new_track_mask = torch.tensor([\n                True if t.id in new_track_ids\n                else False\n                for t in self.tracks])\n            track_scores[~new_track_mask] = np.inf\n\n            keep = nms(track_boxes, track_scores, self.detection_nms_thresh)\n            remove_tracks = [track for i, track in enumerate(self.tracks) if i not in keep]\n\n            if remove_tracks:\n                self._logger(\n                    f'REMOVE TRACK IDS (detection_nms_thresh={self.detection_nms_thresh}): '\n                    f'{[track.id for track in remove_tracks]}')\n\n            self.tracks = [track for track in self.tracks if track not in remove_tracks]\n\n        ####################\n        # Generate Results #\n        ####################\n\n        if 'masks' in result and self.tracks:\n            track_mask_probs = torch.stack([track.mask for track in self.tracks])\n            index_map = torch.arange(track_mask_probs.size(0))[:, None, None]\n            index_map = index_map.expand_as(track_mask_probs)\n\n            
track_masks = torch.logical_and(\n                # remove background\n                track_mask_probs > 0.5,\n                # remove overlap by largest probability\n                index_map == track_mask_probs.argmax(dim=0)\n            )\n            for i, track in enumerate(self.tracks):\n                track.mask = track_masks[i]\n\n        for track in self.tracks:\n            if track.id not in self.results:\n                self.results[track.id] = {}\n\n            self.results[track.id][self.frame_index] = {}\n\n            if self.obj_detector.overflow_boxes:\n                self.results[track.id][self.frame_index]['bbox'] = track.pos.cpu().numpy()\n            else:\n                self.results[track.id][self.frame_index]['bbox'] = clip_boxes_to_image(track.pos, orig_size[0]).cpu().numpy()\n\n            self.results[track.id][self.frame_index]['score'] = track.score.cpu().numpy()\n            self.results[track.id][self.frame_index]['obj_ind'] = track.obj_ind.cpu().item()\n\n            if track.mask is not None:\n                self.results[track.id][self.frame_index]['mask'] = track.mask.cpu().numpy()\n            if track.attention_map is not None:\n                self.results[track.id][self.frame_index]['attention_map'] = \\\n                    track.attention_map.cpu().numpy()\n\n        for t in self.inactive_tracks:\n            t.count_inactive += 1\n\n        self.frame_index += 1\n        self._prev_features.append(features)\n\n        if self.reid_sim_only:\n            self.tracks_to_inactive(self.tracks)\n\n    def get_results(self):\n        \"\"\"Return current tracking results.\"\"\"\n        return self.results\n\n\nclass Track(object):\n    \"\"\"This class contains all necessary information for every individual track.\"\"\"\n\n    def __init__(self, pos, score, track_id, hs_embed, obj_ind,\n                 mask=None, attention_map=None):\n        self.id = track_id\n        self.pos = pos\n        self.last_pos = deque([pos.clone()])\n        self.score = score\n        self.ims = deque([])\n        self.count_inactive = 0\n        self.count_termination = 0\n        self.gt_id = None\n        self.hs_embed = [hs_embed]\n        self.mask = mask\n        self.attention_map = attention_map\n        self.obj_ind = obj_ind\n\n    def has_positive_area(self) -> bool:\n        \"\"\"Checks if the current position of the track has\n           a valid, i.e., positive area, bounding box.\"\"\"\n        return self.pos[2] > self.pos[0] and self.pos[3] > self.pos[1]\n\n    def reset_last_pos(self) -> None:\n        \"\"\"Reset last_pos to the current position of the track.\"\"\"\n        self.last_pos.clear()\n        self.last_pos.append(self.pos.clone())\n"
  },
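The `step` method above accumulates per-frame outputs into `self.results`, keyed as `results[track_id][frame_index]` with a `bbox` in (x1, y1, x2, y2), a `score`, an `obj_ind`, and optional `mask`/`attention_map` entries. A minimal sketch of consuming `get_results()` to write MOT-style text lines follows; the file name, the +1 offsets for 1-based MOT indexing, and the trailing placeholder columns are illustrative assumptions, not part of the repo.

# Minimal sketch (assumed helper, not from the repo): dump Tracker.get_results()
# into MOT-style text lines. results maps track_id -> frame_index -> entry,
# where each entry holds an [x1, y1, x2, y2] 'bbox' and a 'score'.
def write_mot_file(results, out_path='track_results.txt'):
    with open(out_path, 'w') as out_file:
        for track_id, frames in results.items():
            for frame_index, entry in frames.items():
                x1, y1, x2, y2 = entry['bbox']
                out_file.write(
                    f"{frame_index + 1},{track_id + 1},"
                    f"{x1:.2f},{y1:.2f},{x2 - x1:.2f},{y2 - y1:.2f},"
                    f"{float(entry['score']):.2f},-1,-1,-1\n")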
  {
    "path": "src/trackformer/models/transformer.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nDETR Transformer class.\n\nCopy-paste from torch.nn.Transformer with modifications:\n    * positional encodings are passed in MHattention\n    * extra LN at the end of encoder is removed\n    * decoder returns a stack of activations from all decoding layers\n\"\"\"\nimport copy\nfrom typing import Optional\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import Tensor, nn\n\n\nclass Transformer(nn.Module):\n\n    def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,\n                 num_decoder_layers=6, dim_feedforward=2048, dropout=0.1,\n                 activation=\"relu\", normalize_before=False,\n                 return_intermediate_dec=False,\n                 track_attention=False):\n        super().__init__()\n\n        encoder_layer = TransformerEncoderLayer(d_model, nhead, dim_feedforward,\n                                                dropout, activation, normalize_before)\n        encoder_norm = nn.LayerNorm(d_model) if normalize_before else None\n        self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)\n\n        decoder_layer = TransformerDecoderLayer(d_model, nhead, dim_feedforward,\n                                                dropout, activation, normalize_before)\n        decoder_norm = nn.LayerNorm(d_model)\n        self.decoder = TransformerDecoder(\n            decoder_layer, encoder_layer, num_decoder_layers, decoder_norm,\n            return_intermediate=return_intermediate_dec,\n            track_attention=track_attention)\n\n        self._reset_parameters()\n\n        self.d_model = d_model\n        self.nhead = nhead\n\n    def _reset_parameters(self):\n        for p in self.parameters():\n            if p.dim() > 1:\n                nn.init.xavier_uniform_(p)\n\n    def forward(self, src, mask, query_embed, pos_embed, tgt=None, prev_frame=None):\n        # flatten NxCxHxW to HWxNxC\n        bs, c, h, w = src.shape\n        src = src.flatten(2).permute(2, 0, 1)\n        pos_embed = pos_embed.flatten(2).permute(2, 0, 1)\n        mask = mask.flatten(1)\n\n        if tgt is None:\n            tgt = torch.zeros_like(query_embed)\n        memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed)\n\n        memory_prev_frame = None\n        if prev_frame is not None:\n            src_prev_frame = prev_frame['src'].flatten(2).permute(2, 0, 1)\n            pos_embed_prev_frame = prev_frame['pos'].flatten(2).permute(2, 0, 1)\n            mask_prev_frame = prev_frame['mask'].flatten(1)\n\n            memory_prev_frame = self.encoder(\n                src_prev_frame, src_key_padding_mask=mask_prev_frame, pos=pos_embed_prev_frame)\n\n            prev_frame['memory'] = memory_prev_frame\n            prev_frame['memory_key_padding_mask'] = mask_prev_frame\n            prev_frame['pos'] = pos_embed_prev_frame\n\n        hs, hs_without_norm = self.decoder(tgt, memory, memory_key_padding_mask=mask,\n                                           pos=pos_embed, query_pos=query_embed,\n                                           prev_frame=prev_frame)\n\n        return (hs.transpose(1, 2),\n            hs_without_norm.transpose(1, 2),\n            memory.permute(1, 2, 0).view(bs, c, h, w))\n\n\nclass TransformerEncoder(nn.Module):\n\n    def __init__(self, encoder_layer, num_layers, norm=None):\n        super().__init__()\n        self.layers = _get_clones(encoder_layer, num_layers)\n        self.num_layers = num_layers\n  
      self.norm = norm\n\n    def forward(self, src,\n                mask: Optional[Tensor] = None,\n                src_key_padding_mask: Optional[Tensor] = None,\n                pos: Optional[Tensor] = None):\n        output = src\n\n        for layer in self.layers:\n            output = layer(output, src_mask=mask,\n                           src_key_padding_mask=src_key_padding_mask, pos=pos)\n\n        if self.norm is not None:\n            output = self.norm(output)\n\n        return output\n\n\nclass TransformerDecoder(nn.Module):\n\n    def __init__(self, decoder_layer, encoder_layer, num_layers,\n                 norm=None, return_intermediate=False, track_attention=False):\n        super().__init__()\n        self.layers = _get_clones(decoder_layer, num_layers)\n\n        self.num_layers = num_layers\n        self.norm = norm\n        self.return_intermediate = return_intermediate\n\n        self.track_attention = track_attention\n        if self.track_attention:\n            self.layers_track_attention = _get_clones(encoder_layer, num_layers)\n\n    def forward(self, tgt, memory,\n                tgt_mask: Optional[Tensor] = None,\n                memory_mask: Optional[Tensor] = None,\n                tgt_key_padding_mask: Optional[Tensor] = None,\n                memory_key_padding_mask: Optional[Tensor] = None,\n                pos: Optional[Tensor] = None,\n                query_pos: Optional[Tensor] = None,\n                prev_frame: Optional[dict] = None):\n        output = tgt\n\n        intermediate = []\n\n        if self.track_attention:\n            track_query_pos = query_pos[:-100].clone()\n            query_pos[:-100] = 0.0\n\n        for i, layer in enumerate(self.layers):\n            if self.track_attention:\n                track_output = output[:-100].clone()\n\n                track_output = self.layers_track_attention[i](\n                    track_output,\n                    src_mask=tgt_mask,\n                    src_key_padding_mask=tgt_key_padding_mask,\n                    pos=track_query_pos)\n\n                output = torch.cat([track_output, output[-100:]])\n\n            output = layer(output, memory, tgt_mask=tgt_mask,\n                           memory_mask=memory_mask,\n                           tgt_key_padding_mask=tgt_key_padding_mask,\n                           memory_key_padding_mask=memory_key_padding_mask,\n                           pos=pos, query_pos=query_pos)\n            if self.return_intermediate:\n                intermediate.append(output)\n\n        if self.return_intermediate:\n            output = torch.stack(intermediate)\n\n        if self.norm is not None:\n            return self.norm(output), output\n        return output, output\n\n\nclass TransformerEncoderLayer(nn.Module):\n\n    def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,\n                 activation=\"relu\", normalize_before=False):\n        super().__init__()\n        self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)\n        # Implementation of Feedforward model\n        self.linear1 = nn.Linear(d_model, dim_feedforward)\n        self.dropout = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(dim_feedforward, d_model)\n\n        self.norm1 = nn.LayerNorm(d_model)\n        self.norm2 = nn.LayerNorm(d_model)\n        self.dropout1 = nn.Dropout(dropout)\n        self.dropout2 = nn.Dropout(dropout)\n\n        self.activation = _get_activation_fn(activation)\n        self.normalize_before = normalize_before\n\n 
   def with_pos_embed(self, tensor, pos: Optional[Tensor]):\n        return tensor if pos is None else tensor + pos\n\n    def forward_post(self,\n                     src,\n                     src_mask: Optional[Tensor] = None,\n                     src_key_padding_mask: Optional[Tensor] = None,\n                     pos: Optional[Tensor] = None):\n        q = k = self.with_pos_embed(src, pos)\n        src2 = self.self_attn(q, k, value=src, attn_mask=src_mask,\n                              key_padding_mask=src_key_padding_mask)[0]\n        src = src + self.dropout1(src2)\n        src = self.norm1(src)\n        src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))\n        src = src + self.dropout2(src2)\n        src = self.norm2(src)\n        return src\n\n    def forward_pre(self, src,\n                    src_mask: Optional[Tensor] = None,\n                    src_key_padding_mask: Optional[Tensor] = None,\n                    pos: Optional[Tensor] = None):\n        src2 = self.norm1(src)\n        q = k = self.with_pos_embed(src2, pos)\n        src2 = self.self_attn(q, k, value=src2, attn_mask=src_mask,\n                              key_padding_mask=src_key_padding_mask)[0]\n        src = src + self.dropout1(src2)\n        src2 = self.norm2(src)\n        src2 = self.linear2(self.dropout(self.activation(self.linear1(src2))))\n        src = src + self.dropout2(src2)\n        return src\n\n    def forward(self, src,\n                src_mask: Optional[Tensor] = None,\n                src_key_padding_mask: Optional[Tensor] = None,\n                pos: Optional[Tensor] = None):\n        if self.normalize_before:\n            return self.forward_pre(src, src_mask, src_key_padding_mask, pos)\n        return self.forward_post(src, src_mask, src_key_padding_mask, pos)\n\n\nclass TransformerDecoderLayer(nn.Module):\n\n    def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,\n                 activation=\"relu\", normalize_before=False):\n        super().__init__()\n        self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)\n        self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)\n        # Implementation of Feedforward model\n        self.linear1 = nn.Linear(d_model, dim_feedforward)\n        self.dropout = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(dim_feedforward, d_model)\n\n        self.norm1 = nn.LayerNorm(d_model)\n        self.norm2 = nn.LayerNorm(d_model)\n        self.norm3 = nn.LayerNorm(d_model)\n        self.dropout1 = nn.Dropout(dropout)\n        self.dropout2 = nn.Dropout(dropout)\n        self.dropout3 = nn.Dropout(dropout)\n\n        self.activation = _get_activation_fn(activation)\n        self.normalize_before = normalize_before\n\n    def with_pos_embed(self, tensor, pos: Optional[Tensor]):\n        return tensor if pos is None else tensor + pos\n\n    def forward_post(self, tgt, memory,\n                     tgt_mask: Optional[Tensor] = None,\n                     memory_mask: Optional[Tensor] = None,\n                     tgt_key_padding_mask: Optional[Tensor] = None,\n                     memory_key_padding_mask: Optional[Tensor] = None,\n                     pos: Optional[Tensor] = None,\n                     query_pos: Optional[Tensor] = None):\n        q = k = self.with_pos_embed(tgt, query_pos)\n        tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,\n                              key_padding_mask=tgt_key_padding_mask)[0]\n        tgt = tgt + self.dropout1(tgt2)\n 
       tgt = self.norm1(tgt)\n        tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),\n                                   key=self.with_pos_embed(memory, pos),\n                                   value=memory, attn_mask=memory_mask,\n                                   key_padding_mask=memory_key_padding_mask)[0]\n        tgt = tgt + self.dropout2(tgt2)\n        tgt = self.norm2(tgt)\n        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))\n        tgt = tgt + self.dropout3(tgt2)\n        tgt = self.norm3(tgt)\n        return tgt\n\n    def forward_pre(self, tgt, memory,\n                    tgt_mask: Optional[Tensor] = None,\n                    memory_mask: Optional[Tensor] = None,\n                    tgt_key_padding_mask: Optional[Tensor] = None,\n                    memory_key_padding_mask: Optional[Tensor] = None,\n                    pos: Optional[Tensor] = None,\n                    query_pos: Optional[Tensor] = None):\n        tgt2 = self.norm1(tgt)\n        q = k = self.with_pos_embed(tgt2, query_pos)\n        tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,\n                              key_padding_mask=tgt_key_padding_mask)[0]\n        tgt = tgt + self.dropout1(tgt2)\n        tgt2 = self.norm2(tgt)\n        tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),\n                                   key=self.with_pos_embed(memory, pos),\n                                   value=memory, attn_mask=memory_mask,\n                                   key_padding_mask=memory_key_padding_mask)[0]\n        tgt = tgt + self.dropout2(tgt2)\n        tgt2 = self.norm3(tgt)\n        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))\n        tgt = tgt + self.dropout3(tgt2)\n        return tgt\n\n    def forward(self, tgt, memory,\n                tgt_mask: Optional[Tensor] = None,\n                memory_mask: Optional[Tensor] = None,\n                tgt_key_padding_mask: Optional[Tensor] = None,\n                memory_key_padding_mask: Optional[Tensor] = None,\n                pos: Optional[Tensor] = None,\n                query_pos: Optional[Tensor] = None):\n        if self.normalize_before:\n            return self.forward_pre(tgt, memory, tgt_mask, memory_mask,\n                                    tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos)\n        return self.forward_post(tgt, memory, tgt_mask, memory_mask,\n                                 tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos)\n\n\ndef _get_clones(module, N):\n    return nn.ModuleList([copy.deepcopy(module) for i in range(N)])\n\n\ndef _get_activation_fn(activation):\n    \"\"\"Return an activation function given a string\"\"\"\n    if activation == \"relu\":\n        return F.relu\n    if activation == \"gelu\":\n        return F.gelu\n    if activation == \"glu\":\n        return F.glu\n    raise RuntimeError(F\"activation should be relu/gelu, not {activation}.\")\n\n\ndef build_transformer(args):\n    return Transformer(\n        d_model=args.hidden_dim,\n        dropout=args.dropout,\n        nhead=args.nheads,\n        dim_feedforward=args.dim_feedforward,\n        num_encoder_layers=args.enc_layers,\n        num_decoder_layers=args.dec_layers,\n        normalize_before=args.pre_norm,\n        return_intermediate_dec=True,\n        track_attention=args.track_attention\n    )\n"
  },
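The forward pass above follows DETR's flattened-feature convention: `src`, `mask` and `pos_embed` come from the backbone, while `query_embed` must already be expanded to (num_queries, batch, d_model) because `tgt` is initialised as zeros of the same shape. A rough smoke test under these assumptions (every size below is illustrative, not a value taken from the repo's configs):

# Rough smoke test of the Transformer above; all sizes are assumptions.
import torch

model = Transformer(d_model=256, nhead=8, num_encoder_layers=2,
                    num_decoder_layers=2, return_intermediate_dec=True)

bs, c, h, w, num_queries = 2, 256, 16, 24, 100
src = torch.rand(bs, c, h, w)                   # backbone feature map
mask = torch.zeros(bs, h, w, dtype=torch.bool)  # False = valid pixel (no padding)
pos_embed = torch.rand(bs, c, h, w)             # positional encoding, same shape as src
query_embed = torch.rand(num_queries, bs, c)    # already expanded over the batch

hs, hs_without_norm, memory = model(src, mask, query_embed, pos_embed)
print(hs.shape)      # (num_decoder_layers, bs, num_queries, c)
print(memory.shape)  # (bs, c, h, w)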
  {
    "path": "src/trackformer/util/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n"
  },
  {
    "path": "src/trackformer/util/box_ops.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nUtilities for bounding box manipulation and GIoU.\n\"\"\"\nimport torch\nfrom torchvision.ops.boxes import box_area\n\n\ndef box_cxcywh_to_xyxy(x):\n    x_c, y_c, w, h = x.unbind(-1)\n    b = [(x_c - 0.5 * w), (y_c - 0.5 * h),\n         (x_c + 0.5 * w), (y_c + 0.5 * h)]\n    return torch.stack(b, dim=-1)\n\n\ndef box_xyxy_to_cxcywh(x):\n    x0, y0, x1, y1 = x.unbind(-1)\n    b = [(x0 + x1) / 2, (y0 + y1) / 2,\n         (x1 - x0), (y1 - y0)]\n    return torch.stack(b, dim=-1)\n\n\n# modified from torchvision to also return the union\ndef box_iou(boxes1, boxes2):\n    area1 = box_area(boxes1)\n    area2 = box_area(boxes2)\n\n    lt = torch.max(boxes1[:, None, :2], boxes2[:, :2])  # [N,M,2]\n    rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])  # [N,M,2]\n\n    wh = (rb - lt).clamp(min=0)  # [N,M,2]\n    inter = wh[:, :, 0] * wh[:, :, 1]  # [N,M]\n\n    union = area1[:, None] + area2 - inter\n\n    iou = inter / union\n    return iou, union\n\n\ndef generalized_box_iou(boxes1, boxes2):\n    \"\"\"\n    Generalized IoU from https://giou.stanford.edu/\n\n    The boxes should be in [x0, y0, x1, y1] format\n\n    Returns a [N, M] pairwise matrix, where N = len(boxes1)\n    and M = len(boxes2)\n    \"\"\"\n    # degenerate boxes gives inf / nan results\n    # so do an early check\n    assert (boxes1[:, 2:] >= boxes1[:, :2]).all()\n    assert (boxes2[:, 2:] >= boxes2[:, :2]).all()\n    iou, union = box_iou(boxes1, boxes2)\n\n    lt = torch.min(boxes1[:, None, :2], boxes2[:, :2])\n    rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])\n\n    wh = (rb - lt).clamp(min=0)  # [N,M,2]\n    area = wh[:, :, 0] * wh[:, :, 1]\n\n    return iou - (area - union) / area\n\n\ndef masks_to_boxes(masks):\n    \"\"\"Compute the bounding boxes around the provided masks\n\n    The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions.\n\n    Returns a [N, 4] tensors, with the boxes in xyxy format\n    \"\"\"\n    if masks.numel() == 0:\n        return torch.zeros((0, 4), device=masks.device)\n\n    h, w = masks.shape[-2:]\n\n    y = torch.arange(0, h, dtype=torch.float)\n    x = torch.arange(0, w, dtype=torch.float)\n    y, x = torch.meshgrid(y, x)\n\n    x_mask = (masks * x.unsqueeze(0))\n    x_max = x_mask.flatten(1).max(-1)[0]\n    x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]\n\n    y_mask = (masks * y.unsqueeze(0))\n    y_max = y_mask.flatten(1).max(-1)[0]\n    y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]\n\n    return torch.stack([x_min, y_min, x_max, y_max], 1)\n"
  },
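A short usage sketch for the utilities above: predictions in (cx, cy, w, h) are converted to (x1, y1, x2, y2) corners before `generalized_box_iou`, which expects corner format and returns an [N, M] pairwise matrix. The box values are arbitrary.

# Usage sketch (arbitrary values): convert center-size boxes to corners,
# then compute the pairwise generalized IoU against a target box.
import torch

preds_cxcywh = torch.tensor([[0.5, 0.5, 0.4, 0.4],
                             [0.3, 0.3, 0.2, 0.2]])
targets_xyxy = torch.tensor([[0.3, 0.3, 0.7, 0.7]])

preds_xyxy = box_cxcywh_to_xyxy(preds_cxcywh)
giou = generalized_box_iou(preds_xyxy, targets_xyxy)  # shape [2, 1], values in (-1, 1]
print(giou)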
  {
    "path": "src/trackformer/util/misc.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMisc functions, including distributed helpers.\n\nMostly copy-paste from torchvision references.\n\"\"\"\nimport datetime\nimport os\nimport pickle\nimport subprocess\nimport time\nfrom argparse import Namespace\nfrom collections import defaultdict, deque\nfrom typing import List, Optional\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn.functional as F\n# needed due to empty tensor bug in pytorch and torchvision 0.5\nimport torchvision\nfrom torch import Tensor\nfrom visdom import Visdom\n\nif int(torchvision.__version__.split('.')[0]) <= 0 and int(torchvision.__version__.split('.')[1]) < 7:\n    from torchvision.ops import _new_empty_tensor\n    from torchvision.ops.misc import _output_size\n\n\nclass SmoothedValue(object):\n    \"\"\"Track a series of values and provide access to smoothed values over a\n    window or the global series average.\n    \"\"\"\n\n    def __init__(self, window_size=20, fmt=None):\n        if fmt is None:\n            fmt = \"{median:.4f} ({global_avg:.4f})\"\n        self.deque = deque(maxlen=window_size)\n        self.total = 0.0\n        self.count = 0\n        self.fmt = fmt\n\n    def update(self, value, n=1):\n        self.deque.append(value)\n        self.count += n\n        self.total += value * n\n\n    def synchronize_between_processes(self):\n        \"\"\"\n        Warning: does not synchronize the deque!\n        \"\"\"\n        if not is_dist_avail_and_initialized():\n            return\n        t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')\n        dist.barrier()\n        dist.all_reduce(t)\n        t = t.tolist()\n        self.count = int(t[0])\n        self.total = t[1]\n\n    @property\n    def median(self):\n        d = torch.tensor(list(self.deque))\n        return d.median().item()\n\n    @property\n    def avg(self):\n        d = torch.tensor(list(self.deque), dtype=torch.float32)\n        return d.mean().item()\n\n    @property\n    def global_avg(self):\n        return self.total / self.count\n\n    @property\n    def max(self):\n        return max(self.deque)\n\n    @property\n    def value(self):\n        return self.deque[-1]\n\n    def __str__(self):\n        return self.fmt.format(\n            median=self.median,\n            avg=self.avg,\n            global_avg=self.global_avg,\n            max=self.max,\n            value=self.value)\n\n\ndef all_gather(data):\n\n    \"\"\"\n    Run all_gather on arbitrary picklable data (not necessarily tensors)\n    Args:\n        data: any picklable object\n    Returns:\n        list[data]: list of data gathered from each rank\n    \"\"\"\n    world_size = get_world_size()\n    if world_size == 1:\n        return [data]\n\n    # serialized to a Tensor\n    buffer = pickle.dumps(data)\n    storage = torch.ByteStorage.from_buffer(buffer)\n    tensor = torch.ByteTensor(storage).to(\"cuda\")\n\n    # obtain Tensor size of each rank\n    local_size = torch.tensor([tensor.numel()], device=\"cuda\")\n    size_list = [torch.tensor([0], device=\"cuda\") for _ in range(world_size)]\n    dist.all_gather(size_list, local_size)\n    size_list = [int(size.item()) for size in size_list]\n    max_size = max(size_list)\n\n    # receiving Tensor from all ranks\n    # we pad the tensor because torch all_gather does not support\n    # gathering tensors of different shapes\n    tensor_list = []\n    for _ in size_list:\n        
tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=\"cuda\"))\n    if local_size != max_size:\n        padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=\"cuda\")\n        tensor = torch.cat((tensor, padding), dim=0)\n    dist.all_gather(tensor_list, tensor)\n\n    data_list = []\n    for size, tensor in zip(size_list, tensor_list):\n        buffer = tensor.cpu().numpy().tobytes()[:size]\n        data_list.append(pickle.loads(buffer))\n\n    return data_list\n\n\ndef reduce_dict(input_dict, average=True):\n    \"\"\"\n    Args:\n        input_dict (dict): all the values will be reduced\n        average (bool): whether to do average or sum\n    Reduce the values in the dictionary from all processes so that all processes\n    have the averaged results. Returns a dict with the same fields as\n    input_dict, after reduction.\n    \"\"\"\n    world_size = get_world_size()\n    if world_size < 2:\n        return input_dict\n    with torch.no_grad():\n        names = []\n        values = []\n        # sort the keys so that they are consistent across processes\n        for k in sorted(input_dict.keys()):\n            names.append(k)\n            values.append(input_dict[k])\n        values = torch.stack(values, dim=0)\n        dist.all_reduce(values)\n        if average:\n            values /= world_size\n        reduced_dict = {k: v for k, v in zip(names, values)}\n    return reduced_dict\n\n\nclass MetricLogger(object):\n    def __init__(self, print_freq, delimiter=\"\\t\", vis=None, debug=False):\n        self.meters = defaultdict(SmoothedValue)\n        self.delimiter = delimiter\n        self.vis = vis\n        self.print_freq = print_freq\n        self.debug = debug\n\n    def update(self, **kwargs):\n        for k, v in kwargs.items():\n            if isinstance(v, torch.Tensor):\n                v = v.item()\n            assert isinstance(v, (float, int))\n            self.meters[k].update(v)\n\n    def __getattr__(self, attr):\n        if attr in self.meters:\n            return self.meters[attr]\n        if attr in self.__dict__:\n            return self.__dict__[attr]\n        raise AttributeError(\"'{}' object has no attribute '{}'\".format(\n            type(self).__name__, attr))\n\n    def __str__(self):\n        loss_str = []\n        for name, meter in self.meters.items():\n            loss_str.append(f\"{name}: {meter}\")\n        return self.delimiter.join(loss_str)\n\n    def synchronize_between_processes(self):\n        for meter in self.meters.values():\n            meter.synchronize_between_processes()\n\n    def add_meter(self, name, meter):\n        self.meters[name] = meter\n\n    def log_every(self, iterable, epoch=None, header=None):\n        i = 0\n        if header is None:\n            header = 'Epoch: [{}]'.format(epoch)\n\n        world_len_iterable = get_world_size() * len(iterable)\n\n        start_time = time.time()\n        end = time.time()\n        iter_time = SmoothedValue(fmt='{avg:.4f}')\n        data_time = SmoothedValue(fmt='{avg:.4f}')\n        space_fmt = ':' + str(len(str(world_len_iterable))) + 'd'\n        if torch.cuda.is_available():\n            log_msg = self.delimiter.join([\n                header,\n                '[{0' + space_fmt + '}/{1}]',\n                'eta: {eta}',\n                '{meters}',\n                'time: {time}',\n                'data: {data}',\n                'max mem: {memory:.0f}'\n            ])\n        else:\n            log_msg = self.delimiter.join([\n            
    header,\n                '[{0' + space_fmt + '}/{1}]',\n                'eta: {eta}',\n                '{meters}',\n                'time: {time}',\n                'data_time: {data}'\n            ])\n        MB = 1024.0 * 1024.0\n        for obj in iterable:\n            data_time.update(time.time() - end)\n            yield obj\n            iter_time.update(time.time() - end)\n            if i % self.print_freq == 0 or i == len(iterable) - 1:\n                eta_seconds = iter_time.global_avg * (len(iterable) - i)\n                eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))\n                if torch.cuda.is_available():\n                    print(log_msg.format(\n                        i * get_world_size(), world_len_iterable, eta=eta_string,\n                        meters=str(self),\n                        time=str(iter_time), data=str(data_time),\n                        memory=torch.cuda.max_memory_allocated() / MB))\n                else:\n                    print(log_msg.format(\n                        i * get_world_size(), world_len_iterable, eta=eta_string,\n                        meters=str(self),\n                        time=str(iter_time), data=str(data_time)))\n\n                if self.vis is not None:\n                    y_data = [self.meters[legend_name].median\n                              for legend_name in self.vis.viz_opts['legend']\n                              if legend_name in self.meters]\n                    y_data.append(iter_time.median)\n\n                    self.vis.plot(y_data, i * get_world_size() + (epoch - 1) * world_len_iterable)\n\n                # DEBUG\n                # if i != 0 and i % self.print_freq == 0:\n                if self.debug and i % self.print_freq == 0:\n                    break\n\n            i += 1\n            end = time.time()\n\n        # if self.vis is not None:\n        #     self.vis.reset()\n\n        total_time = time.time() - start_time\n        total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n        print('{} Total time: {} ({:.4f} s / it)'.format(\n            header, total_time_str, total_time / len(iterable)))\n\n\ndef get_sha():\n    cwd = os.path.dirname(os.path.abspath(__file__))\n\n    def _run(command):\n        return subprocess.check_output(command, cwd=cwd).decode('ascii').strip()\n    sha = 'N/A'\n    diff = \"clean\"\n    branch = 'N/A'\n    try:\n        sha = _run(['git', 'rev-parse', 'HEAD'])\n        subprocess.check_output(['git', 'diff'], cwd=cwd)\n        diff = _run(['git', 'diff-index', 'HEAD'])\n        diff = \"has uncommited changes\" if diff else \"clean\"\n        branch = _run(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])\n    except Exception:\n        pass\n    message = f\"sha: {sha}, status: {diff}, branch: {branch}\"\n    return message\n\n\ndef collate_fn(batch):\n    batch = list(zip(*batch))\n    batch[0] = nested_tensor_from_tensor_list(batch[0])\n    return tuple(batch)\n\n\ndef _max_by_axis(the_list):\n    # type: (List[List[int]]) -> List[int]\n    maxes = the_list[0]\n    for sublist in the_list[1:]:\n        for index, item in enumerate(sublist):\n            maxes[index] = max(maxes[index], item)\n    return maxes\n\n\ndef nested_tensor_from_tensor_list(tensor_list: List[Tensor]):\n    # TODO make this more general\n    if tensor_list[0].ndim == 3:\n        # TODO make it support different-sized images\n        max_size = _max_by_axis([list(img.shape) for img in tensor_list])\n        # min_size = tuple(min(s) for s in 
zip(*[img.shape for img in tensor_list]))\n        batch_shape = [len(tensor_list)] + max_size\n        b, _, h, w = batch_shape\n        dtype = tensor_list[0].dtype\n        device = tensor_list[0].device\n        tensor = torch.zeros(batch_shape, dtype=dtype, device=device)\n        mask = torch.ones((b, h, w), dtype=torch.bool, device=device)\n        for img, pad_img, m in zip(tensor_list, tensor, mask):\n            pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n            m[: img.shape[1], :img.shape[2]] = False\n    else:\n        raise ValueError('not supported')\n    return NestedTensor(tensor, mask)\n\n\nclass NestedTensor(object):\n    def __init__(self, tensors, mask: Optional[Tensor] = None):\n        self.tensors = tensors\n        self.mask = mask\n\n    def to(self, device):\n        # type: (Device) -> NestedTensor # noqa\n        cast_tensor = self.tensors.to(device)\n        mask = self.mask\n        if mask is not None:\n            assert mask is not None\n            cast_mask = mask.to(device)\n        else:\n            cast_mask = None\n        return NestedTensor(cast_tensor, cast_mask)\n\n    def decompose(self):\n        return self.tensors, self.mask\n\n    def __repr__(self):\n        return str(self.tensors)\n\n    def unmasked_tensor(self, index: int):\n        tensor = self.tensors[index]\n\n        if not self.mask[index].any():\n            return tensor\n\n        h_index = self.mask[index, 0, :].nonzero(as_tuple=True)[0]\n        if len(h_index):\n            tensor = tensor[:, :, :h_index[0]]\n\n        w_index = self.mask[index, :, 0].nonzero(as_tuple=True)[0]\n        if len(w_index):\n            tensor = tensor[:, :w_index[0], :]\n\n        return tensor\n\n\ndef setup_for_distributed(is_master):\n    \"\"\"\n    This function disables printing when not in master process\n    \"\"\"\n    import builtins as __builtin__\n    builtin_print = __builtin__.print\n\n    def print(*args, **kwargs):\n        force = kwargs.pop('force', False)\n\n        if is_master or force:\n            builtin_print(*args, **kwargs)\n\n    __builtin__.print = print\n\n    if not is_master:\n        def line(*args, **kwargs):\n            pass\n        def images(*args, **kwargs):\n            pass\n        Visdom.line = line\n        Visdom.images = images\n\n\ndef is_dist_avail_and_initialized():\n    if not dist.is_available():\n        return False\n    if not dist.is_initialized():\n        return False\n    return True\n\n\ndef get_world_size():\n    if not is_dist_avail_and_initialized():\n        return 1\n    return dist.get_world_size()\n\n\ndef get_rank():\n    if not is_dist_avail_and_initialized():\n        return 0\n    return dist.get_rank()\n\n\ndef is_main_process():\n    return get_rank() == 0\n\n\ndef save_on_master(*args, **kwargs):\n    if is_main_process():\n        torch.save(*args, **kwargs)\n\n\ndef init_distributed_mode(args):\n    if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:\n        args.rank = int(os.environ[\"RANK\"])\n        args.world_size = int(os.environ['WORLD_SIZE'])\n        args.gpu = int(os.environ['LOCAL_RANK'])\n    elif 'SLURM_PROCID' in os.environ and 'SLURM_PTY_PORT' not in os.environ:\n        # slurm process but not interactive\n        args.rank = int(os.environ['SLURM_PROCID'])\n        args.gpu = args.rank % torch.cuda.device_count()\n    else:\n        print('Not using distributed mode')\n        args.distributed = False\n        return\n\n    args.distributed = True\n\n    
torch.cuda.set_device(args.gpu)\n    args.dist_backend = 'nccl'\n    print(f'| distributed init (rank {args.rank}): {args.dist_url}', flush=True)\n    torch.distributed.init_process_group(\n        backend=args.dist_backend, init_method=args.dist_url,\n        world_size=args.world_size, rank=args.rank)\n    # torch.distributed.barrier()\n    setup_for_distributed(args.rank == 0)\n\n\n@torch.no_grad()\ndef accuracy(output, target, topk=(1,)):\n    \"\"\"Computes the precision@k for the specified values of k\"\"\"\n    if target.numel() == 0:\n        return [torch.zeros([], device=output.device)]\n    maxk = max(topk)\n    batch_size = target.size(0)\n\n    _, pred = output.topk(maxk, 1, True, True)\n    pred = pred.t()\n    correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n    res = []\n    for k in topk:\n        correct_k = correct[:k].view(-1).float().sum(0)\n        res.append(correct_k.mul_(100.0 / batch_size))\n    return res\n\n\ndef interpolate(input, size=None, scale_factor=None, mode=\"nearest\", align_corners=None):\n    # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor\n    \"\"\"\n    Equivalent to nn.functional.interpolate, but with support for empty batch sizes.\n    This will eventually be supported natively by PyTorch, and this\n    class can go away.\n    \"\"\"\n    if int(torchvision.__version__.split('.')[0]) <= 0 and int(torchvision.__version__.split('.')[1]) < 7:\n        if input.numel() > 0:\n            return torch.nn.functional.interpolate(\n                input, size, scale_factor, mode, align_corners\n            )\n\n        output_shape = _output_size(2, input, size, scale_factor)\n        output_shape = list(input.shape[:-2]) + list(output_shape)\n        return _new_empty_tensor(input, output_shape)\n    else:\n        return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners)\n\n\nclass DistributedWeightedSampler(torch.utils.data.DistributedSampler):\n    def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True, replacement=True):\n        super(DistributedWeightedSampler, self).__init__(dataset, num_replicas, rank, shuffle)\n\n        assert replacement\n\n        self.replacement = replacement\n\n    def __iter__(self):\n        iter_indices = super(DistributedWeightedSampler, self).__iter__()\n        if hasattr(self.dataset, 'sample_weight'):\n            indices = list(iter_indices)\n\n            weights = torch.tensor([self.dataset.sample_weight(idx) for idx in indices])\n\n            g = torch.Generator()\n            g.manual_seed(self.epoch)\n\n            weight_indices = torch.multinomial(\n                weights, self.num_samples, self.replacement, generator=g)\n            indices = torch.tensor(indices)[weight_indices]\n\n            iter_indices = iter(indices.tolist())\n        return iter_indices\n\n    def __len__(self):\n        return self.num_samples\n\n\ndef inverse_sigmoid(x, eps=1e-5):\n    x = x.clamp(min=0, max=1)\n    x1 = x.clamp(min=eps)\n    x2 = (1 - x).clamp(min=eps)\n    return torch.log(x1/x2)\n\n\ndef dice_loss(inputs, targets, num_boxes):\n    \"\"\"\n    Compute the DICE loss, similar to generalized IOU for masks\n    Args:\n        inputs: A float tensor of arbitrary shape.\n                The predictions for each example.\n        targets: A float tensor with the same shape as inputs. 
Stores the binary\n                 classification label for each element in inputs\n                (0 for the negative class and 1 for the positive class).\n    \"\"\"\n    inputs = inputs.sigmoid()\n    inputs = inputs.flatten(1)\n    numerator = 2 * (inputs * targets).sum(1)\n    denominator = inputs.sum(-1) + targets.sum(-1)\n    loss = 1 - (numerator + 1) / (denominator + 1)\n    return loss.sum() / num_boxes\n\n\ndef sigmoid_focal_loss(inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2, query_mask=None, reduction=True):\n    \"\"\"\n    Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.\n    Args:\n        inputs: A float tensor of arbitrary shape.\n                The predictions for each example.\n        targets: A float tensor with the same shape as inputs. Stores the binary\n                 classification label for each element in inputs\n                (0 for the negative class and 1 for the positive class).\n        alpha: (optional) Weighting factor in range (0,1) to balance\n                positive vs negative examples. Default = -1 (no weighting).\n        gamma: Exponent of the modulating factor (1 - p_t) to\n               balance easy vs hard examples.\n    Returns:\n        Loss tensor\n    \"\"\"\n    prob = inputs.sigmoid()\n    ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction=\"none\")\n    p_t = prob * targets + (1 - prob) * (1 - targets)\n    loss = ce_loss * ((1 - p_t) ** gamma)\n\n    if alpha >= 0:\n        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)\n        loss = alpha_t * loss\n\n    if not reduction:\n        return loss\n\n    if query_mask is not None:\n        loss = torch.stack([l[m].mean(0) for l, m in zip(loss, query_mask)])\n        return loss.sum() / num_boxes\n    return loss.mean(1).sum() / num_boxes\n\n\ndef nested_dict_to_namespace(dictionary):\n    namespace = dictionary\n    if isinstance(dictionary, dict):\n        namespace = Namespace(**dictionary)\n        for key, value in dictionary.items():\n            setattr(namespace, key, nested_dict_to_namespace(value))\n    return namespace\n\ndef nested_dict_to_device(dictionary, device):\n    output = {}\n    if isinstance(dictionary, dict):\n        for key, value in dictionary.items():\n            output[key] = nested_dict_to_device(value, device)\n        return output\n    return dictionary.to(device)\n"
  },
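Illustrative use of the `NestedTensor` helpers above: images of different sizes are padded to a common shape and a boolean mask marks the padded pixels. The image sizes here are arbitrary.

# Example (arbitrary image sizes): batch two differently sized images into a
# NestedTensor; the mask is True where a pixel is padding.
import torch

imgs = [torch.rand(3, 480, 512), torch.rand(3, 500, 400)]
samples = nested_tensor_from_tensor_list(imgs)
tensors, mask = samples.decompose()
print(tensors.shape)  # torch.Size([2, 3, 500, 512])  padded to the max H and W
print(mask.shape)     # torch.Size([2, 500, 512])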
  {
    "path": "src/trackformer/util/plot_utils.py",
    "content": "\"\"\"\nPlotting utilities to visualize training logs.\n\"\"\"\nfrom pathlib import Path, PurePath\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport torch\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas\n\n\ndef fig_to_numpy(fig):\n    w, h = fig.get_size_inches() * fig.dpi\n    w = int(w.item())\n    h = int(h.item())\n    canvas = FigureCanvas(fig)\n    canvas.draw()\n    numpy_image = np.frombuffer(canvas.tostring_rgb(), dtype='uint8').reshape(h, w, 3)\n    return np.copy(numpy_image)\n\n\ndef get_vis_win_names(vis_dict):\n    vis_win_names = {\n        outer_k: {\n            inner_k: inner_v.win\n            for inner_k, inner_v in outer_v.items()\n        }\n        for outer_k, outer_v in vis_dict.items()\n    }\n    return vis_win_names\n\n\ndef plot_logs(logs, fields=('class_error', 'loss_bbox_unscaled', 'mAP'), ewm_col=0, log_name='log.txt'):\n    '''\n    Function to plot specific fields from training log(s). Plots both training and test results.\n\n    :: Inputs - logs = list containing Path objects, each pointing to individual dir with a log file\n              - fields = which results to plot from each log file - plots both training and test for each field.\n              - ewm_col = optional, which column to use as the exponential weighted smoothing of the plots\n              - log_name = optional, name of log file if different than default 'log.txt'.\n\n    :: Outputs - matplotlib plots of results in fields, color coded for each log file.\n               - solid lines are training results, dashed lines are test results.\n\n    '''\n    func_name = \"plot_utils.py::plot_logs\"\n\n    # verify logs is a list of Paths (list[Paths]) or single Pathlib object Path,\n    # convert single Path to list to avoid 'not iterable' error\n\n    if not isinstance(logs, list):\n        if isinstance(logs, PurePath):\n            logs = [logs]\n            print(f\"{func_name} info: logs param expects a list argument, converted to list[Path].\")\n        else:\n            raise ValueError(f\"{func_name} - invalid argument for logs parameter.\\n \\\n            Expect list[Path] or single Path obj, received {type(logs)}\")\n\n    # verify valid dir(s) and that every item in list is Path object\n    for i, dir in enumerate(logs):\n        if not isinstance(dir, PurePath):\n            raise ValueError(f\"{func_name} - non-Path object in logs argument of {type(dir)}: \\n{dir}\")\n        if dir.exists():\n            continue\n        raise ValueError(f\"{func_name} - invalid directory in logs argument:\\n{dir}\")\n\n    # load log file(s) and plot\n    dfs = [pd.read_json(Path(p) / log_name, lines=True) for p in logs]\n\n    fig, axs = plt.subplots(ncols=len(fields), figsize=(16, 5))\n\n    for df, color in zip(dfs, sns.color_palette(n_colors=len(logs))):\n        for j, field in enumerate(fields):\n            if field == 'mAP':\n                coco_eval = pd.DataFrame(pd.np.stack(df.test_coco_eval.dropna().values)[:, 1]).ewm(com=ewm_col).mean()\n                axs[j].plot(coco_eval, c=color)\n            else:\n                df.interpolate().ewm(com=ewm_col).mean().plot(\n                    y=[f'train_{field}', f'test_{field}'],\n                    ax=axs[j],\n                    color=[color] * 2,\n                    style=['-', '--']\n                )\n    for ax, field in zip(axs, fields):\n        ax.legend([Path(p).name for p in logs])\n        ax.set_title(field)\n\n\ndef 
plot_precision_recall(files, naming_scheme='iter'):\n    if naming_scheme == 'exp_id':\n        # name becomes exp_id\n        names = [f.parts[-3] for f in files]\n    elif naming_scheme == 'iter':\n        names = [f.stem for f in files]\n    else:\n        raise ValueError(f'not supported {naming_scheme}')\n    fig, axs = plt.subplots(ncols=2, figsize=(16, 5))\n    for f, color, name in zip(files, sns.color_palette(\"Blues\", n_colors=len(files)), names):\n        data = torch.load(f)\n        # precision is n_iou, n_points, n_cat, n_area, max_det\n        precision = data['precision']\n        recall = data['params'].recThrs\n        scores = data['scores']\n        # take precision for all classes, all areas and 100 detections\n        precision = precision[0, :, :, 0, -1].mean(1)\n        scores = scores[0, :, :, 0, -1].mean(1)\n        prec = precision.mean()\n        rec = data['recall'][0, :, 0, -1].mean()\n        print(f'{naming_scheme} {name}: mAP@50={prec * 100: 05.1f}, ' +\n              f'score={scores.mean():0.3f}, ' +\n              f'f1={2 * prec * rec / (prec + rec + 1e-8):0.3f}'\n              )\n        axs[0].plot(recall, precision, c=color)\n        axs[1].plot(recall, scores, c=color)\n\n    axs[0].set_title('Precision / Recall')\n    axs[0].legend(names)\n    axs[1].set_title('Scores / Recall')\n    axs[1].legend(names)\n    return fig, axs\n"
  },
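A hypothetical call to `plot_logs` above; the run directories and the chosen fields are placeholders and must match the train_/test_ columns actually present in each run's `log.txt`.

# Hypothetical call (directory names and fields are placeholders).
from pathlib import Path

log_dirs = [Path('runs/exp1'), Path('runs/exp2')]  # each must contain a log.txt
plot_logs(log_dirs, fields=('loss', 'class_error'), ewm_col=0)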
  {
    "path": "src/trackformer/util/track_utils.py",
    "content": "#########################################\n# Still ugly file with helper functions #\n#########################################\n\nimport os\nfrom collections import defaultdict\nfrom os import path as osp\n\nimport cv2\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport motmetrics as mm\nimport numpy as np\nimport torch\nimport torchvision.transforms.functional as F\nimport tqdm\nfrom cycler import cycler as cy\nfrom matplotlib import colors\nfrom scipy.interpolate import interp1d\n\nmatplotlib.use('Agg')\n\n\n# From frcnn/utils/bbox.py\ndef bbox_overlaps(boxes, query_boxes):\n    \"\"\"\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray or tensor or variable\n    query_boxes: (K, 4) ndarray or tensor or variable\n    Returns\n    -------\n    overlaps: (N, K) overlap between boxes and query_boxes\n    \"\"\"\n    if isinstance(boxes, np.ndarray):\n        boxes = torch.from_numpy(boxes)\n        query_boxes = torch.from_numpy(query_boxes)\n        out_fn = lambda x: x.numpy()  # If input is ndarray, turn the overlaps back to ndarray when return\n    else:\n        out_fn = lambda x: x\n\n    box_areas = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1)\n    query_areas = (query_boxes[:, 2] - query_boxes[:, 0] + 1) * (query_boxes[:, 3] - query_boxes[:, 1] + 1)\n\n    iw = (torch.min(boxes[:, 2:3], query_boxes[:, 2:3].t()) - torch.max(boxes[:, 0:1],\n                                                                        query_boxes[:, 0:1].t()) + 1).clamp(min=0)\n    ih = (torch.min(boxes[:, 3:4], query_boxes[:, 3:4].t()) - torch.max(boxes[:, 1:2],\n                                                                        query_boxes[:, 1:2].t()) + 1).clamp(min=0)\n    ua = box_areas.view(-1, 1) + query_areas.view(1, -1) - iw * ih\n    overlaps = iw * ih / ua\n    return out_fn(overlaps)\n\n\ndef rand_cmap(nlabels, type='bright', first_color_black=True, last_color_black=False, verbose=False):\n    \"\"\"\n    Creates a random colormap to be used together with matplotlib. Useful for segmentation tasks\n    :param nlabels: Number of labels (size of colormap)\n    :param type: 'bright' for strong colors, 'soft' for pastel colors\n    :param first_color_black: Option to use first color as black, True or False\n    :param last_color_black: Option to use last color as black, True or False\n    :param verbose: Prints the number of labels and shows the colormap. 
True or False\n    :return: colormap for matplotlib\n    \"\"\"\n    import colorsys\n\n    import numpy as np\n    from matplotlib.colors import LinearSegmentedColormap\n\n\n    if type not in ('bright', 'soft'):\n        print ('Please choose \"bright\" or \"soft\" for type')\n        return\n\n    if verbose:\n        print('Number of labels: ' + str(nlabels))\n\n    # Generate color map for bright colors, based on hsv\n    if type == 'bright':\n        randHSVcolors = [(np.random.uniform(low=0.0, high=1),\n                          np.random.uniform(low=0.2, high=1),\n                          np.random.uniform(low=0.9, high=1)) for i in range(nlabels)]\n\n        # Convert HSV list to RGB\n        randRGBcolors = []\n        for HSVcolor in randHSVcolors:\n            randRGBcolors.append(colorsys.hsv_to_rgb(HSVcolor[0], HSVcolor[1], HSVcolor[2]))\n\n        if first_color_black:\n            randRGBcolors[0] = [0, 0, 0]\n\n        if last_color_black:\n            randRGBcolors[-1] = [0, 0, 0]\n\n        random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels)\n\n    # Generate soft pastel colors, by limiting the RGB spectrum\n    if type == 'soft':\n        low = 0.6\n        high = 0.95\n        randRGBcolors = [(np.random.uniform(low=low, high=high),\n                          np.random.uniform(low=low, high=high),\n                          np.random.uniform(low=low, high=high)) for i in range(nlabels)]\n\n        if first_color_black:\n            randRGBcolors[0] = [0, 0, 0]\n\n        if last_color_black:\n            randRGBcolors[-1] = [0, 0, 0]\n        random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels)\n\n    # Display colorbar\n    if verbose:\n        from matplotlib import colorbar, colors\n        from matplotlib import pyplot as plt\n        fig, ax = plt.subplots(1, 1, figsize=(15, 0.5))\n\n        bounds = np.linspace(0, nlabels, nlabels + 1)\n        norm = colors.BoundaryNorm(bounds, nlabels)\n\n        colorbar.ColorbarBase(ax, cmap=random_colormap, norm=norm, spacing='proportional', ticks=None,\n                              boundaries=bounds, format='%1i', orientation=u'horizontal')\n\n    return random_colormap\n\n\ndef plot_sequence(tracks, data_loader, output_dir, write_images, generate_attention_maps):\n    \"\"\"Plots a whole sequence\n\n    Args:\n        tracks (dict): The dictionary containing the track dictionaries in the form tracks[track_id][frame] = bb\n        db (torch.utils.data.Dataset): The dataset with the images belonging to the tracks (e.g. 
MOT_Sequence object)\n        output_dir (String): Directory where to save the resulting images\n    \"\"\"\n    if not osp.exists(output_dir):\n        os.makedirs(output_dir)\n\n    # infinite color loop\n    # cyl = cy('ec', COLORS)\n    # loop_cy_iter = cyl()\n    # styles = defaultdict(lambda: next(loop_cy_iter))\n\n    # cmap = plt.cm.get_cmap('hsv', )\n    mx = 0\n    for track_id, track_data in tracks.items():\n        mx = max(mx, track_id)\n    cmap = rand_cmap(mx, type='bright', first_color_black=False, last_color_black=False)\n\n    # if generate_attention_maps:\n    #     attention_maps_per_track = {\n    #         track_id: (np.concatenate([t['attention_map'] for t in track.values()])\n    #                    if len(track) > 1\n    #                    else list(track.values())[0]['attention_map'])\n    #         for track_id, track in tracks.items()}\n    #     attention_map_thresholds = {\n    #         track_id: np.histogram(maps, bins=2)[1][1]\n    #         for track_id, maps in attention_maps_per_track.items()}\n\n        # _, attention_maps_bin_edges = np.histogram(all_attention_maps, bins=2)\n\n    for frame_id, frame_data  in enumerate(tqdm.tqdm(data_loader)):\n        img_path = frame_data['img_path'][0]\n        img = cv2.imread(img_path)[:, :, (2, 1, 0)]\n        height, width, _ = img.shape\n\n        fig = plt.figure()\n        fig.set_size_inches(width / 96, height / 96)\n        ax = plt.Axes(fig, [0., 0., 1., 1.])\n        ax.set_axis_off()\n        fig.add_axes(ax)\n        ax.imshow(img)\n\n        if generate_attention_maps:\n            attention_map_img = np.zeros((height, width, 4))\n\n        for track_id, track_data in tracks.items():\n            if frame_id in track_data.keys():\n                bbox = track_data[frame_id]['bbox']\n\n                if 'mask' in track_data[frame_id]:\n                    mask = track_data[frame_id]['mask']\n                    mask = np.ma.masked_where(mask == 0.0, mask)\n\n                    ax.imshow(mask, alpha=0.5, cmap=colors.ListedColormap([cmap(track_id)]))\n\n                    annotate_color = 'white'\n                else:\n                    ax.add_patch(\n                        plt.Rectangle(\n                            (bbox[0], bbox[1]),\n                            bbox[2] - bbox[0],\n                            bbox[3] - bbox[1],\n                            fill=False,\n                            linewidth=2.0,\n                            color=cmap(track_id),\n                        ))\n\n                    annotate_color = cmap(track_id)\n\n                if write_images == 'debug':\n                    ax.annotate(\n                        f\"{track_id} - {track_data[frame_id]['obj_ind']} ({track_data[frame_id]['score']:.2f})\",\n                        (bbox[0] + (bbox[2] - bbox[0]) / 2.0, bbox[1] + (bbox[3] - bbox[1]) / 2.0),\n                        color=annotate_color, weight='bold', fontsize=12, ha='center', va='center')\n\n                if 'attention_map' in track_data[frame_id]:\n                    attention_map = track_data[frame_id]['attention_map']\n                    attention_map = cv2.resize(attention_map, (width, height))\n\n                    # attention_map_img = np.ones((height, width, 4)) * cmap(track_id)\n                    # # max value will be at 0.75 transparency\n                    # attention_map_img[:, :, 3] = attention_map * 0.75 / attention_map.max()\n\n                    # _, bin_edges = np.histogram(attention_map, bins=2)\n                    # 
attention_map_img[:, :][attention_map < bin_edges[1]] = 0.0\n\n                    # attention_map_img += attention_map_img\n\n                    # _, bin_edges = np.histogram(attention_map, bins=2)\n\n                    norm_attention_map = attention_map / attention_map.max()\n\n                    high_att_mask = norm_attention_map > 0.25 # bin_edges[1]\n                    attention_map_img[:, :][high_att_mask] = cmap(track_id)\n                    attention_map_img[:, :, 3][high_att_mask] = norm_attention_map[high_att_mask] * 0.5\n\n                    # attention_map_img[:, :] += (np.tile(attention_map[..., np.newaxis], (1,1,4)) / attention_map.max()) * cmap(track_id)\n                    # attention_map_img[:, :, 3] = 0.75\n\n        if generate_attention_maps:\n            ax.imshow(attention_map_img, vmin=0.0, vmax=1.0)\n\n        plt.axis('off')\n        # plt.tight_layout()\n        plt.draw()\n        plt.savefig(osp.join(output_dir, osp.basename(img_path)), dpi=96)\n        plt.close()\n\n\ndef interpolate_tracks(tracks):\n    for i, track in tracks.items():\n        frames = []\n        x0 = []\n        y0 = []\n        x1 = []\n        y1 = []\n\n        for f, data in track.items():\n            frames.append(f)\n            x0.append(data['bbox'][0])\n            y0.append(data['bbox'][1])\n            x1.append(data['bbox'][2])\n            y1.append(data['bbox'][3])\n\n        if frames:\n            x0_inter = interp1d(frames, x0)\n            y0_inter = interp1d(frames, y0)\n            x1_inter = interp1d(frames, x1)\n            y1_inter = interp1d(frames, y1)\n\n            for f in range(min(frames), max(frames) + 1):\n                bbox = np.array([\n                    x0_inter(f),\n                    y0_inter(f),\n                    x1_inter(f),\n                    y1_inter(f)])\n                tracks[i][f]['bbox'] = bbox\n        else:\n            tracks[i][frames[0]]['bbox'] = np.array([\n                x0[0], y0[0], x1[0], y1[0]])\n\n    return interpolated\n\n\ndef bbox_transform_inv(boxes, deltas):\n    # Input should be both tensor or both Variable and on the same device\n    if len(boxes) == 0:\n        return deltas.detach() * 0\n\n    widths = boxes[:, 2] - boxes[:, 0] + 1.0\n    heights = boxes[:, 3] - boxes[:, 1] + 1.0\n    ctr_x = boxes[:, 0] + 0.5 * widths\n    ctr_y = boxes[:, 1] + 0.5 * heights\n\n    dx = deltas[:, 0::4]\n    dy = deltas[:, 1::4]\n    dw = deltas[:, 2::4]\n    dh = deltas[:, 3::4]\n\n    pred_ctr_x = dx * widths.unsqueeze(1) + ctr_x.unsqueeze(1)\n    pred_ctr_y = dy * heights.unsqueeze(1) + ctr_y.unsqueeze(1)\n    pred_w = torch.exp(dw) * widths.unsqueeze(1)\n    pred_h = torch.exp(dh) * heights.unsqueeze(1)\n\n    pred_boxes = torch.cat(\n        [_.unsqueeze(2) for _ in [pred_ctr_x - 0.5 * pred_w,\n                                pred_ctr_y - 0.5 * pred_h,\n                                pred_ctr_x + 0.5 * pred_w,\n                                pred_ctr_y + 0.5 * pred_h]], 2).view(len(boxes), -1)\n    return pred_boxes\n\n\ndef clip_boxes(boxes, im_shape):\n    \"\"\"\n    Clip boxes to image boundaries.\n    boxes must be tensor or Variable, im_shape can be anything but Variable\n    \"\"\"\n    if not hasattr(boxes, 'data'):\n        boxes_ = boxes.numpy()\n\n    boxes = boxes.view(boxes.size(0), -1, 4)\n    boxes = torch.stack([\n        boxes[:, :, 0].clamp(0, im_shape[1] - 1),\n        boxes[:, :, 1].clamp(0, im_shape[0] - 1),\n        boxes[:, :, 2].clamp(0, im_shape[1] - 1),\n        boxes[:, :, 3].clamp(0, 
im_shape[0] - 1)\n    ], 2).view(boxes.size(0), -1)\n\n    return boxes\n\n\ndef get_center(pos):\n    x1 = pos[0, 0]\n    y1 = pos[0, 1]\n    x2 = pos[0, 2]\n    y2 = pos[0, 3]\n    return torch.Tensor([(x2 + x1) / 2, (y2 + y1) / 2]).cuda()\n\n\ndef get_width(pos):\n    return pos[0, 2] - pos[0, 0]\n\n\ndef get_height(pos):\n    return pos[0, 3] - pos[0, 1]\n\n\ndef make_pos(cx, cy, width, height):\n    return torch.Tensor([[\n        cx - width / 2,\n        cy - height / 2,\n        cx + width / 2,\n        cy + height / 2\n    ]]).cuda()\n\n\ndef warp_pos(pos, warp_matrix):\n    p1 = torch.Tensor([pos[0, 0], pos[0, 1], 1]).view(3, 1)\n    p2 = torch.Tensor([pos[0, 2], pos[0, 3], 1]).view(3, 1)\n    p1_n = torch.mm(warp_matrix, p1).view(1, 2)\n    p2_n = torch.mm(warp_matrix, p2).view(1, 2)\n    return torch.cat((p1_n, p2_n), 1).view(1, -1).cuda()\n\n\ndef get_mot_accum(results, seq_loader):\n    mot_accum = mm.MOTAccumulator(auto_id=True)\n\n    for frame_id, frame_data in enumerate(seq_loader):\n        gt = frame_data['gt']\n        gt_ids = []\n        if gt:\n            gt_boxes = []\n            for gt_id, gt_box in gt.items():\n                gt_ids.append(gt_id)\n                gt_boxes.append(gt_box[0])\n\n            gt_boxes = np.stack(gt_boxes, axis=0)\n            # x1, y1, x2, y2 --> x1, y1, width, height\n            gt_boxes = np.stack(\n                (gt_boxes[:, 0],\n                 gt_boxes[:, 1],\n                 gt_boxes[:, 2] - gt_boxes[:, 0],\n                 gt_boxes[:, 3] - gt_boxes[:, 1]), axis=1)\n        else:\n            gt_boxes = np.array([])\n\n        track_ids = []\n        track_boxes = []\n        for track_id, track_data in results.items():\n            if frame_id in track_data:\n                track_ids.append(track_id)\n                # frames = x1, y1, x2, y2, score\n                track_boxes.append(track_data[frame_id]['bbox'])\n\n        if track_ids:\n            track_boxes = np.stack(track_boxes, axis=0)\n            # x1, y1, x2, y2 --> x1, y1, width, height\n            track_boxes = np.stack(\n                (track_boxes[:, 0],\n                 track_boxes[:, 1],\n                 track_boxes[:, 2] - track_boxes[:, 0],\n                 track_boxes[:, 3] - track_boxes[:, 1]), axis=1)\n        else:\n            track_boxes = np.array([])\n\n        distance = mm.distances.iou_matrix(gt_boxes, track_boxes, max_iou=0.5)\n\n        mot_accum.update(\n            gt_ids,\n            track_ids,\n            distance)\n\n    return mot_accum\n\n\ndef evaluate_mot_accums(accums, names, generate_overall=True):\n    mh = mm.metrics.create()\n    summary = mh.compute_many(\n        accums,\n        metrics=mm.metrics.motchallenge_metrics,\n        names=names,\n        generate_overall=generate_overall,)\n\n    str_summary = mm.io.render_summary(\n        summary,\n        formatters=mh.formatters,\n        namemap=mm.io.motchallenge_metric_names,)\n    return summary, str_summary\n"
  },
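  {
    "path": "src/trackformer/util/mot_eval_example.py",
    "content": "# NOTE: illustrative usage sketch, not part of the original code base. It\n# shows how the tracking helpers from the preceding util module\n# (interpolate_tracks, get_mot_accum, evaluate_mot_accums) fit together for\n# offline MOT evaluation. The import path and the structure of\n# results/seq_loader are assumptions inferred from how those helpers consume\n# their arguments; adjust them to the actual module layout.\nfrom trackformer.util.track_utils import (\n    evaluate_mot_accums, get_mot_accum, interpolate_tracks)\n\n\ndef evaluate_sequences(results_per_seq, seq_loaders, seq_names):\n    # results_per_seq: one dict per sequence\n    #   {track_id: {frame_id: {'bbox': [x1, y1, x2, y2]}}}\n    # seq_loaders: one iterable per sequence yielding per-frame dicts\n    #   {'gt': {gt_id: [box]}}\n    mot_accums = []\n    for results, seq_loader in zip(results_per_seq, seq_loaders):\n        # fill temporal gaps in each track before matching against the GT\n        results = interpolate_tracks(results)\n        mot_accums.append(get_mot_accum(results, seq_loader))\n\n    # render the MOTChallenge metrics (MOTA, IDF1, ...) for all sequences\n    summary, str_summary = evaluate_mot_accums(\n        mot_accums, seq_names, generate_overall=True)\n    print(str_summary)\n    return summary\n"
  },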
  {
    "path": "src/trackformer/vis.py",
    "content": "import copy\nimport logging\n\nimport matplotlib.patches as mpatches\nimport numpy as np\nimport torch\nimport torchvision.transforms as T\nfrom matplotlib import colors\nfrom matplotlib import pyplot as plt\nfrom torchvision.ops.boxes import clip_boxes_to_image\nfrom visdom import Visdom\n\nfrom .util.plot_utils import fig_to_numpy\n\nlogging.getLogger('visdom').setLevel(logging.CRITICAL)\n\n\nclass BaseVis(object):\n\n    def __init__(self, viz_opts, update_mode='append', env=None, win=None,\n                 resume=False, port=8097, server='http://localhost'):\n        self.viz_opts = viz_opts\n        self.update_mode = update_mode\n        self.win = win\n        if env is None:\n            env = 'main'\n        self.viz = Visdom(env=env, port=port, server=server)\n        # if resume first plot should not update with replace\n        self.removed = not resume\n\n    def win_exists(self):\n        return self.viz.win_exists(self.win)\n\n    def close(self):\n        if self.win is not None:\n            self.viz.close(win=self.win)\n            self.win = None\n\n    def register_event_handler(self, handler):\n        self.viz.register_event_handler(handler, self.win)\n\n\nclass LineVis(BaseVis):\n    \"\"\"Visdom Line Visualization Helper Class.\"\"\"\n\n    def plot(self, y_data, x_label):\n        \"\"\"Plot given data.\n\n        Appends new data to exisiting line visualization.\n        \"\"\"\n        update = self.update_mode\n        # update mode must be None the first time or after plot data was removed\n        if self.removed:\n            update = None\n            self.removed = False\n\n        if isinstance(x_label, list):\n            Y = torch.Tensor(y_data)\n            X = torch.Tensor(x_label)\n        else:\n            y_data = [d.cpu() if torch.is_tensor(d)\n                      else torch.tensor(d)\n                      for d in y_data]\n\n            Y = torch.Tensor(y_data).unsqueeze(dim=0)\n            X = torch.Tensor([x_label])\n\n        win = self.viz.line(X=X, Y=Y, opts=self.viz_opts, win=self.win, update=update)\n\n        if self.win is None:\n            self.win = win\n        self.viz.save([self.viz.env])\n\n    def reset(self):\n        #TODO: currently reset does not empty directly only on the next plot.\n        # update='remove' is not working as expected.\n        if self.win is not None:\n            # self.viz.line(X=None, Y=None, win=self.win, update='remove')\n            self.removed = True\n\n\nclass ImgVis(BaseVis):\n    \"\"\"Visdom Image Visualization Helper Class.\"\"\"\n\n    def plot(self, images):\n        \"\"\"Plot given images.\"\"\"\n\n        # images = [img.data if isinstance(img, torch.autograd.Variable)\n        #           else img for img in images]\n        # images = [img.squeeze(dim=0) if len(img.size()) == 4\n        #           else img for img in images]\n\n        self.win = self.viz.images(\n            images,\n            nrow=1,\n            opts=self.viz_opts,\n            win=self.win, )\n        self.viz.save([self.viz.env])\n\n\ndef vis_results(visualizer, img, result, target, tracking):\n    inv_normalize = T.Normalize(\n        mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.255],\n        std=[1 / 0.229, 1 / 0.224, 1 / 0.255]\n    )\n\n    imgs = [inv_normalize(img).cpu()]\n    img_ids = [target['image_id'].item()]\n    for key in ['prev', 'prev_prev']:\n        if f'{key}_image' in target:\n            imgs.append(inv_normalize(target[f'{key}_image']).cpu())\n            
img_ids.append(target[f'{key}_target'][f'image_id'].item())\n\n    # img.shape=[3, H, W]\n    dpi = 96\n    figure, axarr = plt.subplots(len(imgs))\n    figure.tight_layout()\n    figure.set_dpi(dpi)\n    figure.set_size_inches(\n        imgs[0].shape[2] / dpi,\n        imgs[0].shape[1] * len(imgs) / dpi)\n\n    if len(imgs) == 1:\n        axarr = [axarr]\n\n    for ax, img, img_id in zip(axarr, imgs, img_ids):\n        ax.set_axis_off()\n        ax.imshow(img.permute(1, 2, 0).clamp(0, 1))\n\n        ax.text(\n            0, 0, f'IMG_ID={img_id}',\n            fontsize=20, bbox=dict(facecolor='white', alpha=0.5))\n\n    num_track_queries = num_track_queries_with_id = 0\n    if tracking:\n        num_track_queries = len(target['track_query_boxes'])\n        num_track_queries_with_id = len(target['track_query_match_ids'])\n        track_ids = target['track_ids'][target['track_query_match_ids']]\n\n    keep = result['scores'].cpu() > result['scores_no_object'].cpu()\n\n    cmap = plt.cm.get_cmap('hsv', len(keep))\n\n    prop_i = 0\n    for box_id in range(len(keep)):\n        rect_color = 'green'\n        offset = 0\n        text = f\"{result['scores'][box_id]:0.2f}\"\n\n        if tracking:\n            if target['track_queries_fal_pos_mask'][box_id]:\n                rect_color = 'red'\n            elif target['track_queries_mask'][box_id]:\n                offset = 50\n                rect_color = 'blue'\n                text = (\n                    f\"{track_ids[prop_i]}\\n\"\n                    f\"{text}\\n\"\n                    f\"{result['track_queries_with_id_iou'][prop_i]:0.2f}\")\n                prop_i += 1\n\n        if not keep[box_id]:\n            continue\n\n        # x1, y1, x2, y2 = result['boxes'][box_id]\n        result_boxes = clip_boxes_to_image(result['boxes'], target['size'])\n        x1, y1, x2, y2 = result_boxes[box_id]\n\n        axarr[0].add_patch(plt.Rectangle(\n            (x1, y1), x2 - x1, y2 - y1,\n            fill=False, color=rect_color, linewidth=2))\n\n        axarr[0].text(\n            x1, y1 + offset, text,\n            fontsize=10, bbox=dict(facecolor='white', alpha=0.5))\n\n        if 'masks' in result:\n            mask = result['masks'][box_id][0].numpy()\n            mask = np.ma.masked_where(mask == 0.0, mask)\n\n            axarr[0].imshow(\n                mask, alpha=0.5, cmap=colors.ListedColormap([cmap(box_id)]))\n\n    query_keep = keep\n    if tracking:\n        query_keep = keep[target['track_queries_mask'] == 0]\n\n    legend_handles = [mpatches.Patch(\n        color='green',\n        label=f\"object queries ({query_keep.sum()}/{len(target['boxes']) - num_track_queries_with_id})\\n- cls_score\")]\n\n    if num_track_queries:\n        track_queries_label = (\n            f\"track queries ({keep[target['track_queries_mask']].sum() - keep[target['track_queries_fal_pos_mask']].sum()}\"\n            f\"/{num_track_queries_with_id})\\n- track_id\\n- cls_score\\n- iou\")\n\n        legend_handles.append(mpatches.Patch(\n            color='blue',\n            label=track_queries_label))\n\n    if num_track_queries_with_id != num_track_queries:\n        track_queries_fal_pos_label = (\n            f\"false track queries ({keep[target['track_queries_fal_pos_mask']].sum()}\"\n            f\"/{num_track_queries - num_track_queries_with_id})\")\n\n        legend_handles.append(mpatches.Patch(\n            color='red',\n            label=track_queries_fal_pos_label))\n\n    axarr[0].legend(handles=legend_handles)\n\n    i = 1\n    for frame_prefix 
in ['prev', 'prev_prev']:\n        # if f'{frame_prefix}_image_id' not in target or f'{frame_prefix}_boxes' not in target:\n        if f'{frame_prefix}_target' not in target:\n            continue\n\n        frame_target = target[f'{frame_prefix}_target']\n        cmap = plt.cm.get_cmap('hsv', len(frame_target['track_ids']))\n\n        for j, track_id in enumerate(frame_target['track_ids']):\n            x1, y1, x2, y2 = frame_target['boxes'][j]\n            axarr[i].text(\n                x1, y1, f\"track_id={track_id}\",\n                fontsize=10, bbox=dict(facecolor='white', alpha=0.5))\n            axarr[i].add_patch(plt.Rectangle(\n                (x1, y1), x2 - x1, y2 - y1,\n                fill=False, color='green', linewidth=2))\n\n            if 'masks' in frame_target:\n                mask = frame_target['masks'][j].cpu().numpy()\n                mask = np.ma.masked_where(mask == 0.0, mask)\n\n                axarr[i].imshow(\n                    mask, alpha=0.5, cmap=colors.ListedColormap([cmap(j)]))\n        i += 1\n\n    plt.subplots_adjust(wspace=0.01, hspace=0.01)\n    plt.axis('off')\n\n    img = fig_to_numpy(figure).transpose(2, 0, 1)\n    plt.close()\n\n    visualizer.plot(img)\n\n\ndef build_visualizers(args: dict, train_loss_names: list):\n    visualizers = {}\n    visualizers['train'] = {}\n    visualizers['val'] = {}\n\n    if args.eval_only or args.no_vis or not args.vis_server:\n        return visualizers\n\n    env_name = str(args.output_dir).split('/')[-1]\n\n    vis_kwargs = {\n        'env': env_name,\n        'resume': args.resume and args.resume_vis,\n        'port': args.vis_port,\n        'server': args.vis_server}\n\n    #\n    # METRICS\n    #\n\n    legend = ['loss']\n    legend.extend(train_loss_names)\n    # for i in range(len(train_loss_names)):\n    #     legend.append(f\"{train_loss_names[i]}_unscaled\")\n\n    legend.extend([\n        'class_error',\n        # 'loss',\n        # 'loss_bbox',\n        # 'loss_ce',\n        # 'loss_giou',\n        # 'loss_mask',\n        # 'loss_dice',\n        # 'cardinality_error_unscaled',\n        # 'loss_bbox_unscaled',\n        # 'loss_ce_unscaled',\n        # 'loss_giou_unscaled',\n        # 'loss_mask_unscaled',\n        # 'loss_dice_unscaled',\n        'lr',\n        'lr_backbone',\n        'iter_time'\n    ])\n\n    # if not args.masks:\n    #     legend.remove('loss_mask')\n    #     legend.remove('loss_mask_unscaled')\n    #     legend.remove('loss_dice')\n    #     legend.remove('loss_dice_unscaled')\n\n    opts = dict(\n        title=\"TRAIN METRICS ITERS\",\n        xlabel='ITERS',\n        ylabel='METRICS',\n        width=1000,\n        height=500,\n        legend=legend)\n\n    # TRAIN\n    visualizers['train']['iter_metrics'] = LineVis(opts, **vis_kwargs)\n\n    opts = copy.deepcopy(opts)\n    opts['title'] = \"TRAIN METRICS EPOCHS\"\n    opts['xlabel'] = \"EPOCHS\"\n    opts['legend'].remove('lr')\n    opts['legend'].remove('lr_backbone')\n    opts['legend'].remove('iter_time')\n    visualizers['train']['epoch_metrics'] = LineVis(opts, **vis_kwargs)\n\n    # VAL\n    opts = copy.deepcopy(opts)\n    opts['title'] = \"VAL METRICS EPOCHS\"\n    opts['xlabel'] = \"EPOCHS\"\n    visualizers['val']['epoch_metrics'] = LineVis(opts, **vis_kwargs)\n\n    #\n    # EVAL COCO\n    #\n\n    legend = [\n        'BBOX AP IoU=0.50:0.95',\n        'BBOX AP IoU=0.50',\n        'BBOX AP IoU=0.75',\n    ]\n\n    if args.masks:\n        legend.extend([\n            'MASK AP IoU=0.50:0.95',\n            'MASK AP 
IoU=0.50',\n            'MASK AP IoU=0.75'])\n\n    if args.tracking and args.tracking_eval:\n        legend.extend(['MOTA', 'IDF1'])\n\n    opts = dict(\n        title='TRAIN EVAL EPOCHS',\n        xlabel='EPOCHS',\n        ylabel='METRICS',\n        width=1000,\n        height=500,\n        legend=legend)\n\n    # TRAIN\n    visualizers['train']['epoch_eval'] = LineVis(opts, **vis_kwargs)\n\n    # VAL\n    opts = copy.deepcopy(opts)\n    opts['title'] = 'VAL EVAL EPOCHS'\n    visualizers['val']['epoch_eval'] = LineVis(opts, **vis_kwargs)\n\n    #\n    # EXAMPLE RESULTS\n    #\n\n    opts = dict(\n        title=\"TRAIN EXAMPLE RESULTS\",\n        width=2500,\n        height=2500)\n\n    # TRAIN\n    visualizers['train']['example_results'] = ImgVis(opts, **vis_kwargs)\n\n    # VAL\n    opts = copy.deepcopy(opts)\n    opts['title'] = 'VAL EXAMPLE RESULTS'\n    visualizers['val']['example_results'] = ImgVis(opts, **vis_kwargs)\n\n    return visualizers\n"
  },
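  {
    "path": "src/trackformer/vis_example.py",
    "content": "# NOTE: illustrative usage sketch, not part of the original code base. It\n# shows how the LineVis helper from vis.py can be used on its own: the first\n# plot() call creates the Visdom window and subsequent calls append to the\n# same line. A running Visdom server is assumed (python -m visdom.server,\n# default port 8097); the window title, environment name and toy loss values\n# are made up for this example.\nfrom trackformer.vis import LineVis\n\nopts = dict(\n    title='TOY LOSS',\n    xlabel='ITERS',\n    ylabel='LOSS',\n    width=600,\n    height=400,\n    legend=['loss'])\n\nline_vis = LineVis(opts, env='debug', port=8097, server='http://localhost')\n\nfor iteration, loss in enumerate([1.0, 0.7, 0.5, 0.4]):\n    # y_data holds one value per legend entry, x_label is the shared x value\n    line_vis.plot([loss], iteration)\n"
  },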
  {
    "path": "src/train.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport datetime\nimport os\nimport random\nimport time\nfrom argparse import Namespace\nfrom pathlib import Path\n\nimport numpy as np\nimport sacred\nimport torch\nimport yaml\nfrom torch.utils.data import DataLoader, DistributedSampler\n\nimport trackformer.util.misc as utils\nfrom trackformer.datasets import build_dataset\nfrom trackformer.engine import evaluate, train_one_epoch\nfrom trackformer.models import build_model\nfrom trackformer.util.misc import nested_dict_to_namespace\nfrom trackformer.util.plot_utils import get_vis_win_names\nfrom trackformer.vis import build_visualizers\n\nex = sacred.Experiment('train')\nex.add_config('cfgs/train.yaml')\nex.add_named_config('deformable', 'cfgs/train_deformable.yaml')\nex.add_named_config('tracking', 'cfgs/train_tracking.yaml')\nex.add_named_config('crowdhuman', 'cfgs/train_crowdhuman.yaml')\nex.add_named_config('mot_coco_person', 'cfgs/train_mot_coco_person.yaml')\nex.add_named_config('mot17_crowdhuman', 'cfgs/train_mot17_crowdhuman.yaml')\nex.add_named_config('mot17', 'cfgs/train_mot17.yaml')\nex.add_named_config('mots20', 'cfgs/train_mots20.yaml')\nex.add_named_config('mot20_crowdhuman', 'cfgs/train_mot20_crowdhuman.yaml')\nex.add_named_config('coco_person_masks', 'cfgs/train_coco_person_masks.yaml')\nex.add_named_config('full_res', 'cfgs/train_full_res.yaml')\nex.add_named_config('multi_frame', 'cfgs/train_multi_frame.yaml')\n\n\ndef train(args: Namespace) -> None:\n    print(args)\n\n    utils.init_distributed_mode(args)\n    print(\"git:\\n  {}\\n\".format(utils.get_sha()))\n\n    if args.debug:\n        # args.tracking_eval = False\n        args.num_workers = 0\n\n    if not args.deformable:\n        assert args.num_feature_levels == 1\n    if args.tracking:\n        # assert args.batch_size == 1\n\n        if args.tracking_eval:\n            assert 'mot' in args.dataset\n\n    output_dir = Path(args.output_dir)\n    if args.output_dir:\n        output_dir.mkdir(parents=True, exist_ok=True)\n\n        yaml.dump(\n            vars(args),\n            open(output_dir / 'config.yaml', 'w'), allow_unicode=True)\n\n    device = torch.device(args.device)\n\n    # fix the seed for reproducibility\n    seed = args.seed + utils.get_rank()\n\n    os.environ['PYTHONHASHSEED'] = str(seed)\n    # os.environ['NCCL_DEBUG'] = 'INFO'\n    # os.environ[\"NCCL_TREE_THRESHOLD\"] = \"0\"\n\n    np.random.seed(seed)\n    random.seed(seed)\n\n    torch.manual_seed(seed)\n    torch.cuda.manual_seed(seed)\n    torch.backends.cudnn.deterministic = True\n\n    model, criterion, postprocessors = build_model(args)\n    model.to(device)\n\n    visualizers = build_visualizers(args, list(criterion.weight_dict.keys()))\n\n    model_without_ddp = model\n    if args.distributed:\n        model = torch.nn.parallel.DistributedDataParallel(\n            model, device_ids=[args.gpu], find_unused_parameters=True)\n        model_without_ddp = model.module\n    n_parameters = sum(p.numel() for p in model.parameters() if p.requires_grad)\n    print('NUM TRAINABLE MODEL PARAMS:', n_parameters)\n\n    def match_name_keywords(n, name_keywords):\n        out = False\n        for b in name_keywords:\n            if b in n:\n                out = True\n                break\n        return out\n\n    param_dicts = [\n        {\"params\": [p for n, p in model_without_ddp.named_parameters()\n                    if not match_name_keywords(n, args.lr_backbone_names + args.lr_linear_proj_names + 
['layers_track_attention']) and p.requires_grad],\n         \"lr\": args.lr,},\n        {\"params\": [p for n, p in model_without_ddp.named_parameters()\n                    if match_name_keywords(n, args.lr_backbone_names) and p.requires_grad],\n         \"lr\": args.lr_backbone},\n        {\"params\": [p for n, p in model_without_ddp.named_parameters()\n                    if match_name_keywords(n, args.lr_linear_proj_names) and p.requires_grad],\n         \"lr\":  args.lr * args.lr_linear_proj_mult}]\n    if args.track_attention:\n        param_dicts.append({\n            \"params\": [p for n, p in model_without_ddp.named_parameters()\n                       if match_name_keywords(n, ['layers_track_attention']) and p.requires_grad],\n            \"lr\": args.lr_track})\n\n    optimizer = torch.optim.AdamW(param_dicts, lr=args.lr,\n                                  weight_decay=args.weight_decay)\n\n    lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [args.lr_drop])\n\n    dataset_train = build_dataset(split='train', args=args)\n    dataset_val = build_dataset(split='val', args=args)\n\n    if args.distributed:\n        sampler_train = utils.DistributedWeightedSampler(dataset_train)\n        # sampler_train = DistributedSampler(dataset_train)\n        sampler_val = DistributedSampler(dataset_val, shuffle=False)\n    else:\n        sampler_train = torch.utils.data.RandomSampler(dataset_train)\n        sampler_val = torch.utils.data.SequentialSampler(dataset_val)\n\n    batch_sampler_train = torch.utils.data.BatchSampler(\n        sampler_train, args.batch_size, drop_last=True)\n\n    data_loader_train = DataLoader(\n        dataset_train,\n        batch_sampler=batch_sampler_train,\n        collate_fn=utils.collate_fn,\n        num_workers=args.num_workers)\n    data_loader_val = DataLoader(\n        dataset_val, args.batch_size,\n        sampler=sampler_val,\n        drop_last=False,\n        collate_fn=utils.collate_fn,\n        num_workers=args.num_workers)\n\n    best_val_stats = None\n    if args.resume:\n        if args.resume.startswith('https'):\n            checkpoint = torch.hub.load_state_dict_from_url(\n                args.resume, map_location='cpu', check_hash=True)\n        else:\n            checkpoint = torch.load(args.resume, map_location='cpu')\n\n        model_state_dict = model_without_ddp.state_dict()\n        checkpoint_state_dict = checkpoint['model']\n        checkpoint_state_dict = {\n            k.replace('detr.', ''): v for k, v in checkpoint['model'].items()}\n\n        for k, v in checkpoint_state_dict.items():\n            if k not in model_state_dict:\n                print(f'Where is {k} {tuple(v.shape)}?')\n\n        resume_state_dict = {}\n        for k, v in model_state_dict.items():\n            if k not in checkpoint_state_dict:\n                resume_value = v\n                print(f'Load {k} {tuple(v.shape)} from scratch.')\n            elif v.shape != checkpoint_state_dict[k].shape:\n                checkpoint_value = checkpoint_state_dict[k]\n                num_dims = len(checkpoint_value.shape)\n\n                if 'norm' in k:\n                    resume_value = checkpoint_value.repeat(2)\n                elif 'multihead_attn' in k or 'self_attn' in k:\n                    resume_value = checkpoint_value.repeat(num_dims * (2, ))\n                elif 'reference_points' in k and checkpoint_value.shape[0] * 2 == v.shape[0]:\n                    resume_value = v\n                    resume_value[:2] = checkpoint_value.clone()\n   
             elif 'linear1' in k or 'query_embed' in k:\n                    resume_state_dict[k] = v\n                    print(f'Load {k} {tuple(v.shape)} from scratch.')\n                    continue\n                #     if checkpoint_value.shape[1] * 2 == v.shape[1]:\n                #         # from hidden size 256 to 512\n                #         resume_value = checkpoint_value.repeat(1, 2)\n                #     elif checkpoint_value.shape[0] * 5 == v.shape[0]:\n                #         # from 100 to 500 object queries\n                #         resume_value = checkpoint_value.repeat(5, 1)\n                #     elif checkpoint_value.shape[0] > v.shape[0]:\n                #         resume_value = checkpoint_value[:v.shape[0]]\n                #     elif checkpoint_value.shape[0] < v.shape[0]:\n                #         resume_value = v\n                #     else:\n                #         raise NotImplementedError\n                elif 'linear2' in k or 'input_proj' in k:\n                    resume_value = checkpoint_value.repeat((2,) + (num_dims - 1) * (1, ))\n                elif 'class_embed' in k:\n                    # person and no-object class\n                    # resume_value = checkpoint_value[[1, -1]]\n                    # resume_value = checkpoint_value[[0, -1]]\n                    # resume_value = checkpoint_value[[1,]]\n                    resume_value = checkpoint_value[list(range(0, 20))]\n                    # resume_value = v\n                    # print(f'Load {k} {tuple(v.shape)} from scratch.')\n                else:\n                    raise NotImplementedError(f\"No rule for {k} with shape {v.shape}.\")\n\n                print(f\"Load {k} {tuple(v.shape)} from resume model \"\n                      f\"{tuple(checkpoint_value.shape)}.\")\n            elif args.resume_shift_neuron and 'class_embed' in k:\n                checkpoint_value = checkpoint_state_dict[k]\n                # no-object class\n                resume_value = checkpoint_value.clone()\n                # no-object class\n                # resume_value[:-2] = checkpoint_value[1:-1].clone()\n                resume_value[:-1] = checkpoint_value[1:].clone()\n                resume_value[-2] = checkpoint_value[0].clone()\n                print(f\"Load {k} {tuple(v.shape)} from resume model and \"\n                      \"shift class embed neurons to start with label=0 at neuron=0.\")\n            else:\n                resume_value = checkpoint_state_dict[k]\n\n            resume_state_dict[k] = resume_value\n\n        if args.masks and args.load_mask_head_from_model is not None:\n            checkpoint_mask_head = torch.load(\n                args.load_mask_head_from_model, map_location='cpu')\n\n            for k, v in resume_state_dict.items():\n\n                if (('bbox_attention' in k or 'mask_head' in k)\n                    and v.shape == checkpoint_mask_head['model'][k].shape):\n                    print(f'Load {k} {tuple(v.shape)} from mask head model.')\n                    resume_state_dict[k] = checkpoint_mask_head['model'][k]\n\n        model_without_ddp.load_state_dict(resume_state_dict)\n\n        # RESUME OPTIM\n        if not args.eval_only and args.resume_optim:\n            if 'optimizer' in checkpoint:\n                if args.overwrite_lrs:\n                    for c_p, p in zip(checkpoint['optimizer']['param_groups'], param_dicts):\n                        c_p['lr'] = p['lr']\n\n                optimizer.load_state_dict(checkpoint['optimizer'])\n            if 
'lr_scheduler' in checkpoint:\n                if args.overwrite_lr_scheduler:\n                    checkpoint['lr_scheduler'].pop('milestones')\n                lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n                if args.overwrite_lr_scheduler:\n                    lr_scheduler.step(checkpoint['lr_scheduler']['last_epoch'])\n            if 'epoch' in checkpoint:\n                args.start_epoch = checkpoint['epoch'] + 1\n                print(f\"RESUME EPOCH: {args.start_epoch}\")\n\n            best_val_stats = checkpoint['best_val_stats']\n\n        # RESUME VIS\n        if not args.eval_only and args.resume_vis and 'vis_win_names' in checkpoint:\n            for k, v in visualizers.items():\n                for k_inner in v.keys():\n                    visualizers[k][k_inner].win = checkpoint['vis_win_names'][k][k_inner]\n\n    if args.eval_only:\n        _, coco_evaluator = evaluate(\n            model, criterion, postprocessors, data_loader_val, device,\n            output_dir, visualizers['val'], args, 0)\n        if args.output_dir:\n            utils.save_on_master(coco_evaluator.coco_eval[\"bbox\"].eval, output_dir / \"eval.pth\")\n\n        return\n\n    print(\"Start training\")\n    start_time = time.time()\n    for epoch in range(args.start_epoch, args.epochs + 1):\n        # TRAIN\n        if args.distributed:\n            sampler_train.set_epoch(epoch)\n        train_one_epoch(\n            model, criterion, postprocessors, data_loader_train, optimizer, device, epoch,\n            visualizers['train'], args)\n\n        if args.eval_train:\n            random_transforms = data_loader_train.dataset._transforms\n            data_loader_train.dataset._transforms = data_loader_val.dataset._transforms\n            evaluate(\n                model, criterion, postprocessors, data_loader_train, device,\n                output_dir, visualizers['train'], args, epoch)\n            data_loader_train.dataset._transforms = random_transforms\n\n        lr_scheduler.step()\n\n        checkpoint_paths = [output_dir / 'checkpoint.pth']\n\n        # VAL\n        if epoch == 1 or not epoch % args.val_interval:\n            val_stats, _ = evaluate(\n                model, criterion, postprocessors, data_loader_val, device,\n                output_dir, visualizers['val'], args, epoch)\n\n            checkpoint_paths = [output_dir / 'checkpoint.pth']\n            # extra checkpoint before LR drop and every 100 epochs\n            # if (epoch + 1) % args.lr_drop == 0 or (epoch + 1) % 10 == 0:\n            #     checkpoint_paths.append(output_dir / f'checkpoint{epoch:04}.pth')\n\n            # checkpoint for best validation stats\n            stat_names = ['BBOX_AP_IoU_0_50-0_95', 'BBOX_AP_IoU_0_50', 'BBOX_AP_IoU_0_75']\n            if args.masks:\n                stat_names.extend(['MASK_AP_IoU_0_50-0_95', 'MASK_AP_IoU_0_50', 'MASK_AP_IoU_0_75'])\n            if args.tracking and args.tracking_eval:\n                stat_names.extend(['MOTA', 'IDF1'])\n\n            if best_val_stats is None:\n                best_val_stats = val_stats\n            best_val_stats = [best_stat if best_stat > stat else stat\n                              for best_stat, stat in zip(best_val_stats,\n                                                         val_stats)]\n            for b_s, s, n in zip(best_val_stats, val_stats, stat_names):\n                if b_s == s:\n                    checkpoint_paths.append(output_dir / f\"checkpoint_best_{n}.pth\")\n\n        # MODEL SAVING\n        if 
args.output_dir:\n            if args.save_model_interval and not epoch % args.save_model_interval:\n                checkpoint_paths.append(output_dir / f\"checkpoint_epoch_{epoch}.pth\")\n\n            for checkpoint_path in checkpoint_paths:\n                utils.save_on_master({\n                    'model': model_without_ddp.state_dict(),\n                    'optimizer': optimizer.state_dict(),\n                    'lr_scheduler': lr_scheduler.state_dict(),\n                    'epoch': epoch,\n                    'args': args,\n                    'vis_win_names': get_vis_win_names(visualizers),\n                    'best_val_stats': best_val_stats\n                }, checkpoint_path)\n\n    total_time = time.time() - start_time\n    total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n    print('Training time {}'.format(total_time_str))\n\n\n@ex.main\ndef load_config(_config, _run):\n    \"\"\" We use sacred only for config loading from YAML files. \"\"\"\n    sacred.commands.print_config(_run)\n\n\nif __name__ == '__main__':\n    # TODO: hierarchical namespacing for nested dicts\n    config = ex.run_commandline().config\n    args = nested_dict_to_namespace(config)\n    # args.train = Namespace(**config['train'])\n    train(args)\n"
  }
]