Full Code of timmeinhardt/trackformer for AI

Showing preview only (486K chars total).
Repository: timmeinhardt/trackformer
Branch: main
Commit: e468bf156b02
Files: 91
Total size: 458.1 KB

Directory structure:
gitextract_f7imb6c9/

├── .circleci/
│   └── config.yml
├── .github/
│   ├── CODE_OF_CONDUCT.md
│   ├── CONTRIBUTING.md
│   └── ISSUE_TEMPLATE/
│       ├── bugs.md
│       ├── questions-help-support.md
│       └── unexpected-problems-bugs.md
├── .gitignore
├── LICENSE
├── README.md
├── cfgs/
│   ├── submit.yaml
│   ├── track.yaml
│   ├── track_reid.yaml
│   ├── train.yaml
│   ├── train_coco_person_masks.yaml
│   ├── train_crowdhuman.yaml
│   ├── train_deformable.yaml
│   ├── train_full_res.yaml
│   ├── train_mot17.yaml
│   ├── train_mot17_crowdhuman.yaml
│   ├── train_mot20_crowdhuman.yaml
│   ├── train_mot_coco_person.yaml
│   ├── train_mots20.yaml
│   ├── train_multi_frame.yaml
│   └── train_tracking.yaml
├── data/
│   └── .gitignore
├── docs/
│   ├── INSTALL.md
│   └── TRAIN.md
├── logs/
│   ├── .gitignore
│   └── visdom/
│       └── .gitignore
├── models/
│   └── .gitignore
├── requirements.txt
├── setup.py
└── src/
    ├── combine_frames.py
    ├── compute_best_mean_epoch_from_splits.py
    ├── generate_coco_from_crowdhuman.py
    ├── generate_coco_from_mot.py
    ├── parse_mot_results_to_tex.py
    ├── run_with_submitit.py
    ├── track.py
    ├── track_param_search.py
    ├── trackformer/
    │   ├── __init__.py
    │   ├── datasets/
    │   │   ├── __init__.py
    │   │   ├── coco.py
    │   │   ├── coco_eval.py
    │   │   ├── coco_panoptic.py
    │   │   ├── crowdhuman.py
    │   │   ├── mot.py
    │   │   ├── panoptic_eval.py
    │   │   ├── tracking/
    │   │   │   ├── __init__.py
    │   │   │   ├── demo_sequence.py
    │   │   │   ├── factory.py
    │   │   │   ├── mot17_sequence.py
    │   │   │   ├── mot20_sequence.py
    │   │   │   ├── mot_wrapper.py
    │   │   │   └── mots20_sequence.py
    │   │   └── transforms.py
    │   ├── engine.py
    │   ├── models/
    │   │   ├── __init__.py
    │   │   ├── backbone.py
    │   │   ├── deformable_detr.py
    │   │   ├── deformable_transformer.py
    │   │   ├── detr.py
    │   │   ├── detr_segmentation.py
    │   │   ├── detr_tracking.py
    │   │   ├── matcher.py
    │   │   ├── ops/
    │   │   │   ├── .gitignore
    │   │   │   ├── functions/
    │   │   │   │   ├── __init__.py
    │   │   │   │   └── ms_deform_attn_func.py
    │   │   │   ├── make.sh
    │   │   │   ├── modules/
    │   │   │   │   ├── __init__.py
    │   │   │   │   └── ms_deform_attn.py
    │   │   │   ├── setup.py
    │   │   │   ├── src/
    │   │   │   │   ├── cpu/
    │   │   │   │   │   ├── ms_deform_attn_cpu.cpp
    │   │   │   │   │   └── ms_deform_attn_cpu.h
    │   │   │   │   ├── cuda/
    │   │   │   │   │   ├── ms_deform_attn_cuda.cu
    │   │   │   │   │   ├── ms_deform_attn_cuda.h
    │   │   │   │   │   └── ms_deform_im2col_cuda.cuh
    │   │   │   │   ├── ms_deform_attn.h
    │   │   │   │   └── vision.cpp
    │   │   │   ├── test.py
    │   │   │   └── test_double_precision.py
    │   │   ├── position_encoding.py
    │   │   ├── tracker.py
    │   │   └── transformer.py
    │   ├── util/
    │   │   ├── __init__.py
    │   │   ├── box_ops.py
    │   │   ├── misc.py
    │   │   ├── plot_utils.py
    │   │   └── track_utils.py
    │   └── vis.py
    └── train.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .circleci/config.yml
================================================
version: 2.1

jobs:
  python_lint:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run:
          command: |
            pip install --user --progress-bar off flake8 typing
            flake8 .

  test:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run:
          command: |
            pip install --user --progress-bar off scipy pytest
            pip install --user --progress-bar off --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
            pytest .

workflows:
  build:
    jobs:
      - python_lint
      - test


================================================
FILE: .github/CODE_OF_CONDUCT.md
================================================
# Code of Conduct

Facebook has adopted a Code of Conduct that we expect project participants to adhere to.
Please read the [full text](https://code.fb.com/codeofconduct/)
so that you can understand what actions will and will not be tolerated.


================================================
FILE: .github/CONTRIBUTING.md
================================================
# Contributing to DETR
We want to make contributing to this project as easy and transparent as
possible.

## Our Development Process
Minor changes and improvements will be released on an ongoing basis. Larger changes (e.g., changesets implementing a new paper) will be released on a more periodic basis.

## Pull Requests
We actively welcome your pull requests.

1. Fork the repo and create your branch from `master`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
5. Make sure your code lints.
6. If you haven't already, complete the Contributor License Agreement ("CLA").

## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Facebook's open source projects.

Complete your CLA here: <https://code.facebook.com/cla>

## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.

Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.

## Coding Style  
* 4 spaces for indentation rather than tabs
* 80 character line length
* PEP8 formatting following [Black](https://black.readthedocs.io/en/stable/)

## License
By contributing to DETR, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.


================================================
FILE: .github/ISSUE_TEMPLATE/bugs.md
================================================
---
name: "🐛 Bugs"
about: Report bugs in DETR
title: Please read & provide the following

---

## Instructions To Reproduce the 🐛 Bug:

1. what changes you made (`git diff`) or what code you wrote
```
<put diff or code here>
```
2. what exact command you run:
3. what you observed (including __full logs__):
```
<put logs here>
```
4. please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.

## Expected behavior:

If there is no obvious error in the "what you observed" section provided above,
please tell us the expected behavior.

## Environment:

Provide your environment information using the following command:
```
python -m torch.utils.collect_env
```


================================================
FILE: .github/ISSUE_TEMPLATE/questions-help-support.md
================================================
---
name: "How to do something❓"
about: How to do something using DETR?

---

## ❓ How to do something using DETR

Describe what you want to do, including:
1. what inputs you will provide, if any:
2. what outputs you are expecting:


NOTE:

1. Only general answers are provided.
   If you want to ask about "why X did not work", please use the
   [Unexpected behaviors](https://github.com/facebookresearch/detr/issues/new/choose) issue template.

2. About how to implement new models / new dataloader / new training logic, etc., check documentation first.

3. We do not answer general machine learning / computer vision questions that are not specific to DETR, such as how a model works, how to improve your training/make it converge, or what algorithm/methods can be used to achieve X.


================================================
FILE: .github/ISSUE_TEMPLATE/unexpected-problems-bugs.md
================================================
---
name: "Unexpected behaviors"
about: Run into unexpected behaviors when using DETR
title: Please read & provide the following

---

If you do not know the root cause of the problem, and wish someone to help you, please
post according to this template:

## Instructions To Reproduce the Issue:

1. what changes you made (`git diff`) or what code you wrote
```
<put diff or code here>
```
2. what exact command you run:
3. what you observed (including __full logs__):
```
<put logs here>
```
4. please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.

## Expected behavior:

If there is no obvious error in the "what you observed" section provided above,
please tell us the expected behavior.

If you expect the model to converge / work better, note that we do not give suggestions
on how to train a new model.
We will help only in one of the following two conditions:
(1) You're unable to reproduce the results in DETR model zoo.
(2) It indicates a DETR bug.

## Environment:

Provide your environment information using the following command:
```
python -m torch.utils.collect_env
```


================================================
FILE: .gitignore
================================================
.nfs*
*.ipynb
*.pyc
.dumbo.json
.DS_Store
.*.swp
*.pth
**/__pycache__/**
.ipynb_checkpoints/
datasets/data/
experiment-*
*.tmp
*.pkl
**/.mypy_cache/*
.mypy_cache/*
not_tracked_dir/
.vscode
.python-version
*.sbatch
*.egg-info
src/trackformer/models/ops/build*
src/trackformer/models/ops/dist*
src/trackformer/models/ops/lib*
src/trackformer/models/ops/temp*


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2020 - present, Facebook, Inc

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# TrackFormer: Multi-Object Tracking with Transformers

This repository provides the official implementation of the [TrackFormer: Multi-Object Tracking with Transformers](https://arxiv.org/abs/2101.02702) paper by [Tim Meinhardt](https://dvl.in.tum.de/team/meinhardt/), [Alexander Kirillov](https://alexander-kirillov.github.io/), [Laura Leal-Taixe](https://dvl.in.tum.de/team/lealtaixe/) and [Christoph Feichtenhofer](https://feichtenhofer.github.io/). The codebase builds upon [DETR](https://github.com/facebookresearch/detr), [Deformable DETR](https://github.com/fundamentalvision/Deformable-DETR) and [Tracktor](https://github.com/phil-bergmann/tracking_wo_bnw).

<!-- **As the paper is still under submission this repository will continuously be updated and might at times not reflect the current state of the [arXiv paper](https://arxiv.org/abs/2012.01866).** -->

<div align="center">
    <img src="docs/MOT17-03-SDP.gif" alt="MOT17-03-SDP" width="375"/>
    <img src="docs/MOTS20-07.gif" alt="MOTS20-07" width="375"/>
</div>

## Abstract

The challenging task of multi-object tracking (MOT) requires simultaneous reasoning about track initialization, identity, and spatiotemporal trajectories.
We formulate this task as a frame-to-frame set prediction problem and introduce TrackFormer, an end-to-end MOT approach based on an encoder-decoder Transformer architecture.
Our model achieves data association between frames via attention by evolving a set of track predictions through a video sequence.
The Transformer decoder initializes new tracks from static object queries and autoregressively follows existing tracks in space and time with the new concept of identity preserving track queries.
Both decoder query types benefit from self- and encoder-decoder attention on global frame-level features, thereby omitting any additional graph optimization and matching or modeling of motion and appearance.
TrackFormer represents a new tracking-by-attention paradigm and yields state-of-the-art performance on the task of multi-object tracking (MOT17) and segmentation (MOTS20).

<div align="center">
    <img src="docs/method.png" alt="TrackFormer casts multi-object tracking as a set prediction problem performing joint detection and tracking-by-attention. The architecture consists of a CNN for image feature extraction, a Transformer encoder for image feature encoding and a Transformer decoder which applies self- and encoder-decoder attention to produce output embeddings with bounding box and class information."/>
</div>

## Installation

We refer to our [docs/INSTALL.md](docs/INSTALL.md) for detailed installation instructions.

## Train TrackFormer

We refer to our [docs/TRAIN.md](docs/TRAIN.md) for detailed training instructions.

## Evaluate TrackFormer

In order to evaluate TrackFormer on a multi-object tracking dataset, we provide the `src/track.py` script, which supports several datasets and splits interchangeably via the `dataset_name` argument (see `src/trackformer/datasets/tracking/factory.py` for an overview of all datasets). The default tracking configuration is specified in `cfgs/track.yaml`. To facilitate the reproducibility of our results, we provide evaluation metrics for both the train and test set.

### MOT17

#### Private detections

```
python src/track.py with reid
```

<center>

| MOT17     | MOTA         | IDF1           |       MT     |     ML     |     FP       |     FN              |  ID SW.      |
|  :---:    | :---:        |     :---:      |    :---:     | :---:      |    :---:     |   :---:             |  :---:       |
| **Train** |     74.2     |     71.7       |     849      | 177        |      7431    |      78057          |  1449        |
| **Test**  |     74.1     |     68.0       |    1113      | 246        |     34602    |     108777          |  2829        |

</center>

#### Public detections (DPM, FRCNN, SDP)

```
python src/track.py with \
    reid \
    tracker_cfg.public_detections=min_iou_0_5 \
    obj_detect_checkpoint_file=models/mot17_deformable_multi_frame/checkpoint_epoch_50.pth
```

<center>

| MOT17     | MOTA         | IDF1           |       MT     |     ML     |     FP       |     FN              |  ID SW.      |
|  :---:    | :---:        |     :---:      |    :---:     | :---:      |    :---:     |   :---:             |  :---:       |
| **Train** |     64.6     |     63.7       |    621       | 675        |     4827     |     111958          |  2556        |
| **Test**  |     62.3     |     57.6       |    688       | 638        |     16591    |     192123          |  4018        |

</center>

### MOT20

#### Private detections

```
python src/track.py with \
    reid \
    dataset_name=MOT20-ALL \
    obj_detect_checkpoint_file=models/mot20_crowdhuman_deformable_multi_frame/checkpoint_epoch_50.pth
```

<center>

| MOT20     | MOTA         | IDF1           |       MT     |     ML     |     FP       |     FN              |  ID SW.      |
|  :---:    | :---:        |     :---:      |    :---:     | :---:      |    :---:     |   :---:             |  :---:       |
| **Train** |     81.0     |     73.3       |    1540      | 124        |     20807    |     192665          |  1961        |
| **Test**  |     68.6     |     65.7       |     666      | 181        |     20348    |     140373          |  1532        |

</center>

### MOTS20

```
python src/track.py with \
    dataset_name=MOTS20-ALL \
    obj_detect_checkpoint_file=models/mots20_train_masks/checkpoint.pth
```

Our tracking script applies only the MOT17 metrics evaluation but also outputs MOTS20 mask prediction files. To evaluate these, download the official [MOTChallengeEvalKit](https://github.com/dendorferpatrick/MOTChallengeEvalKit).

<center>

| MOTS20    | sMOTSA         | IDF1           |       FP     |     FN     |     IDs      |
|  :---:    | :---:          |     :---:      |    :---:     | :---:      |    :---:     |
| **Train** |     --         |     --         |    --        |   --       |     --       |
| **Test**  |     54.9       |     63.6       |    2233      | 7195       |     278      |

</center>

### Demo

To facilitate the application of TrackFormer, we provide a demo interface which allows for quick processing of a given video sequence.

```
ffmpeg -i data/snakeboard/snakeboard.mp4 -vf fps=30 data/snakeboard/%06d.png

python src/track.py with \
    dataset_name=DEMO \
    data_root_dir=data/snakeboard \
    output_dir=data/snakeboard \
    write_images=pretty
```

<div align="center">
    <img src="docs/snakeboard.gif" alt="Snakeboard demo" width="600"/>
</div>

## Publication
If you use this software in your research, please cite our publication:

```
@InProceedings{meinhardt2021trackformer,
    title={TrackFormer: Multi-Object Tracking with Transformers},
    author={Tim Meinhardt and Alexander Kirillov and Laura Leal-Taixe and Christoph Feichtenhofer},
    year={2022},
    month = {June},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
}
```

================================================
FILE: cfgs/submit.yaml
================================================
# Number of gpus to request on each node
num_gpus: 1
vram: 12GB
# memory allocated per GPU in GB
mem_per_gpu: 20
# Number of nodes to request
nodes: 1
# Duration of the job
timeout: 4320
# Job dir. Leave empty for automatic.
job_dir: ''
# Use to run jobs locally. ('debug', 'local', 'slurm')
cluster: debug
# Partition. Leave empty for automatic.
slurm_partition: ''
# Constraint. Leave empty for automatic.
slurm_constraint: ''
slurm_comment: ''
slurm_gres: ''
slurm_exclude: ''
cpus_per_task: 2

================================================
FILE: cfgs/track.yaml
================================================
output_dir: null
verbose: false
seed: 666

obj_detect_checkpoint_file: models/mot17_crowdhuman_deformable_multi_frame/checkpoint_epoch_40.pth

interpolate: False
# if available load tracking results and only evaluate
load_results_dir: null

# dataset (look into src/trackformer/datasets/tracking/factory.py)
dataset_name: MOT17-ALL-ALL
data_root_dir: data

# [False, 'debug', 'pretty']
# compile video with: `ffmpeg -f image2 -framerate 15 -i %06d.jpg -vcodec libx264 -y movie.mp4 -vf scale=320:-1`
write_images: False
# Maps are only visualized if write_images is True
generate_attention_maps: False

# track, evaluate and write images only for a range of frames (in float fraction)
frame_range:
    start: 0.0
    end: 1.0

tracker_cfg:
    # [False, 'center_distance', 'min_iou_0_5']
    public_detections: False
    # score threshold for detections
    detection_obj_score_thresh: 0.4
    # score threshold for keeping the track alive
    track_obj_score_thresh: 0.4
    # NMS threshold for detection
    detection_nms_thresh: 0.9
    # NMS threshold while tracking
    track_nms_thresh: 0.9
    # number of consecutive steps a score has to be below track_obj_score_thresh for a track to be terminated
    steps_termination: 1
    # distance of previous frame for multi-frame attention
    prev_frame_dist: 1
    # How many timesteps inactive tracks are kept and considered for reid
    inactive_patience: -1
    # How similar an image and an old track need to be to be considered the same person
    reid_sim_threshold: 0.0
    reid_sim_only: false
    reid_score_thresh: 0.4
    reid_greedy_matching: false
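The `tracker_cfg` block above is consumed as a plain nested mapping. A minimal sketch of reading such a fragment with PyYAML (`yaml` is an assumed dependency here, and the inline string is a shortened copy of the keys above, not the project's actual loader):

```python
import yaml  # PyYAML, an assumed dependency for this sketch

# Shortened inline copy of the tracker_cfg keys from cfgs/track.yaml.
CFG_TEXT = """
tracker_cfg:
  public_detections: false
  detection_obj_score_thresh: 0.4
  track_obj_score_thresh: 0.4
  detection_nms_thresh: 0.9
  track_nms_thresh: 0.9
  steps_termination: 1
"""

cfg = yaml.safe_load(CFG_TEXT)
tracker_cfg = cfg["tracker_cfg"]

# Values arrive as plain Python bools/floats/ints.
print(tracker_cfg["detection_obj_score_thresh"])  # 0.4
print(tracker_cfg["public_detections"])           # False
```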


================================================
FILE: cfgs/track_reid.yaml
================================================
tracker_cfg:
  inactive_patience: 5
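`track_reid.yaml` lists only the keys it changes; when invoked as `python src/track.py with reid`, these are overlaid onto the base `cfgs/track.yaml`. A minimal, illustrative deep-merge sketch of that overlay behavior (not the project's actual Sacred-based loader):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base`, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Shortened copies of the relevant keys from the two config files.
base = {"tracker_cfg": {"inactive_patience": -1, "reid_score_thresh": 0.4}}
reid = {"tracker_cfg": {"inactive_patience": 5}}  # cfgs/track_reid.yaml

merged = deep_merge(base, reid)
print(merged["tracker_cfg"])  # {'inactive_patience': 5, 'reid_score_thresh': 0.4}
```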


================================================
FILE: cfgs/train.yaml
================================================
lr: 0.0002
lr_backbone_names: ['backbone.0']
lr_backbone: 0.00002
lr_linear_proj_names: ['reference_points', 'sampling_offsets']
lr_linear_proj_mult: 0.1
lr_track: 0.0001
overwrite_lrs: false
overwrite_lr_scheduler: false
batch_size: 2
weight_decay: 0.0001
epochs: 50
lr_drop: 40
# gradient clipping max norm
clip_max_norm: 0.1
# Deformable DETR
deformable: false
with_box_refine: false
two_stage: false
# Model parameters
freeze_detr: false
load_mask_head_from_model: null
# Backbone
# Name of the convolutional backbone to use. ('resnet50', 'resnet101')
backbone: resnet50
# If true, we replace stride with dilation in the last convolutional block (DC5)
dilation: false
# Type of positional embedding to use on top of the image features. ('sine', 'learned')
position_embedding: sine
# Number of feature levels the encoder processes from the backbone
num_feature_levels: 1
# Transformer
# Number of encoding layers in the transformer
enc_layers: 6
# Number of decoding layers in the transformer
dec_layers: 6
# Intermediate size of the feedforward layers in the transformer blocks
dim_feedforward: 2048
# Size of the embeddings (dimension of the transformer)
hidden_dim: 256
# Dropout applied in the transformer
dropout: 0.1
# Number of attention heads inside the transformer's attentions
nheads: 8
# Number of object queries
num_queries: 100
pre_norm: false
dec_n_points: 4
enc_n_points: 4
# Tracking
tracking: false
# In addition to detection, also run tracking evaluation with the default configuration from `cfgs/track.yaml`
tracking_eval: true
# Range of possible random previous frames
track_prev_frame_range: 0
track_prev_frame_rnd_augs: 0.01
track_prev_prev_frame: False
track_backprop_prev_frame: False
track_query_false_positive_prob: 0.1
track_query_false_negative_prob: 0.4
# only for vanilla DETR
track_query_false_positive_eos_weight: true
track_attention: false
multi_frame_attention: false
multi_frame_encoding: true
multi_frame_attention_separate_encoder: true
merge_frame_features: false
overflow_boxes: false
# Segmentation
masks: false
# Matcher
# Class coefficient in the matching cost
set_cost_class: 1.0
# L1 box coefficient in the matching cost
set_cost_bbox: 5.0
# giou box coefficient in the matching cost
set_cost_giou: 2.0
# Loss
# Enable auxiliary decoding losses (loss at each decoder layer)
aux_loss: true
mask_loss_coef: 1.0
dice_loss_coef: 1.0
cls_loss_coef: 1.0
bbox_loss_coef: 5.0
giou_loss_coef: 2
# Relative classification weight of the no-object class
eos_coef: 0.1
focal_loss: false
focal_alpha: 0.25
focal_gamma: 2
# Dataset
dataset: coco
train_split: train
val_split: val
coco_path: data/coco_2017
coco_panoptic_path: null
mot_path_train: data/MOT17
mot_path_val: data/MOT17
crowdhuman_path: data/CrowdHuman
# allows for joint training of mot and crowdhuman/coco_person with the `mot_crowdhuman`/`mot_coco_person` dataset
crowdhuman_train_split: null
coco_person_train_split: null
coco_and_crowdhuman_prev_frame_rnd_augs: 0.2
coco_min_num_objects: 0
img_transform:
  max_size: 1333
  val_width: 800
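
These parameters feed a DETR-style aspect-preserving resize: the shorter side is scaled to the target width while the longer side is capped at `max_size`. A sketch of the sizing arithmetic (illustrative, not the repository's exact transform code):

```python
def resize_dims(width, height, target_size=800, max_size=1333):
    """Scale the shorter side to target_size, capping the longer side at max_size."""
    short, long = min(width, height), max(width, height)
    size = target_size
    if long / short * size > max_size:
        size = int(round(max_size * short / long))
    if height < width:
        new_height, new_width = size, int(round(size * width / height))
    else:
        new_width, new_height = size, int(round(size * height / width))
    return new_width, new_height

resize_dims(1920, 1080)  # -> (1333, 750): longer side capped at max_size
```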
# Miscellaneous
# path where to save, empty for no saving
output_dir: ''
# device to use for training / testing
device: cuda
seed: 42
# resume from checkpoint
resume: ''
resume_shift_neuron: False
# resume optimization from checkpoint
resume_optim: false
# resume Visdom visualization
resume_vis: false
start_epoch: 1
eval_only: false
eval_train: false
num_workers: 2
val_interval: 5
debug: false
# Epoch interval for model saving. If 0, only save the last and best models
save_model_interval: 5
# distributed training parameters
# number of distributed processes
world_size: 1
# url used to set up distributed training
dist_url: env://
# Visdom params
# vis_server: http://localhost
vis_server: ''
vis_port: 8090
vis_and_log_interval: 50
no_vis: false


================================================
FILE: cfgs/train_coco_person_masks.yaml
================================================
dataset: coco_person

load_mask_head_from_model: models/detr-r50-panoptic-00ce5173.pth
freeze_detr: true
masks: true

lr: 0.0001
lr_drop: 50
epochs: 50

================================================
FILE: cfgs/train_crowdhuman.yaml
================================================
dataset: mot_crowdhuman
crowdhuman_train_split: train_val
train_split: null
val_split: mot17_train_cross_val_frame_0_5_to_1_0_coco
epochs: 80
lr_drop: 50

================================================
FILE: cfgs/train_deformable.yaml
================================================
deformable: true
num_feature_levels: 4
num_queries: 300
dim_feedforward: 1024
focal_loss: true
focal_alpha: 0.25
focal_gamma: 2
cls_loss_coef: 2.0
set_cost_class: 2.0
overflow_boxes: true
with_box_refine: true

================================================
FILE: cfgs/train_full_res.yaml
================================================
img_transform:
  max_size: 1920
  val_width: 1080

================================================
FILE: cfgs/train_mot17.yaml
================================================
dataset: mot

train_split: mot17_train_coco
val_split: mot17_train_cross_val_frame_0_5_to_1_0_coco

mot_path_train: data/MOT17
mot_path_val: data/MOT17

resume: models/r50_deformable_detr_plus_iterative_bbox_refinement-checkpoint_hidden_dim_288.pth

epochs: 50
lr_drop: 10

================================================
FILE: cfgs/train_mot17_crowdhuman.yaml
================================================
dataset: mot_crowdhuman

crowdhuman_train_split: train_val
train_split: mot17_train_coco
val_split: mot17_train_cross_val_frame_0_5_to_1_0_coco

mot_path_train: data/MOT17
mot_path_val: data/MOT17

resume: models/crowdhuman_deformable_trackformer/checkpoint_epoch_80.pth

epochs: 40
lr_drop: 10

================================================
FILE: cfgs/train_mot20_crowdhuman.yaml
================================================
dataset: mot_crowdhuman

crowdhuman_train_split: train_val
train_split: mot20_train_coco
val_split: mot20_train_cross_val_frame_0_5_to_1_0_coco

mot_path_train: data/MOT20
mot_path_val: data/MOT20

resume: models/crowdhuman_deformable_trackformer/checkpoint_epoch_80.pth

epochs: 50
lr_drop: 10

================================================
FILE: cfgs/train_mot_coco_person.yaml
================================================
dataset: mot_coco_person
coco_person_train_split: train
train_split: null
val_split: mot17_train_cross_val_frame_0_5_to_1_0_coco

================================================
FILE: cfgs/train_mots20.yaml
================================================
dataset: mot
mot_path: data/MOTS20
train_split: mots20_train_coco
val_split: mots20_train_coco

resume: models/mot17_train_pretrain_CH_deformable_with_coco_person_masks/checkpoint.pth
masks: true
lr: 0.00001
lr_backbone: 0.000001

epochs: 40
lr_drop: 40

================================================
FILE: cfgs/train_multi_frame.yaml
================================================
num_queries: 500
hidden_dim: 288
multi_frame_attention: true
multi_frame_encoding: true
multi_frame_attention_separate_encoder: true

================================================
FILE: cfgs/train_tracking.yaml
================================================
tracking: true
tracking_eval: true
track_prev_frame_range: 5
track_query_false_positive_eos_weight: true

================================================
FILE: data/.gitignore
================================================
*
!.gitignore
!snakeboard


================================================
FILE: docs/INSTALL.md
================================================
# Installation

1. Clone and enter this repository:
    ```
    git clone git@github.com:timmeinhardt/trackformer.git
    cd trackformer
    ```

2. Install packages for Python 3.7:

    1. `pip3 install -r requirements.txt`
    2. Install PyTorch 1.5 and torchvision 0.6 from [here](https://pytorch.org/get-started/previous-versions/#v150).
    3. Install pycocotools (with fixed ignore flag): `pip3 install -U 'git+https://github.com/timmeinhardt/cocoapi.git#subdirectory=PythonAPI'`
    4. Install MultiScaleDeformableAttention package: `python src/trackformer/models/ops/setup.py build --build-base=src/trackformer/models/ops/ install`

3. Download and unpack datasets in the `data` directory:

    1. [MOT17](https://motchallenge.net/data/MOT17/):

        ```
        wget https://motchallenge.net/data/MOT17.zip
        unzip MOT17.zip
        python src/generate_coco_from_mot.py
        ```

    2. (Optional) [MOT20](https://motchallenge.net/data/MOT20/):

        ```
        wget https://motchallenge.net/data/MOT20.zip
        unzip MOT20.zip
        python src/generate_coco_from_mot.py --mot20
        ```

    3. (Optional) [MOTS20](https://motchallenge.net/data/MOTS/):

        ```
        wget https://motchallenge.net/data/MOTS.zip
        unzip MOTS.zip
        python src/generate_coco_from_mot.py --mots
        ```

    4. (Optional) [CrowdHuman](https://www.crowdhuman.org/download.html):

        1. Create a `CrowdHuman` and `CrowdHuman/annotations` directory.
        2. Download and extract the `train` and `val` datasets including their corresponding `*.odgt` annotation file into the `CrowdHuman` directory.
        3. Create a `CrowdHuman/train_val` directory and merge or symlink the `train` and `val` image folders.
        4. Run `python src/generate_coco_from_crowdhuman.py`
        5. The final folder structure should resemble this:
            ~~~
            |-- data
                |-- CrowdHuman
                |   |-- train
                |   |   |-- *.jpg
                |   |-- val
                |   |   |-- *.jpg
                |   |-- train_val
                |   |   |-- *.jpg
                |   |-- annotations
                |   |   |-- annotation_train.odgt
                |   |   |-- annotation_val.odgt
                |   |   |-- train_val.json
            ~~~

4. Download and unpack pretrained TrackFormer model files in the `models` directory:

    ```
    wget https://vision.in.tum.de/webshare/u/meinhard/trackformer_models_v1.zip
    unzip trackformer_models_v1.zip
    ```

5. (Optional) The evaluation of MOTS20 metrics requires two steps:
    1. Run TrackFormer with `src/track.py` and output prediction files
    2. Download the official MOTChallenge [devkit](https://github.com/dendorferpatrick/MOTChallengeEvalKit) and run the MOTS evaluation on the prediction files

In order to configure, log and reproduce our computational experiments, we structure our code with the [Sacred](http://sacred.readthedocs.io/en/latest/index.html) framework. For a detailed explanation of the Sacred interface please read its documentation.


================================================
FILE: docs/TRAIN.md
================================================
# Train TrackFormer

We provide the code as well as intermediate models of our entire training pipeline for multiple datasets. Monitoring of the training/evaluation progress is possible via command line as well as [Visdom](https://github.com/fossasia/visdom.git). For the latter, a Visdom server must be running at `vis_port` and `vis_server` (see `cfgs/train.yaml`). We set `vis_server=''` by default to deactivate Visdom logging. If these parameters are set, you can still deactivate Visdom logging by running a training with the `no_vis=True` flag.

<div align="center">
    <img src="../docs/visdom.gif" alt="Snakeboard demo" width="600"/>
</div>

The settings for each dataset are specified in the respective configuration files, e.g., `cfgs/train_crowdhuman.yaml`. The following train commands produced the pretrained model files mentioned in [docs/INSTALL.md](INSTALL.md).

## CrowdHuman pre-training

```
python src/train.py with \
    crowdhuman \
    deformable \
    multi_frame \
    tracking \
    output_dir=models/crowdhuman_deformable_multi_frame \
```

## MOT17

#### Private detections

```
python src/train.py with \
    mot17_crowdhuman \
    deformable \
    multi_frame \
    tracking \
    output_dir=models/mot17_crowdhuman_deformable_multi_frame \
```

#### Public detections

```
python src/train.py with \
    mot17 \
    deformable \
    multi_frame \
    tracking \
    output_dir=models/mot17_deformable_multi_frame \
```

## MOT20

#### Private detections

```
python src/train.py with \
    mot20_crowdhuman \
    deformable \
    multi_frame \
    tracking \
    output_dir=models/mot20_crowdhuman_deformable_multi_frame \
```

## MOTS20

For our MOTS20 test set submission, we finetune a MOT17 private detection model without deformable attention, i.e., vanilla DETR, which was pre-trained on the CrowdHuman dataset. The finetuning itself consists of two training steps: (i) the original DETR panoptic segmentation head on the COCO person segmentation data and (ii) the entire TrackFormer model (including segmentation head) on the MOTS20 training set. At this point, we only provide the final model files in [docs/INSTALL.md](INSTALL.md).

<!-- ```
python src/train.py with \
    tracking \
    coco_person_masks \
    output_dir=models/mot17_train_private_coco_person_masks_v2 \
```

```
python src/train.py with \
    tracking \
    mots20 \
    output_dir=models/mots20_train_masks \
``` -->

<!-- ### Ablation studies

Will be added after acceptance of the paper. -->

## Custom Dataset

TrackFormer can be trained on additional/new object detection or multi-object tracking datasets without changing our codebase. The `crowdhuman` or `mot` datasets merely require a [COCO style](https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch) annotation file and the following folder structure:

~~~
|-- data
    |-- custom_dataset
    |   |-- train
    |   |   |-- *.jpg
    |   |-- val
    |   |   |-- *.jpg
    |   |-- annotations
    |   |   |-- train.json
    |   |   |-- val.json
~~~

In the case of a multi-object tracking dataset, the original COCO annotation style must be extended with `seq_length`, `first_frame_image_id` and `track_id` fields. See the `src/generate_coco_from_mot.py` script for details. For example, the following command finetunes our `MOT17` private model for an additional 20 epochs on a custom dataset:

```
python src/train.py with \
    mot17 \
    deformable \
    multi_frame \
    tracking \
    resume=models/mot17_crowdhuman_deformable_trackformer/checkpoint_epoch_40.pth \
    output_dir=models/custom_dataset_deformable \
    mot_path_train=data/custom_dataset \
    mot_path_val=data/custom_dataset \
    train_split=train \
    val_split=val \
    epochs=20 \
```
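
For reference, a minimal tracking annotation file extending the COCO format could look like this (a hypothetical sketch with made-up values; `src/generate_coco_from_mot.py` defines the fields the loader actually expects):

```python
import json

annotations = {
    "images": [
        # per-frame entries carry their sequence metadata
        {"id": 0, "file_name": "000001.jpg", "height": 1080, "width": 1920,
         "seq_length": 600, "first_frame_image_id": 0},
    ],
    "annotations": [
        # track_id links detections of the same object across frames
        {"id": 0, "image_id": 0, "category_id": 1, "track_id": 0,
         "bbox": [100, 200, 50, 120], "area": 6000, "iscrowd": 0},
    ],
    "categories": [{"id": 1, "name": "person", "supercategory": "person"}],
}

payload = json.dumps(annotations, indent=2)  # written to annotations/train.json
```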

## Run with multiple GPUs

All reported results are obtained by training with a batch size of 2 on 7 GPUs, i.e., an effective batch size of 14. If you have fewer GPUs at your disposal, adjust the learning rates accordingly. To start the CrowdHuman pre-training with 7 GPUs execute:

```
python -m torch.distributed.launch --nproc_per_node=7 --use_env src/train.py with \
    crowdhuman \
    deformable \
    multi_frame \
    tracking \
    output_dir=models/crowdhuman_deformable_multi_frame \
```
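
If fewer GPUs are available, a common heuristic is the linear scaling rule (an assumption on our part; the text above only says to adjust the learning rates accordingly):

```python
def scaled_lr(base_lr, base_effective_batch, num_gpus, batch_size_per_gpu=2):
    """Linearly scale the learning rate with the effective batch size."""
    return base_lr * (num_gpus * batch_size_per_gpu) / base_effective_batch

# reported setup: 7 GPUs x batch size 2 = effective batch 14 at lr 2e-4
lr_7_gpus = scaled_lr(2e-4, 14, num_gpus=7)  # 2e-4, unchanged
lr_2_gpus = scaled_lr(2e-4, 14, num_gpus=2)  # ~5.7e-5
```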

## Run SLURM jobs with Submitit

Furthermore, we provide a script for starting Slurm jobs with [submitit](https://github.com/facebookincubator/submitit). This includes a convenient command line interface for Slurm options as well as preemption and resuming capabilities. The aforementioned CrowdHuman pre-training can be executed on 7 x 32 GB GPUs with the following command:

```
python src/run_with_submitit.py with \
    num_gpus=7 \
    vram=32GB \
    cluster=slurm \
    train.crowdhuman \
    train.deformable \
    train.trackformer \
    train.tracking \
    train.output_dir=models/crowdhuman_train_val_deformable \
```

================================================
FILE: logs/.gitignore
================================================
*
!visdom
!.gitignore


================================================
FILE: logs/visdom/.gitignore
================================================
*
!.gitignore

================================================
FILE: models/.gitignore
================================================
*
!.gitignore

================================================
FILE: requirements.txt
================================================
argon2-cffi==20.1.0
astroid==2.4.2
async-generator==1.10
attrs==19.3.0
backcall==0.2.0
bleach==3.2.3
certifi==2020.4.5.2
cffi==1.14.4
chardet==3.0.4
cloudpickle==1.6.0
colorama==0.4.3
cycler==0.10.0
Cython==0.29.20
decorator==4.4.2
defusedxml==0.6.0
docopt==0.6.2
entrypoints==0.3
filelock==3.0.12
flake8==3.8.3
flake8-import-order==0.18.1
future==0.18.2
gdown==3.12.2
gitdb==4.0.5
GitPython==3.1.3
idna==2.9
imageio==2.8.0
importlib-metadata==1.6.1
ipykernel==5.4.3
ipython==7.19.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
isort==5.6.4
jedi==0.18.0
Jinja2==2.11.2
jsonpatch==1.25
jsonpickle==1.4.1
jsonpointer==2.0
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==6.1.11
jupyter-console==6.2.0
jupyter-core==4.7.0
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kiwisolver==1.2.0
lap==0.4.0
lapsolver==1.1.0
lazy-object-proxy==1.4.3
MarkupSafe==1.1.1
matplotlib==3.2.1
mccabe==0.6.1
mistune==0.8.4
more-itertools==8.4.0
motmetrics==1.2.0
munch==2.5.0
nbclient==0.5.1
nbconvert==6.0.7
nbformat==5.1.2
nest-asyncio==1.5.1
networkx==2.4
ninja==1.10.0.post2
notebook==6.2.0
numpy==1.18.5
opencv-python==4.2.0.34
packaging==20.4
pandas==1.0.5
pandocfilters==1.4.3
parso==0.8.1
pexpect==4.8.0
pickleshare==0.7.5
Pillow==7.1.2
pluggy==0.13.1
prometheus-client==0.9.0
prompt-toolkit==3.0.14
ptyprocess==0.7.0
py==1.8.2
py-cpuinfo==6.0.0
pyaml==20.4.0
pycodestyle==2.6.0
pycparser==2.20
pyflakes==2.2.0
Pygments==2.7.4
pylint==2.6.0
pyparsing==2.4.7
pyrsistent==0.17.3
PySocks==1.7.1
pytest==5.4.3
pytest-benchmark==3.2.3
python-dateutil==2.8.1
pytz==2020.1
PyWavelets==1.1.1
PyYAML==5.3.1
pyzmq==19.0.1
qtconsole==5.0.2
QtPy==1.9.0
requests==2.23.0
sacred==0.8.1
scikit-image==0.17.2
scipy==1.4.1
seaborn==0.10.1
Send2Trash==1.5.0
six==1.15.0
smmap==3.0.4
submitit==1.1.5
terminado==0.9.2
testpath==0.4.4
tifffile==2020.6.3
toml==0.10.2
torchfile==0.1.0
tornado==6.1
tqdm==4.46.1
traitlets==5.0.5
typed-ast==1.4.1
typing-extensions==3.7.4.3
urllib3==1.25.9
visdom==0.1.8.9
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==0.57.0
widgetsnbextension==3.5.1
wrapt==1.12.1
xmltodict==0.12.0
zipp==3.1.0


================================================
FILE: setup.py
================================================
from setuptools import setup, find_packages

setup(name='trackformer',
      packages=['trackformer'],
      package_dir={'':'src'},
      version='0.0.1',
      install_requires=[],)


================================================
FILE: src/combine_frames.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Combine two sets of frames to one.
"""
import os
import os.path as osp

from PIL import Image

OUTPUT_DIR = 'models/mot17_masks_track_rcnn_and_v3_combined'

FRAME_DIR_1 = 'models/mot17_masks_track_rcnn/MOTS20-TEST'
FRAME_DIR_2 = 'models/mot17_masks_v3/MOTS20-ALL'


if __name__ == '__main__':
    seqs_1 = os.listdir(FRAME_DIR_1)
    seqs_2 = os.listdir(FRAME_DIR_2)

    if not osp.exists(OUTPUT_DIR):
        os.makedirs(OUTPUT_DIR)

    for seq in seqs_1:
        if seq in seqs_2:
            print(seq)
            seg_output_dir = osp.join(OUTPUT_DIR, seq)
            if not osp.exists(seg_output_dir):
                os.makedirs(seg_output_dir)

            frames = os.listdir(osp.join(FRAME_DIR_1, seq))

            for frame in frames:
                img_1 = Image.open(osp.join(FRAME_DIR_1, seq, frame))
                img_2 = Image.open(osp.join(FRAME_DIR_2, seq, frame))

                # both frame sets are assumed to share the same resolution
                width, height = img_1.size

                combined_frame = Image.new('RGB', (width, height * 2))
                combined_frame.paste(img_1, (0, 0))
                combined_frame.paste(img_2, (0, height))

                combined_frame.save(osp.join(seg_output_dir, frame))


================================================
FILE: src/compute_best_mean_epoch_from_splits.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import os
import json
import numpy as np


LOG_DIR = 'logs/visdom'

METRICS = ['MOTA', 'IDF1', 'BBOX AP IoU=0.50:0.95', 'MASK AP IoU=0.50:0.95']

# RUNS = [
#     'mot17_train_1_deformable_full_res',
#     'mot17_train_2_deformable_full_res',
#     'mot17_train_3_deformable_full_res',
#     'mot17_train_4_deformable_full_res',
#     'mot17_train_5_deformable_full_res',
#     'mot17_train_6_deformable_full_res',
#     'mot17_train_7_deformable_full_res',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking',
#     'mot17_train_2_no_pretrain_deformable_tracking',
#     'mot17_train_3_no_pretrain_deformable_tracking',
#     'mot17_train_4_no_pretrain_deformable_tracking',
#     'mot17_train_5_no_pretrain_deformable_tracking',
#     'mot17_train_6_no_pretrain_deformable_tracking',
#     'mot17_train_7_no_pretrain_deformable_tracking',
#     ]

# RUNS = [
#     'mot17_train_1_coco_pretrain_deformable_tracking_lr=0.00001',
#     'mot17_train_2_coco_pretrain_deformable_tracking_lr=0.00001',
#     'mot17_train_3_coco_pretrain_deformable_tracking_lr=0.00001',
#     'mot17_train_4_coco_pretrain_deformable_tracking_lr=0.00001',
#     'mot17_train_5_coco_pretrain_deformable_tracking_lr=0.00001',
#     'mot17_train_6_coco_pretrain_deformable_tracking_lr=0.00001',
#     'mot17_train_7_coco_pretrain_deformable_tracking_lr=0.00001',
#     ]

RUNS = [
    'mot17_train_1_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    'mot17_train_2_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    'mot17_train_3_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    'mot17_train_4_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    'mot17_train_5_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    'mot17_train_6_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    'mot17_train_7_crowdhuman_coco_pretrain_deformable_tracking_lr=0.00001',
    ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_eos_coef=0.2',
#     'mot17_train_2_no_pretrain_deformable_tracking_eos_coef=0.2',
#     'mot17_train_3_no_pretrain_deformable_tracking_eos_coef=0.2',
#     'mot17_train_4_no_pretrain_deformable_tracking_eos_coef=0.2',
#     'mot17_train_5_no_pretrain_deformable_tracking_eos_coef=0.2',
#     'mot17_train_6_no_pretrain_deformable_tracking_eos_coef=0.2',
#     'mot17_train_7_no_pretrain_deformable_tracking_eos_coef=0.2',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_lr_drop=50',
#     'mot17_train_2_no_pretrain_deformable_tracking_lr_drop=50',
#     'mot17_train_3_no_pretrain_deformable_tracking_lr_drop=50',
#     'mot17_train_4_no_pretrain_deformable_tracking_lr_drop=50',
#     'mot17_train_5_no_pretrain_deformable_tracking_lr_drop=50',
#     'mot17_train_6_no_pretrain_deformable_tracking_lr_drop=50',
#     'mot17_train_7_no_pretrain_deformable_tracking_lr_drop=50',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_save_model_interval=1',
#     'mot17_train_2_no_pretrain_deformable_tracking_save_model_interval=1',
#     'mot17_train_3_no_pretrain_deformable_tracking_save_model_interval=1',
#     'mot17_train_4_no_pretrain_deformable_tracking_save_model_interval=1',
#     'mot17_train_5_no_pretrain_deformable_tracking_save_model_interval=1',
#     'mot17_train_6_no_pretrain_deformable_tracking_save_model_interval=1',
#     'mot17_train_7_no_pretrain_deformable_tracking_save_model_interval=1',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_full_res',
#     'mot17_train_2_no_pretrain_deformable_full_res',
#     'mot17_train_3_no_pretrain_deformable_full_res',
#     'mot17_train_4_no_pretrain_deformable_full_res',
#     'mot17_train_5_no_pretrain_deformable_full_res',
#     'mot17_train_6_no_pretrain_deformable_full_res',
#     'mot17_train_7_no_pretrain_deformable_full_res',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_eos_weight=False',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     'mot17_train_2_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     'mot17_train_3_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     'mot17_train_4_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     'mot17_train_5_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     'mot17_train_6_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     'mot17_train_7_no_pretrain_deformable_tracking_track_query_false_positive_prob=0_0_track_prev_frame_range=0_track_query_false_negative_prob=0_0',
#     ]

# RUNS = [
#     'mot17_train_1_no_pretrain_deformable',
#     'mot17_train_2_no_pretrain_deformable',
#     'mot17_train_3_no_pretrain_deformable',
#     'mot17_train_4_no_pretrain_deformable',
#     'mot17_train_5_no_pretrain_deformable',
#     'mot17_train_6_no_pretrain_deformable',
#     'mot17_train_7_no_pretrain_deformable',
#     ]

#
# MOTS 4-fold split
#

# RUNS = [
#     'mots20_train_1_coco_tracking',
#     'mots20_train_2_coco_tracking',
#     'mots20_train_3_coco_tracking',
#     'mots20_train_4_coco_tracking',
#     ]

# RUNS = [
#     'mots20_train_1_coco_tracking_full_res_masks=False',
#     'mots20_train_2_coco_tracking_full_res_masks=False',
#     'mots20_train_3_coco_tracking_full_res_masks=False',
#     'mots20_train_4_coco_tracking_full_res_masks=False',
#     ]

# RUNS = [
#     'mots20_train_1_coco_full_res_pretrain_masks=False_lr_0_0001',
#     'mots20_train_2_coco_full_res_pretrain_masks=False_lr_0_0001',
#     'mots20_train_3_coco_full_res_pretrain_masks=False_lr_0_0001',
#     'mots20_train_4_coco_full_res_pretrain_masks=False_lr_0_0001',
#     ]

# RUNS = [
#     'mots20_train_1_coco_tracking_full_res_masks=False_pretrain',
#     'mots20_train_2_coco_tracking_full_res_masks=False_pretrain',
#     'mots20_train_3_coco_tracking_full_res_masks=False_pretrain',
#     'mots20_train_4_coco_tracking_full_res_masks=False_pretrain',
#     ]

# RUNS = [
#     'mot17det_train_1_mots_track_bbox_proposals_pretrain_train_1_mots_vis_save_model_interval_1',
#     'mot17det_train_2_mots_track_bbox_proposals_pretrain_train_3_mots_vis_save_model_interval_1',
#     'mot17det_train_3_mots_track_bbox_proposals_pretrain_train_4_mots_vis_save_model_interval_1',
#     'mot17det_train_4_mots_track_bbox_proposals_pretrain_train_6_mots_vis_save_model_interval_1',
# ]

if __name__ == '__main__':
    results = {}

    for r in RUNS:
        print(r)
        log_file = os.path.join(LOG_DIR, f"{r}.json")

        with open(log_file) as json_file:
            data = json.load(json_file)

            window = [
                window for window in data['jsons'].values()
                if window['title'] == 'VAL EVAL EPOCHS'][0]

            for m in METRICS:
                if m not in window['legend']:
                    continue
                elif m not in results:
                    results[m] = []

                idxs = window['legend'].index(m)

                values = window['content']['data'][idxs]['y']
                results[m].append(values)

        print(f'NUM EPOCHS: {len(values)}')

    min_length = min(len(l) for lists in results.values() for l in lists)

    for metric in results.keys():
        results[metric] = [l[:min_length] for l in results[metric]]

    mean_results = {
        metric: np.array(results[metric]).mean(axis=0)
        for metric in results.keys()}

    print("* METRIC INTERVAL = BEST EPOCHS")
    for metric in results.keys():
        best_interval = mean_results[metric].argmax()
        print(mean_results[metric])
        print(
            f'{metric}: {mean_results[metric].max():.2%} at {best_interval + 1}/{len(mean_results[metric])} '
            f'{[(mmetric, f"{mean_results[mmetric][best_interval]:.2%}") for mmetric in results.keys() if not mmetric == metric]}')


================================================
FILE: src/generate_coco_from_crowdhuman.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Generates COCO data and annotation structure from CrowdHuman data.
"""
import json
import os
import cv2

from generate_coco_from_mot import check_coco_from_mot

DATA_ROOT = 'data/CrowdHuman'
VIS_THRESHOLD = 0.0


def generate_coco_from_crowdhuman(split_name='train_val', split='train_val'):
    """
    Generate COCO data from CrowdHuman.
    """
    annotations = {}
    annotations['type'] = 'instances'
    annotations['images'] = []
    annotations['categories'] = [{"supercategory": "person",
                                  "name": "person",
                                  "id": 1}]
    annotations['annotations'] = []
    annotation_file = os.path.join(DATA_ROOT, f'annotations/{split_name}.json')

    # IMAGES
    imgs_list_dir = os.listdir(os.path.join(DATA_ROOT, split))
    for i, img in enumerate(sorted(imgs_list_dir)):
        im = cv2.imread(os.path.join(DATA_ROOT, split, img))
        h, w, _ = im.shape

        annotations['images'].append({
            "file_name": img,
            "height": h,
            "width": w,
            "id": i, })

    # GT
    annotation_id = 0
    img_file_name_to_id = {
        os.path.splitext(img_dict['file_name'])[0]: img_dict['id']
        for img_dict in annotations['images']}

    ignores = 0
    for anno_split in ['train', 'val']:
        if anno_split not in split_name:
            continue
        odgt_annos_file = os.path.join(DATA_ROOT, f'annotations/annotation_{anno_split}.odgt')
        with open(odgt_annos_file, 'r') as anno_file:
            datalist = anno_file.readlines()

        for data in datalist:
            json_data = json.loads(data)
            gtboxes = json_data['gtboxes']
            for gtbox in gtboxes:
                if gtbox['tag'] == 'person':
                    bbox = gtbox['fbox']
                    area = bbox[2] * bbox[3]

                    ignore = False
                    visibility = 1.0
                    # if 'occ' in gtbox['extra']:
                    #     visibility = 1.0 - gtbox['extra']['occ']
                    # if visibility <= VIS_THRESHOLD:
                    #     ignore = True

                    if 'ignore' in gtbox['extra']:
                        ignore = ignore or bool(gtbox['extra']['ignore'])

                    ignores += int(ignore)

                    annotation = {
                        "id": annotation_id,
                        "bbox": bbox,
                        "image_id": img_file_name_to_id[json_data['ID']],
                        "segmentation": [],
                        "ignore": int(ignore),
                        "visibility": visibility,
                        "area": area,
                        "iscrowd": 0,
                        "category_id": annotations['categories'][0]['id'],}

                    annotation_id += 1
                    annotations['annotations'].append(annotation)

    # max objs per image
    num_objs_per_image = {}
    for anno in annotations['annotations']:
        image_id = anno["image_id"]
        if image_id in num_objs_per_image:
            num_objs_per_image[image_id] += 1
        else:
            num_objs_per_image[image_id] = 1

    print(f'max objs per image: {max(num_objs_per_image.values())}')
    print(f'ignore augs: {ignores}/{len(annotations["annotations"])}')
    print(len(annotations['images']))

    # for img_id, num_objs in num_objs_per_image.items():
    #     if num_objs > 50 or num_objs < 2:
    #         annotations['images'] = [
    #             img for img in annotations['images']
    #             if img_id != img['id']]

    #         annotations['annotations'] = [
    #             anno for anno in annotations['annotations']
    #             if img_id != anno['image_id']]

    # print(len(annotations['images']))

    with open(annotation_file, 'w') as anno_file:
        json.dump(annotations, anno_file, indent=4)


if __name__ == '__main__':
    generate_coco_from_crowdhuman(split_name='train_val', split='train_val')
    # generate_coco_from_crowdhuman(split_name='train', split='train')

    # coco_dir = os.path.join('data/CrowdHuman', 'train_val')
    # annotation_file = os.path.join('data/CrowdHuman/annotations', 'train_val.json')
    # check_coco_from_mot(coco_dir, annotation_file, img_id=9012)
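The conversion above hinges on the CrowdHuman `.odgt` format: each line is a standalone JSON object with an image `ID` and a list of `gtboxes`. A minimal sketch of the parsing, with hypothetical example values (the real files contain one such object per image):

```python
import json

# A single .odgt line is a standalone JSON object (made-up example values).
odgt_line = json.dumps({
    "ID": "273271,example",
    "gtboxes": [
        {"tag": "person", "fbox": [10, 20, 50, 100], "extra": {}},
        {"tag": "mask", "fbox": [0, 0, 5, 5], "extra": {"ignore": 1}},
    ],
})

data = json.loads(odgt_line)
# Mirror the filter in generate_coco_from_crowdhuman: keep 'person' boxes;
# area is width * height of the full box (fbox is [x, y, w, h]).
boxes = [g["fbox"] for g in data["gtboxes"] if g["tag"] == "person"]
areas = [b[2] * b[3] for b in boxes]
print(boxes, areas)  # [[10, 20, 50, 100]] [5000]
```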


================================================
FILE: src/generate_coco_from_mot.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Generates COCO data and annotation structure from MOTChallenge data.
"""
import argparse
import configparser
import csv
import json
import os
import shutil

import numpy as np
import pycocotools.mask as rletools
import skimage.io as io
import torch
from matplotlib import pyplot as plt
from pycocotools.coco import COCO
from scipy.optimize import linear_sum_assignment
from torchvision.ops.boxes import box_iou

from trackformer.datasets.tracking.mots20_sequence import load_mots_gt

MOTS_ROOT = 'data/MOTS20'
VIS_THRESHOLD = 0.25

MOT_15_SEQS_INFO = {
    'ETH-Bahnhof': {'img_width': 640, 'img_height': 480, 'seq_length': 1000},
    'ETH-Sunnyday': {'img_width': 640, 'img_height': 480, 'seq_length': 354},
    'KITTI-13': {'img_width': 1242, 'img_height': 375, 'seq_length': 340},
    'KITTI-17': {'img_width': 1224, 'img_height': 370, 'seq_length': 145},
    'PETS09-S2L1': {'img_width': 768, 'img_height': 576, 'seq_length': 795},
    'TUD-Campus': {'img_width': 640, 'img_height': 480, 'seq_length': 71},
    'TUD-Stadtmitte': {'img_width': 640, 'img_height': 480, 'seq_length': 179},}


def generate_coco_from_mot(split_name='train', seqs_names=None,
                           root_split='train', mots=False, mots_vis=False,
                           frame_range=None, data_root='data/MOT17'):
    """
    Generates COCO data from MOT.
    """
    if frame_range is None:
        frame_range = {'start': 0.0, 'end': 1.0}

    if mots:
        data_root = MOTS_ROOT
    root_split_path = os.path.join(data_root, root_split)
    root_split_mots_path = os.path.join(MOTS_ROOT, root_split)
    coco_dir = os.path.join(data_root, split_name)

    if os.path.isdir(coco_dir):
        shutil.rmtree(coco_dir)

    os.mkdir(coco_dir)

    annotations = {}
    annotations['type'] = 'instances'
    annotations['images'] = []
    annotations['categories'] = [{"supercategory": "person",
                                  "name": "person",
                                  "id": 1}]
    annotations['annotations'] = []

    annotations_dir = os.path.join(data_root, 'annotations')
    if not os.path.isdir(annotations_dir):
        os.mkdir(annotations_dir)
    annotation_file = os.path.join(annotations_dir, f'{split_name}.json')

    # IMAGE FILES
    img_id = 0

    seqs = sorted(os.listdir(root_split_path))

    if seqs_names is not None:
        seqs = [s for s in seqs if s in seqs_names]
    annotations['sequences'] = seqs
    annotations['frame_range'] = frame_range
    print(split_name, seqs)

    for seq in seqs:
        # CONFIG FILE
        config = configparser.ConfigParser()
        config_file = os.path.join(root_split_path, seq, 'seqinfo.ini')

        if os.path.isfile(config_file):
            config.read(config_file)
            img_width = int(config['Sequence']['imWidth'])
            img_height = int(config['Sequence']['imHeight'])
            seq_length = int(config['Sequence']['seqLength'])
        else:
            img_width = MOT_15_SEQS_INFO[seq]['img_width']
            img_height = MOT_15_SEQS_INFO[seq]['img_height']
            seq_length = MOT_15_SEQS_INFO[seq]['seq_length']

        seg_list_dir = sorted(os.listdir(os.path.join(root_split_path, seq, 'img1')))
        start_frame = int(frame_range['start'] * seq_length)
        end_frame = int(frame_range['end'] * seq_length)
        # sort before slicing so the frame range covers consecutive frames
        seg_list_dir = seg_list_dir[start_frame: end_frame]

        print(f"{seq}: {len(seg_list_dir)}/{seq_length}")
        seq_length = len(seg_list_dir)

        for i, img in enumerate(sorted(seg_list_dir)):

            if i == 0:
                first_frame_image_id = img_id

            annotations['images'].append({"file_name": f"{seq}_{img}",
                                          "height": img_height,
                                          "width": img_width,
                                          "id": img_id,
                                          "frame_id": i,
                                          "seq_length": seq_length,
                                          "first_frame_image_id": first_frame_image_id})

            img_id += 1

            os.symlink(os.path.join(os.getcwd(), root_split_path, seq, 'img1', img),
                       os.path.join(coco_dir, f"{seq}_{img}"))

    # GT
    annotation_id = 0
    img_file_name_to_id = {
        img_dict['file_name']: img_dict['id']
        for img_dict in annotations['images']}
    for seq in seqs:
        # GT FILE
        gt_file_path = os.path.join(root_split_path, seq, 'gt', 'gt.txt')
        if mots:
            gt_file_path = os.path.join(
                root_split_mots_path,
                seq.replace('MOT17', 'MOTS20'),
                'gt',
                'gt.txt')
        if not os.path.isfile(gt_file_path):
            continue

        seq_annotations = []
        if mots:
            mask_objects_per_frame = load_mots_gt(gt_file_path)
            for frame_id, mask_objects in mask_objects_per_frame.items():
                for mask_object in mask_objects:
                    # class_id = 1 is car
                    # class_id = 2 is person
                    # class_id = 10 IGNORE
                    if mask_object.class_id == 1:
                        continue

                    bbox = rletools.toBbox(mask_object.mask)
                    bbox = [int(c) for c in bbox]
                    area = bbox[2] * bbox[3]
                    image_id = img_file_name_to_id.get(f"{seq}_{frame_id:06d}.jpg", None)
                    if image_id is None:
                        continue

                    segmentation = {
                        'size': mask_object.mask['size'],
                        'counts': mask_object.mask['counts'].decode(encoding='UTF-8')}

                    annotation = {
                        "id": annotation_id,
                        "bbox": bbox,
                        "image_id": image_id,
                        "segmentation": segmentation,
                        "ignore": mask_object.class_id == 10,
                        "visibility": 1.0,
                        "area": area,
                        "iscrowd": 0,
                        "seq": seq,
                        "category_id": annotations['categories'][0]['id'],
                        "track_id": mask_object.track_id}

                    seq_annotations.append(annotation)
                    annotation_id += 1

            annotations['annotations'].extend(seq_annotations)
        else:

            seq_annotations_per_frame = {}
            with open(gt_file_path, "r") as gt_file:
                reader = csv.reader(gt_file, delimiter=',')  # this branch only handles non-MOTS GT files

                for row in reader:
                    if int(row[6]) == 1 and (seq in MOT_15_SEQS_INFO or int(row[7]) == 1):
                        bbox = [float(row[2]), float(row[3]), float(row[4]), float(row[5])]
                        bbox = [int(c) for c in bbox]

                        area = bbox[2] * bbox[3]
                        visibility = float(row[8])
                        frame_id = int(row[0])
                        image_id = img_file_name_to_id.get(f"{seq}_{frame_id:06d}.jpg", None)
                        if image_id is None:
                            continue
                        track_id = int(row[1])

                        annotation = {
                            "id": annotation_id,
                            "bbox": bbox,
                            "image_id": image_id,
                            "segmentation": [],
                            "ignore": 0 if visibility > VIS_THRESHOLD else 1,
                            "visibility": visibility,
                            "area": area,
                            "iscrowd": 0,
                            "seq": seq,
                            "category_id": annotations['categories'][0]['id'],
                            "track_id": track_id}

                        seq_annotations.append(annotation)
                        if frame_id not in seq_annotations_per_frame:
                            seq_annotations_per_frame[frame_id] = []
                        seq_annotations_per_frame[frame_id].append(annotation)

                        annotation_id += 1

            annotations['annotations'].extend(seq_annotations)

            # change ignore flags based on matching MOTS masks
            if mots_vis:
                gt_file_mots = os.path.join(
                    root_split_mots_path,
                    seq.replace('MOT17', 'MOTS20'),
                    'gt',
                    'gt.txt')
                if os.path.isfile(gt_file_mots):
                    mask_objects_per_frame = load_mots_gt(gt_file_mots)

                    for frame_id, frame_annotations in seq_annotations_per_frame.items():
                        mask_objects = mask_objects_per_frame[frame_id]
                        mask_object_bboxes = [rletools.toBbox(obj.mask) for obj in mask_objects]
                        mask_object_bboxes = torch.tensor(mask_object_bboxes).float()

                        frame_boxes = [a['bbox'] for a in frame_annotations]
                        frame_boxes = torch.tensor(frame_boxes).float()

                        # x,y,w,h --> x,y,x,y
                        frame_boxes[:, 2:] += frame_boxes[:, :2]
                        mask_object_bboxes[:, 2:] += mask_object_bboxes[:, :2]

                        mask_iou = box_iou(mask_object_bboxes, frame_boxes)

                        mask_indices, frame_indices = linear_sum_assignment(-mask_iou)
                        for m_i, f_i in zip(mask_indices, frame_indices):
                            if mask_iou[m_i, f_i] < 0.5:
                                continue

                            # un-ignore zero-visibility boxes that still
                            # match a MOTS mask
                            if not frame_annotations[f_i]['visibility']:
                                frame_annotations[f_i]['ignore'] = 0

    # max objs per image
    num_objs_per_image = {}
    for anno in annotations['annotations']:
        image_id = anno["image_id"]

        if image_id in num_objs_per_image:
            num_objs_per_image[image_id] += 1
        else:
            num_objs_per_image[image_id] = 1

    print(f'max objs per image: {max(list(num_objs_per_image.values()))}')

    with open(annotation_file, 'w') as anno_file:
        json.dump(annotations, anno_file, indent=4)


def check_coco_from_mot(coco_dir='data/MOT17/mot17_train_coco',
                        annotation_file='data/MOT17/annotations/mot17_train_coco.json',
                        img_id=None):
    """
    Visualize generated COCO data. Only used for debugging.
    """
    # coco_dir = os.path.join(data_root, split)
    # annotation_file = os.path.join(coco_dir, 'annotations.json')

    coco = COCO(annotation_file)
    cat_ids = coco.getCatIds(catNms=['person'])
    if img_id is None:
        img_ids = coco.getImgIds(catIds=cat_ids)
        index = np.random.randint(0, len(img_ids))
        img_id = img_ids[index]
    img = coco.loadImgs(img_id)[0]

    i = io.imread(os.path.join(coco_dir, img['file_name']))

    plt.imshow(i)
    plt.axis('off')
    ann_ids = coco.getAnnIds(imgIds=img['id'], catIds=cat_ids, iscrowd=None)
    anns = coco.loadAnns(ann_ids)
    coco.showAnns(anns, draw_bbox=True)
    plt.savefig('annotations.png')


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Generate COCO from MOT.')
    parser.add_argument('--mots20', action='store_true')
    parser.add_argument('--mot20', action='store_true')
    args = parser.parse_args()

    mot15_seqs_names = list(MOT_15_SEQS_INFO.keys())

    if args.mots20:
        #
        # MOTS20
        #

        # TRAIN SET
        generate_coco_from_mot(
            'mots20_train_coco',
            seqs_names=['MOTS20-02', 'MOTS20-05', 'MOTS20-09', 'MOTS20-11'],
            mots=True)

        # TRAIN SPLITS
        for i in range(4):
            train_seqs = ['MOTS20-02', 'MOTS20-05', 'MOTS20-09', 'MOTS20-11']
            val_seqs = [train_seqs.pop(i)]

            generate_coco_from_mot(
                f'mots20_train_{i + 1}_coco',
                seqs_names=train_seqs, mots=True)
            generate_coco_from_mot(
                f'mots20_val_{i + 1}_coco',
                seqs_names=val_seqs, mots=True)

    elif args.mot20:
        data_root = 'data/MOT20'
        train_seqs = ['MOT20-01', 'MOT20-02', 'MOT20-03', 'MOT20-05',]
        # TRAIN SET
        generate_coco_from_mot(
            'mot20_train_coco',
            seqs_names=train_seqs,
            data_root=data_root)

        for i in range(0, len(train_seqs)):
            train_seqs_copy = train_seqs.copy()
            val_seqs = [train_seqs_copy.pop(i)]

            generate_coco_from_mot(
                f'mot20_train_{i + 1}_coco',
                seqs_names=train_seqs_copy,
                data_root=data_root)
            generate_coco_from_mot(
                f'mot20_val_{i + 1}_coco',
                seqs_names=val_seqs,
                data_root=data_root)

        # CROSS VAL FRAME SPLIT
        generate_coco_from_mot(
            'mot20_train_cross_val_frame_0_0_to_0_5_coco',
            seqs_names=train_seqs,
            frame_range={'start': 0, 'end': 0.5},
            data_root=data_root)
        generate_coco_from_mot(
            'mot20_train_cross_val_frame_0_5_to_1_0_coco',
            seqs_names=train_seqs,
            frame_range={'start': 0.5, 'end': 1.0},
            data_root=data_root)

    else:
        #
        # MOT17
        #

        # CROSS VAL SPLIT 1
        generate_coco_from_mot(
            'mot17_train_cross_val_1_coco',
            seqs_names=['MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-11-FRCNN'])
        generate_coco_from_mot(
            'mot17_val_cross_val_1_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-10-FRCNN', 'MOT17-13-FRCNN'])

        # CROSS VAL SPLIT 2
        generate_coco_from_mot(
            'mot17_train_cross_val_2_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-13-FRCNN'])
        generate_coco_from_mot(
            'mot17_val_cross_val_2_coco',
            seqs_names=['MOT17-04-FRCNN', 'MOT17-11-FRCNN'])

        # CROSS VAL SPLIT 3
        generate_coco_from_mot(
            'mot17_train_cross_val_3_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'])
        generate_coco_from_mot(
            'mot17_val_cross_val_3_coco',
            seqs_names=['MOT17-05-FRCNN', 'MOT17-09-FRCNN'])

        # CROSS VAL FRAME SPLIT
        generate_coco_from_mot(
            'mot17_train_cross_val_frame_0_0_to_0_25_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],
            frame_range={'start': 0, 'end': 0.25})
        generate_coco_from_mot(
            'mot17_train_cross_val_frame_0_0_to_0_5_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],
            frame_range={'start': 0, 'end': 0.5})
        generate_coco_from_mot(
            'mot17_train_cross_val_frame_0_5_to_1_0_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],
            frame_range={'start': 0.5, 'end': 1.0})

        generate_coco_from_mot(
            'mot17_train_cross_val_frame_0_75_to_1_0_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN', 'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'],
            frame_range={'start': 0.75, 'end': 1.0})

        # TRAIN SET
        generate_coco_from_mot(
            'mot17_train_coco',
            seqs_names=['MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN',
                        'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN'])

        for i in range(0, 7):
            train_seqs = [
                'MOT17-02-FRCNN', 'MOT17-04-FRCNN', 'MOT17-05-FRCNN', 'MOT17-09-FRCNN',
                'MOT17-10-FRCNN', 'MOT17-11-FRCNN', 'MOT17-13-FRCNN']
            val_seqs = [train_seqs.pop(i)]

            generate_coco_from_mot(
                f'mot17_train_{i + 1}_coco',
                seqs_names=train_seqs)
            generate_coco_from_mot(
                f'mot17_val_{i + 1}_coco',
                seqs_names=val_seqs)


================================================
FILE: src/parse_mot_results_to_tex.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Parse MOT results and generate a LaTeX table.
"""

MOTS = False
MOT20 = False
# F_CONTENT = """
# 	MOTA	IDF1	MOTP	MT	ML	FP	FN	Recall	Precision	FAF	IDSW	Frag
#     MOT17-01-DPM	41.6	44.2	77.1	5	8	496	3252	49.6	86.6	1.1	22	58
#     MOT17-01-FRCNN	41.0	42.1	77.1	6	9	571	3207	50.3	85.0	1.3	25	61
#     MOT17-01-SDP	41.8	44.3	76.8	7	8	612	3112	51.8	84.5	1.4	27	65
#     MOT17-03-DPM	79.3	71.6	79.1	94	8	1142	20297	80.6	98.7	0.8	191	525
#     MOT17-03-FRCNN	79.6	72.7	79.1	93	7	1234	19945	80.9	98.6	0.8	180	508
#     MOT17-03-SDP	80.0	72.0	79.0	93	8	1223	19530	81.3	98.6	0.8	181	526
#     MOT17-06-DPM	54.8	42.0	79.5	54	63	314	4839	58.9	95.7	0.3	175	244
#     MOT17-06-FRCNN	55.6	42.9	79.3	57	59	363	4676	60.3	95.1	0.3	190	264
#     MOT17-06-SDP	55.5	43.8	79.3	56	61	354	4712	60.0	95.2	0.3	181	262
#     MOT17-07-DPM	44.8	42.0	76.6	11	16	1322	7851	53.5	87.2	2.6	147	275
#     MOT17-07-FRCNN	45.5	41.5	76.6	13	15	1263	7785	53.9	87.8	2.5	156	289
#     MOT17-07-SDP	45.2	42.4	76.6	13	15	1332	7775	54.0	87.3	2.7	147	279
#     MOT17-08-DPM	26.5	32.2	83.0	11	37	378	15066	28.7	94.1	0.6	88	146
#     MOT17-08-FRCNN	26.5	31.9	83.1	11	36	332	15113	28.5	94.8	0.5	89	141
#     MOT17-08-SDP	26.6	32.3	83.1	11	36	350	15067	28.7	94.5	0.6	91	147
#     MOT17-12-DPM	46.1	53.1	82.7	16	45	207	4434	48.8	95.3	0.2	30	50
#     MOT17-12-FRCNN	46.1	52.6	82.6	15	45	197	4443	48.7	95.5	0.2	30	48
#     MOT17-12-SDP	46.0	53.0	82.6	16	45	221	4426	48.9	95.0	0.2	30	52
#     MOT17-14-DPM	31.6	36.6	74.8	13	78	636	11812	36.1	91.3	0.8	196	331
#     MOT17-14-FRCNN	31.6	37.6	74.6	13	77	780	11653	37.0	89.8	1.0	202	350
#     MOT17-14-SDP	31.7	37.1	74.7	13	76	749	11677	36.8	90.1	1.0	205	344
#     OVERALL 61.5	59.6	78.9	621 	752	14076	200672	64.4	96.3	0.8	2583	4965
#     """

F_CONTENT = """
	MOTA	MOTP	IDF1	IDP	IDR	TP	FP	FN	Rcll	Prcn	MTR	PTR	MLR	MT	PT	ML	IDSW	FAR	FM
    MOT17-01-DPM	49.92	79.58	42.97	58.18	34.06	3518	258	2932	54.54	93.17	20.83	45.83	33.33	5	11	8	40	0.57	50
    MOT17-01-FRCNN	50.87	79.26	42.33	55.77	34.11	3637	308	2813	56.39	92.19	33.33	41.67	25.00	8	10	6	48	0.68	57
    MOT17-01-SDP	53.66	78.16	45.33	54.31	38.90	4064	556	2386	63.01	87.97	41.67	37.50	20.83	10	9	5	47	1.24	72
    MOT17-03-DPM	74.05	79.41	66.45	76.34	58.83	79279	1389	25396	75.74	98.28	57.43	30.41	12.16	85	45	18	374	0.93	420
    MOT17-03-FRCNN	75.34	79.45	66.98	76.21	59.75	80635	1434	24040	77.03	98.25	56.76	32.43	10.81	84	48	16	335	0.96	409
    MOT17-03-SDP	79.64	79.04	65.84	72.00	60.65	86043	2134	18632	82.20	97.58	64.19	27.03	8.78	95	40	13	545	1.42	522
    MOT17-06-DPM	53.62	82.55	51.83	64.47	43.33	7209	711	4575	61.18	91.02	28.38	37.84	33.78	63	84	75	180	0.60	170
    MOT17-06-FRCNN	57.21	81.73	54.75	63.67	48.02	7928	960	3856	67.28	89.20	32.88	45.50	21.62	73	101	48	226	0.80	223
    MOT17-06-SDP	56.43	81.93	54.00	62.70	47.42	7895	1017	3889	67.00	88.59	36.94	37.39	25.68	82	83	57	228	0.85	222
    MOT17-07-DPM	52.59	80.54	48.08	66.84	37.54	9230	258	7663	54.64	97.28	20.00	53.33	26.67	12	32	16	88	0.52	148
    MOT17-07-FRCNN	52.39	80.11	47.88	64.56	38.05	9456	499	7437	55.98	94.99	20.00	61.67	18.33	12	37	11	106	1.00	174
    MOT17-07-SDP	54.56	79.84	47.81	62.29	38.79	9928	590	6965	58.77	94.39	26.67	55.00	18.33	16	33	11	121	1.18	199
    MOT17-08-DPM	32.52	83.93	31.85	60.34	21.63	7286	288	13838	34.49	96.20	13.16	44.74	42.11	10	34	32	128	0.46	154
    MOT17-08-FRCNN	31.11	84.47	31.68	62.05	21.27	6958	285	14166	32.94	96.07	13.16	39.47	47.37	10	30	36	102	0.46	120
    MOT17-08-SDP	34.96	83.31	33.05	58.02	23.11	7972	443	13152	37.74	94.74	15.79	48.68	35.53	12	37	27	144	0.71	175
    MOT17-12-DPM	51.26	83.01	57.74	72.70	47.88	5102	606	3565	58.87	89.38	23.08	42.86	34.07	21	39	31	53	0.67	86
    MOT17-12-FRCNN	47.71	83.16	56.73	72.39	46.64	4882	702	3785	56.33	87.43	20.88	43.96	35.16	19	40	32	45	0.78	72
    MOT17-12-SDP	48.88	82.87	57.46	70.30	48.59	5140	850	3527	59.31	85.81	24.18	45.05	30.77	22	41	28	54	0.94	89
    MOT17-14-DPM	38.07	77.47	42.03	66.15	30.80	7978	627	10505	43.16	92.71	9.15	52.44	38.41	15	86	63	314	0.84	296
    MOT17-14-FRCNN	37.78	76.70	41.78	59.55	32.18	8688	1300	9795	47.01	86.98	10.37	55.49	34.15	17	91	56	406	1.73	382
    MOT17-14-SDP	40.40	76.40	42.38	57.96	33.40	9277	1376	9206	50.19	87.08	10.37	59.76	29.88	17	98	49	434	1.83	437
    OVERALL\t62.30\t79.77\t57.58\t70.58\t48.62\t372105\t16591	192123	65.95	95.73	29.21	43.69	27.09	688	1029	638	4018	0.93	4477
    """


# MOTS = True
# F_CONTENT = """
#     sMOTSA	MOTSA	MOTSP	IDF1	MT	ML	MTR	PTR	MLR	GT	TP	FP	FN	Rcll	Prcn	FM	FMR	IDSW	IDSWR
#     MOTS20-01	59.79	79.56	77.60	68.00	10	0	83.33	16.67	0.00	12	2742	255	364	88.28	91.49	37	41.91	16	18.1
#     MOTS20-06	63.91	78.72	82.85	65.14	115	22	60.53	27.89	11.58	190	8479	595	1335	86.40	93.44	218	252.32	158	182.9
#     MOTS20-07	43.17	58.52	76.59	53.60	15	17	25.86	44.83	29.31	58	8445	834	4433	65.58	91.01	177	269.91	75	114.4
#     MOTS20-12	62.04	74.64	84.93	76.83	41	9	60.29	26.47	13.24	68	5408	549	1063	83.57	90.78	76	90.94	29	34.7
#     OVERALL	54.86	69.92	80.62	63.58	181	48	55.18	30.18	14.63	328	25074	2233	7195	77.70	91.82	508	653.77	278	357.8
#     """


MOT20 = True
F_CONTENT = """
	MOTA	MOTP	IDF1	IDP	IDR	HOTA	DetA	AssA	DetRe	DetPr	AssRe	AssPr	LocA	TP	FP	FN	Rcll	Prcn	IDSW\tMT\tML
    MOT20-04	82.72	82.57	75.59	79.81	71.79	63.21	68.29	58.64	73.11	81.27	63.43	80.18	84.53	236919	9639	37165	86.44	96.09	566\t490\t28
    MOT20-06	55.88	79.00	53.51	68.11	44.07	43.85	45.80	42.23	49.13	75.94	45.95	74.07	81.72	80317	5582	52440	60.50	93.50	545\t96\t72
    MOT20-07	56.21	85.22	59.05	78.90	47.18	49.19	48.45	50.21	50.63	84.68	53.31	83.48	86.86	19245	547	13856	58.14	97.24	92\t41\t20
    MOT20-08	46.03	77.71	48.34	65.65	38.26	38.89	38.46	39.70	41.87	71.85	43.36	71.76	81.08	40572	4580	36912	52.36	89.86	329\t39\t61
    OVERALL\t68.64	81.42	65.70	75.63	58.08	54.67	56.68	52.97	60.84	79.22	57.39	78.50	83.69	377053	20348	140373	72.87	94.88	1532\t666\t181
"""


if __name__ == '__main__':
    # remove empty lines at the start and end of F_CONTENT
    F_CONTENT = F_CONTENT.strip()
    F_CONTENT = F_CONTENT.splitlines()

    start_ixs = range(1, len(F_CONTENT) - 1, 3)
    if MOTS or MOT20:
        start_ixs = range(1, len(F_CONTENT) - 1)

    metrics_res = {}

    for i in range(3):  # one row per detector set: DPM, FRCNN, SDP
        for start in start_ixs:
            f_list = F_CONTENT[start + i].strip().split('\t')
            metrics_res[f_list[0]] = f_list[1:]

        if MOTS or MOT20:
            break

    metrics_names = F_CONTENT[0].replace('\n', '').split()

    print(metrics_names)

    metrics_res['ALL'] = F_CONTENT[-1].strip().split('\t')[1:]

    for full_seq_name, data in metrics_res.items():
        seq_name = '-'.join(full_seq_name.split('-')[:2])
        detection_name = full_seq_name.split('-')[-1]

        if MOTS:
            print(f"{seq_name} & "
                f"{float(data[metrics_names.index('sMOTSA')]):.1f} & "
                f"{float(data[metrics_names.index('IDF1')]):.1f} & "
                f"{float(data[metrics_names.index('MOTSA')]):.1f} & "
                f"{data[metrics_names.index('FP')]} & "
                f"{data[metrics_names.index('FN')]} & "
                f"{data[metrics_names.index('IDSW')]} \\\\")
        else:
            print(f"{seq_name} & {detection_name} & "
                f"{float(data[metrics_names.index('MOTA')]):.1f} & "
                f"{float(data[metrics_names.index('IDF1')]):.1f} & "
                f"{data[metrics_names.index('MT')]} & "
                f"{data[metrics_names.index('ML')]} & "
                f"{data[metrics_names.index('FP')]} & "
                f"{data[metrics_names.index('FN')]} & "
                f"{data[metrics_names.index('IDSW')]} \\\\")


================================================
FILE: src/run_with_submitit.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
A script to run multinode training with submitit.
"""
import os
import sys
import uuid
from pathlib import Path
from argparse import Namespace

import sacred
import submitit

import train
from trackformer.util.misc import nested_dict_to_namespace

WORK_DIR = str(Path(__file__).parent.absolute())


ex = sacred.Experiment('submit', ingredients=[train.ex])
ex.add_config('cfgs/submit.yaml')


def get_shared_folder() -> Path:
    user = os.getenv("USER")
    if Path("/storage/slurm").is_dir():
        path = Path(f"/storage/slurm/{user}/runs")
        path.mkdir(exist_ok=True)
        return path
    raise RuntimeError("No shared folder available")


def get_init_file() -> Path:
    # The init file must not exist, but its parent dir must exist.
    os.makedirs(str(get_shared_folder()), exist_ok=True)
    init_file = get_shared_folder() / f"{uuid.uuid4().hex}_init"
    if init_file.exists():
        os.remove(str(init_file))
    return init_file


class Trainer:
    def __init__(self, args: Namespace) -> None:
        self.args = args

    def __call__(self) -> None:
        sys.path.append(WORK_DIR)

        import train
        self._setup_gpu_args()
        train.train(self.args)

    def checkpoint(self) -> submitit.helpers.DelayedSubmission:
        import os

        import submitit

        self.args.dist_url = get_init_file().as_uri()
        checkpoint_file = os.path.join(self.args.output_dir, "checkpoint.pth")
        if os.path.exists(checkpoint_file):
            self.args.resume = checkpoint_file
            self.args.resume_optim = True
            self.args.resume_vis = True
            self.args.load_mask_head_from_model = None
        print("Requeuing ", self.args)
        empty_trainer = type(self)(self.args)
        return submitit.helpers.DelayedSubmission(empty_trainer)

    def _setup_gpu_args(self) -> None:
        from pathlib import Path

        import submitit

        job_env = submitit.JobEnvironment()
        self.args.output_dir = Path(str(self.args.output_dir).replace("%j", str(job_env.job_id)))
        print(self.args.output_dir)
        self.args.gpu = job_env.local_rank
        self.args.rank = job_env.global_rank
        self.args.world_size = job_env.num_tasks
        print(f"Process group: {job_env.num_tasks} tasks, rank: {job_env.global_rank}")


def main(args: Namespace):
    # Note that the folder will depend on the job_id, to easily track experiments
    if args.job_dir == "":
        args.job_dir = get_shared_folder() / "%j"

    executor = submitit.AutoExecutor(
        folder=args.job_dir, cluster=args.cluster, slurm_max_num_timeout=30)

    # cluster setup is defined by environment variables
    num_gpus_per_node = args.num_gpus
    nodes = args.nodes
    timeout_min = args.timeout

    if args.slurm_gres:
        slurm_gres = args.slurm_gres
    else:
        slurm_gres = f'gpu:{num_gpus_per_node},VRAM:{args.vram}'
        # slurm_gres = f'gpu:rtx_8000:{num_gpus_per_node}'

    executor.update_parameters(
        mem_gb=args.mem_per_gpu * num_gpus_per_node,
        gpus_per_node=num_gpus_per_node,
        tasks_per_node=num_gpus_per_node,  # one task per GPU
        cpus_per_task=args.cpus_per_task,
        nodes=nodes,
        timeout_min=timeout_min,  # max is 60 * 72,
        slurm_partition=args.slurm_partition,
        slurm_constraint=args.slurm_constraint,
        slurm_comment=args.slurm_comment,
        slurm_exclude=args.slurm_exclude,
        slurm_gres=slurm_gres
    )

    executor.update_parameters(name="fair_track")

    args.train.dist_url = get_init_file().as_uri()
    # args.output_dir = args.job_dir

    trainer = Trainer(args.train)
    job = executor.submit(trainer)

    print("Submitted job_id:", job.job_id)

    if args.cluster == 'debug':
        job.wait()


@ex.main
def load_config(_config, _run):
    """ We use sacred only for config loading from YAML files. """
    sacred.commands.print_config(_run)


if __name__ == '__main__':
    # TODO: hierarchical Namespacing for nested dict
    config = ex.run_commandline().config
    args = nested_dict_to_namespace(config)
    # args.train = Namespace(**config['train'])
    main(args)
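The script hands the Sacred config to `nested_dict_to_namespace` so nested YAML keys become attribute access (e.g. `args.train.dist_url` above). A hedged sketch of what such a helper does; `to_namespace` below is an illustrative stand-in, not the trackformer implementation:

```python
from argparse import Namespace

# Illustrative recursive conversion: nested dicts become nested Namespaces.
def to_namespace(d):
    return Namespace(**{
        k: to_namespace(v) if isinstance(v, dict) else v
        for k, v in d.items()})

# Hypothetical config fragment in the shape of cfgs/submit.yaml.
args = to_namespace({'train': {'lr': 0.0001, 'output_dir': 'runs/%j'}})
print(args.train.lr)  # 0.0001
```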


================================================
FILE: src/track.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import os
import sys
import time
from os import path as osp

import motmetrics as mm
import numpy as np
import sacred
import torch
import tqdm
import yaml
from torch.utils.data import DataLoader

from trackformer.datasets.tracking import TrackDatasetFactory
from trackformer.models import build_model
from trackformer.models.tracker import Tracker
from trackformer.util.misc import nested_dict_to_namespace
from trackformer.util.track_utils import (evaluate_mot_accums, get_mot_accum,
                                          interpolate_tracks, plot_sequence)

mm.lap.default_solver = 'lap'

ex = sacred.Experiment('track')
ex.add_config('cfgs/track.yaml')
ex.add_named_config('reid', 'cfgs/track_reid.yaml')


@ex.automain
def main(seed, dataset_name, obj_detect_checkpoint_file, tracker_cfg,
         write_images, output_dir, interpolate, verbose, load_results_dir,
         data_root_dir, generate_attention_maps, frame_range,
         _config, _log, _run, obj_detector_model=None):
    if write_images:
        assert output_dir is not None

    # obj_detector_model is only provided when running as evaluation during
    # training. In that case, verbose outputs are omitted.
    if obj_detector_model is None:
        sacred.commands.print_config(_run)

    # set all seeds
    if seed is not None:
        torch.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        np.random.seed(seed)
        torch.backends.cudnn.deterministic = True

    if output_dir is not None:
        if not osp.exists(output_dir):
            os.makedirs(output_dir)

        yaml.dump(
            _config,
            open(osp.join(output_dir, 'track.yaml'), 'w'),
            default_flow_style=False)

    ##########################
    # Initialize the modules #
    ##########################

    # object detection
    if obj_detector_model is None:
        obj_detect_config_path = os.path.join(
            os.path.dirname(obj_detect_checkpoint_file),
            'config.yaml')
        obj_detect_args = nested_dict_to_namespace(yaml.unsafe_load(open(obj_detect_config_path)))
        img_transform = obj_detect_args.img_transform
        obj_detector, _, obj_detector_post = build_model(obj_detect_args)

        obj_detect_checkpoint = torch.load(
            obj_detect_checkpoint_file, map_location=lambda storage, loc: storage)

        obj_detect_state_dict = obj_detect_checkpoint['model']
        # obj_detect_state_dict = {
        #     k: obj_detect_state_dict[k] if k in obj_detect_state_dict
        #     else v
        #     for k, v in obj_detector.state_dict().items()}

        obj_detect_state_dict = {
            k.replace('detr.', ''): v
            for k, v in obj_detect_state_dict.items()
            if 'track_encoding' not in k}

        obj_detector.load_state_dict(obj_detect_state_dict)
        if 'epoch' in obj_detect_checkpoint:
            _log.info(f"INIT object detector [EPOCH: {obj_detect_checkpoint['epoch']}]")

        obj_detector.cuda()
    else:
        obj_detector = obj_detector_model['model']
        obj_detector_post = obj_detector_model['post']
        img_transform = obj_detector_model['img_transform']

    if hasattr(obj_detector, 'tracking'):
        obj_detector.tracking()

    track_logger = None
    if verbose:
        track_logger = _log.info
    tracker = Tracker(
        obj_detector, obj_detector_post, tracker_cfg,
        generate_attention_maps, track_logger, verbose)

    time_total = 0
    num_frames = 0
    mot_accums = []
    dataset = TrackDatasetFactory(
        dataset_name, root_dir=data_root_dir, img_transform=img_transform)

    for seq in dataset:
        tracker.reset()

        _log.info("------------------")
        _log.info(f"TRACK SEQ: {seq}")

        start_frame = int(frame_range['start'] * len(seq))
        end_frame = int(frame_range['end'] * len(seq))

        seq_loader = DataLoader(
            torch.utils.data.Subset(seq, range(start_frame, end_frame)))

        num_frames += len(seq_loader)

        results = seq.load_results(load_results_dir)

        if not results:
            start = time.time()

            for frame_id, frame_data in enumerate(tqdm.tqdm(seq_loader, file=sys.stdout)):
                with torch.no_grad():
                    tracker.step(frame_data)

            results = tracker.get_results()

            time_total += time.time() - start

            _log.info(f"NUM TRACKS: {len(results)} ReIDs: {tracker.num_reids}")
            _log.info(f"RUNTIME: {time.time() - start :.2f} s")

            if interpolate:
                results = interpolate_tracks(results)

            if output_dir is not None:
                _log.info("WRITE RESULTS")
                seq.write_results(results, output_dir)
        else:
            _log.info("LOAD RESULTS")

        if seq.no_gt:
            _log.info("NO GT AVAILABLE")
        else:
            mot_accum = get_mot_accum(results, seq_loader)
            mot_accums.append(mot_accum)

            if verbose:
                mot_events = mot_accum.mot_events
                reid_events = mot_events[mot_events['Type'] == 'SWITCH']
                match_events = mot_events[mot_events['Type'] == 'MATCH']

                switch_gaps = []
                for index, event in reid_events.iterrows():
                    frame_id, _ = index
                    match_events_oid = match_events[match_events['OId'] == event['OId']]
                    match_events_oid_earlier = match_events_oid[
                        match_events_oid.index.get_level_values('FrameId') < frame_id]

                    if not match_events_oid_earlier.empty:
                        match_events_oid_earlier_frame_ids = \
                            match_events_oid_earlier.index.get_level_values('FrameId')
                        last_occurrence = match_events_oid_earlier_frame_ids.max()
                        switch_gap = frame_id - last_occurrence
                        switch_gaps.append(switch_gap)

                switch_gaps_hist = None
                if switch_gaps:
                    switch_gaps_hist, _ = np.histogram(
                        switch_gaps, bins=list(range(0, max(switch_gaps) + 10, 10)))
                    switch_gaps_hist = switch_gaps_hist.tolist()

                _log.info(f'SWITCH_GAPS_HIST (bin_width=10): {switch_gaps_hist}')

        if output_dir is not None and write_images:
            _log.info("PLOT SEQ")
            plot_sequence(
                results, seq_loader, osp.join(output_dir, dataset_name, str(seq)),
                write_images, generate_attention_maps)

    if time_total:
        _log.info(f"RUNTIME ALL SEQS (w/o EVAL or IMG WRITE): "
                  f"{time_total:.2f} s for {num_frames} frames "
                  f"({num_frames / time_total:.2f} Hz)")

    if obj_detector_model is None:
        _log.info("EVAL:")

        summary, str_summary = evaluate_mot_accums(
            mot_accums,
            [str(s) for s in dataset if not s.no_gt])

        _log.info(f'\n{str_summary}')

        return summary

    return mot_accums
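
The `frame_range` config in `main` above selects a fractional window of each sequence before tracking. A dependency-free sketch of the index arithmetic (a plain list stands in for the sequence and `torch.utils.data.Subset`):

```python
seq = list(range(100))  # stand-in for a 100-frame sequence
frame_range = {'start': 0.5, 'end': 1.0}

# Same arithmetic as in main(): fractions of the sequence length
# become absolute frame indices.
start_frame = int(frame_range['start'] * len(seq))
end_frame = int(frame_range['end'] * len(seq))
frames = seq[start_frame:end_frame]

assert (start_frame, end_frame) == (50, 100)
assert len(frames) == 50 and frames[0] == 50
```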


================================================
FILE: src/track_param_search.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from itertools import product

import numpy as np

from track import ex


if __name__ == "__main__":


    # configs = [
    #     {'dataset_name': ["MOT17-02-FRCNN", "MOT17-10-FRCNN", "MOT17-13-FRCNN"],
    #      'obj_detect_checkpoint_file': 'models/mot17det_train_cross_val_1_mots_vis_track_bbox_proposals_track_encoding_bbox_proposals_prev_frame_5/checkpoint_best_MOTA.pth'},
    #     {'dataset_name': ["MOT17-04-FRCNN", "MOT17-11-FRCNN"],
    #      'obj_detect_checkpoint_file': 'models/mot17det_train_cross_val_2_mots_vis_track_bbox_proposals_track_encoding_bbox_proposals_prev_frame_5/checkpoint_best_MOTA.pth'},
    #     {'dataset_name': ["MOT17-05-FRCNN", "MOT17-09-FRCNN"],
    #      'obj_detect_checkpoint_file': 'models/mot17det_train_cross_val_3_mots_vis_track_bbox_proposals_track_encoding_bbox_proposals_prev_frame_5/checkpoint_best_MOTA.pth'},
    # ]

    # configs = [
    #     {'dataset_name': ["MOT17-02-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_1_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    #     {'dataset_name': ["MOT17-04-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_2_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    #     {'dataset_name': ["MOT17-05-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_3_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    #     {'dataset_name': ["MOT17-09-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_4_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    #     {'dataset_name': ["MOT17-10-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_5_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    #     {'dataset_name': ["MOT17-11-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_6_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    #     {'dataset_name': ["MOT17-13-FRCNN"],
    #      'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot17_train_7_no_pretrain_deformable/checkpoint_best_BBOX_AP_IoU_0_50-0_95.pth'},
    # ]

    # dataset_name = ["MOT17-02-FRCNN", "MOT17-04-FRCNN", "MOT17-05-FRCNN", "MOT17-09-FRCNN", "MOT17-10-FRCNN", "MOT17-11-FRCNN", "MOT17-13-FRCNN"]

    # general_tracker_cfg = {'public_detections': False, 'reid_sim_only': True, 'reid_greedy_matching': False}
    general_tracker_cfg = {'public_detections': 'min_iou_0_5'}
    # general_tracker_cfg = {'public_detections': False}

    # dataset_name = 'MOT17-TRAIN-FRCNN'
    dataset_name = 'MOT17-TRAIN-ALL'
    # dataset_name = 'MOT20-TRAIN'

    configs = [
        {'dataset_name': dataset_name,

         'frame_range': {'start': 0.5},
         'obj_detect_checkpoint_file': '/storage/user/meinhard/fair_track/models/mot_mot17_train_cross_val_frame_0_0_to_0_5_coco_pretrained_num_queries_500_batch_size=2_num_gpus_7_num_classes_20_AP_det_overflow_boxes_True_prev_frame_rnd_augs_0_2_uniform_false_negative_prob_multi_frame_hidden_dim_288_sep_encoders_batch_queries/checkpoint_epoch_50.pth'},
    ]

    tracker_param_grids = {
        # 'detection_obj_score_thresh': [0.3, 0.4, 0.5, 0.6],
        # 'track_obj_score_thresh': [0.3, 0.4, 0.5, 0.6],
        'detection_obj_score_thresh': [0.4],
        'track_obj_score_thresh': [0.4],
        # 'detection_nms_thresh': [0.95, 0.9, 0.0],
        # 'track_nms_thresh': [0.95, 0.9, 0.0],
        # 'detection_nms_thresh': [0.9],
        # 'track_nms_thresh': [0.9],
        # 'reid_sim_threshold': [0.0, 0.5, 1.0, 10, 50, 100, 200],
        'reid_score_thresh': [0.4],
        # 'inactive_patience': [-1, 5, 10, 20, 30, 40, 50]
        # 'reid_score_thresh': [0.8],
        # 'inactive_patience': [-1],
        # 'inactive_patience': [-1, 5, 10]
        }

    # compute all config combinations
    tracker_param_cfgs = [dict(zip(tracker_param_grids, v))
                          for v in product(*tracker_param_grids.values())]

    # add empty metric arrays
    metrics = ['mota', 'idf1']
    tracker_param_cfgs = [
        {'config': {**general_tracker_cfg, **tracker_cfg}}
        for tracker_cfg in tracker_param_cfgs]

    for m in metrics:
        for tracker_cfg in tracker_param_cfgs:
            tracker_cfg[m] = []

    total_num_experiments = len(tracker_param_cfgs) * len(configs)
    print(f'NUM experiments: {total_num_experiments}')

    # run all tracker config combinations for all experiment configurations
    exp_counter = 1
    for config in configs:
        for tracker_cfg in tracker_param_cfgs:
            print(f"EXPERIMENT: {exp_counter}/{total_num_experiments}")

            config['tracker_cfg'] = tracker_cfg['config']
            run = ex.run(config_updates=config)
            eval_summary = run.result

            for m in metrics:
                tracker_cfg[m].append(eval_summary[m]['OVERALL'])

            exp_counter += 1

    # compute mean over all metrics
    for m in metrics:
        for tracker_cfg in tracker_param_cfgs:
            tracker_cfg[m] = np.array(tracker_cfg[m]).mean()

    for cfg in tracker_param_cfgs:
        print([cfg[m] for m in metrics], cfg['config'])

    # compute and print the best config per metric
    for m in metrics:
        best_metric_cfg_idx = np.array(
            [cfg[m] for cfg in tracker_param_cfgs]).argmax()

        print(f"BEST {m.upper()} CFG: {tracker_param_cfgs[best_metric_cfg_idx]['config']}")

    # TODO
    best_mota_plus_idf1_cfg_idx = np.array(
        [cfg['mota'] + cfg['idf1'] for cfg in tracker_param_cfgs]).argmax()
    print(f"BEST MOTA PLUS IDF1 CFG: {tracker_param_cfgs[best_mota_plus_idf1_cfg_idx]['config']}")
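
The `dict(zip(...))`/`product` idiom used above expands the parameter grids into one flat list of tracker configurations. A small self-contained example of the expansion:

```python
from itertools import product

grids = {'detection_obj_score_thresh': [0.4, 0.5],
         'track_obj_score_thresh': [0.4]}

# one dict per element of the cartesian product of all value lists
cfgs = [dict(zip(grids, values)) for values in product(*grids.values())]

assert cfgs == [
    {'detection_obj_score_thresh': 0.4, 'track_obj_score_thresh': 0.4},
    {'detection_obj_score_thresh': 0.5, 'track_obj_score_thresh': 0.4}]
```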


================================================
FILE: src/trackformer/__init__.py
================================================


================================================
FILE: src/trackformer/datasets/__init__.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Submodule interface.
"""
from argparse import Namespace
from pycocotools.coco import COCO
from torch.utils.data import Dataset, Subset
from torchvision.datasets import CocoDetection

from .coco import build as build_coco
from .crowdhuman import build_crowdhuman
from .mot import build_mot, build_mot_crowdhuman, build_mot_coco_person


def get_coco_api_from_dataset(dataset: Subset) -> COCO:
    """Return the underlying COCO API object from a (possibly wrapped) PyTorch dataset for COCO evaluation."""
    for _ in range(10):
        # if isinstance(dataset, CocoDetection):
        #     break
        if isinstance(dataset, Subset):
            dataset = dataset.dataset

    if not isinstance(dataset, CocoDetection):
        raise NotImplementedError

    return dataset.coco
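
`get_coco_api_from_dataset` peels up to ten layers of `Subset` wrappers to reach the underlying `CocoDetection` dataset. The unwrapping logic, sketched with a stand-in wrapper class so it runs without torch:

```python
class FakeSubset:
    """Stand-in for torch.utils.data.Subset, which stores the wrapped
    dataset in a .dataset attribute."""
    def __init__(self, dataset):
        self.dataset = dataset

base = 'coco_detection'                 # stand-in for a CocoDetection instance
wrapped = FakeSubset(FakeSubset(base))  # e.g. nested train/val re-splits

dataset = wrapped
for _ in range(10):
    if isinstance(dataset, FakeSubset):
        dataset = dataset.dataset

assert dataset == 'coco_detection'
```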


def build_dataset(split: str, args: Namespace) -> Dataset:
    """Helper function to build dataset for different splits ('train' or 'val')."""
    if args.dataset == 'coco':
        dataset = build_coco(split, args)
    elif args.dataset == 'coco_person':
        dataset = build_coco(split, args, 'person_keypoints')
    elif args.dataset == 'mot':
        dataset = build_mot(split, args)
    elif args.dataset == 'crowdhuman':
        dataset = build_crowdhuman(split, args)
    elif args.dataset == 'mot_crowdhuman':
        dataset = build_mot_crowdhuman(split, args)
    elif args.dataset == 'mot_coco_person':
        dataset = build_mot_coco_person(split, args)
    elif args.dataset == 'coco_panoptic':
        # to avoid making panopticapi required for coco
        from .coco_panoptic import build as build_coco_panoptic
        dataset = build_coco_panoptic(split, args)
    else:
        raise ValueError(f'dataset {args.dataset} not supported')

    return dataset


================================================
FILE: src/trackformer/datasets/coco.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
COCO dataset which returns image_id for evaluation.

Mostly copy-paste from https://github.com/pytorch/vision/blob/13b35ff/references/detection/coco_utils.py
"""
import copy
import random
from pathlib import Path
from collections import Counter

import torch
import torch.nn.functional as F
import torch.utils.data
import torchvision
from pycocotools import mask as coco_mask

from . import transforms as T


class CocoDetection(torchvision.datasets.CocoDetection):

    fields = ["labels", "area", "iscrowd", "boxes", "track_ids", "masks"]

    def __init__(self, img_folder, ann_file, transforms, norm_transforms,
                 return_masks=False, overflow_boxes=False, remove_no_obj_imgs=True,
                 prev_frame=False, prev_frame_rnd_augs=0.0, prev_prev_frame=False,
                 min_num_objects=0):
        super(CocoDetection, self).__init__(img_folder, ann_file)
        self._transforms = transforms
        self._norm_transforms = norm_transforms
        self.prepare = ConvertCocoPolysToMask(return_masks, overflow_boxes)

        annos_image_ids = [
            ann['image_id'] for ann in self.coco.loadAnns(self.coco.getAnnIds())]
        if remove_no_obj_imgs:
            self.ids = sorted(list(set(annos_image_ids)))

        if min_num_objects:
            counter = Counter(annos_image_ids)

            self.ids = [i for i in self.ids if counter[i] >= min_num_objects]

        self._prev_frame = prev_frame
        self._prev_frame_rnd_augs = prev_frame_rnd_augs
        self._prev_prev_frame = prev_prev_frame

    def _getitem_from_id(self, image_id, random_state=None, random_jitter=True):
        # If a random state is given, we run the shared data augmentations
        # with that state (so simulated adjacent frames receive identical
        # transforms) and apply the random jitter afterwards. This ensures
        # that adjacent frames still have independent jitter.
        if random_state is not None:
            curr_random_state = {
                'random': random.getstate(),
                'torch': torch.random.get_rng_state()}
            random.setstate(random_state['random'])
            torch.random.set_rng_state(random_state['torch'])

        img, target = super(CocoDetection, self).__getitem__(image_id)
        image_id = self.ids[image_id]
        target = {'image_id': image_id,
                  'annotations': target}
        img, target = self.prepare(img, target)

        if 'track_ids' not in target:
            target['track_ids'] = torch.arange(len(target['labels']))

        if self._transforms is not None:
            img, target = self._transforms(img, target)

        # move ignored annotations into separate "*_ignore" fields
        ignore = target.pop("ignore").bool()
        for field in self.fields:
            if field in target:
                target[f"{field}_ignore"] = target[field][ignore]
                target[field] = target[field][~ignore]

        if random_state is not None:
            random.setstate(curr_random_state['random'])
            torch.random.set_rng_state(curr_random_state['torch'])

        if random_jitter:
            img, target = self._add_random_jitter(img, target)
        img, target = self._norm_transforms(img, target)

        return img, target

    # TODO: add to the transforms and merge norm_transforms into transforms
    def _add_random_jitter(self, img, target):
        if self._prev_frame_rnd_augs:
            orig_w, orig_h = img.size

            crop_width = random.randint(
                int((1.0 - self._prev_frame_rnd_augs) * orig_w),
                orig_w)
            crop_height = int(orig_h * crop_width / orig_w)

            transform = T.RandomCrop((crop_height, crop_width))
            img, target = transform(img, target)

            img, target = T.resize(img, target, (orig_w, orig_h))

        return img, target

    # def _add_random_jitter(self, img, target):
    #     if self._prev_frame_rnd_augs: # and random.uniform(0, 1) < 0.5:
    #         orig_w, orig_h = img.size

    #         width, height = img.size
    #         size = random.randint(
    #             int((1.0 - self._prev_frame_rnd_augs) * min(width, height)),
    #             int((1.0 + self._prev_frame_rnd_augs) * min(width, height)))
    #         img, target = T.RandomResize([size])(img, target)

    #         width, height = img.size
    #         min_size = (
    #             int((1.0 - self._prev_frame_rnd_augs) * width),
    #             int((1.0 - self._prev_frame_rnd_augs) * height))
    #         transform = T.RandomSizeCrop(min_size=min_size)
    #         img, target = transform(img, target)

    #         width, height = img.size
    #         if orig_w < width:
    #             img, target = T.RandomCrop((height, orig_w))(img, target)
    #         else:
    #             total_pad = orig_w - width
    #             pad_left = random.randint(0, total_pad)
    #             pad_right = total_pad - pad_left

    #             padding = (pad_left, 0, pad_right, 0)
    #             img, target = T.pad(img, target, padding)

    #         width, height = img.size
    #         if orig_h < height:
    #             img, target = T.RandomCrop((orig_h, width))(img, target)
    #         else:
    #             total_pad = orig_h - height
    #             pad_top = random.randint(0, total_pad)
    #             pad_bottom = total_pad - pad_top

    #             padding = (0, pad_top, 0, pad_bottom)
    #             img, target = T.pad(img, target, padding)

    #     return img, target

    def __getitem__(self, idx):
        random_state = {
            'random': random.getstate(),
            'torch': torch.random.get_rng_state()}
        img, target = self._getitem_from_id(idx, random_state, random_jitter=False)

        if self._prev_frame:
            # PREV
            prev_img, prev_target = self._getitem_from_id(idx, random_state)
            target['prev_image'] = prev_img
            target['prev_target'] = prev_target

            if self._prev_prev_frame:
                # PREV PREV
                prev_prev_img, prev_prev_target = self._getitem_from_id(idx, random_state)
                target['prev_prev_image'] = prev_prev_img
                target['prev_prev_target'] = prev_prev_target

        return img, target

    def write_result_files(self, *args):
        pass
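
`_add_random_jitter` above crops a random window of at least `(1 - prev_frame_rnd_augs) * width` and resizes it back to the original resolution, simulating camera motion between (simulated) adjacent frames. The size arithmetic in isolation:

```python
import random

random.seed(42)  # deterministic for the example
orig_w, orig_h = 640, 480
prev_frame_rnd_augs = 0.2

# crop width is sampled between 80% and 100% of the original width
crop_width = random.randint(int((1.0 - prev_frame_rnd_augs) * orig_w), orig_w)
crop_height = int(orig_h * crop_width / orig_w)  # keep aspect ratio

assert int(0.8 * orig_w) <= crop_width <= orig_w
# aspect ratio is preserved up to integer rounding
assert abs(crop_width / crop_height - orig_w / orig_h) < 0.01
```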


def convert_coco_poly_to_mask(segmentations, height, width):
    masks = []
    for polygons in segmentations:
        if isinstance(polygons, dict):
            rles = {'size': polygons['size'],
                    'counts': polygons['counts'].encode(encoding='UTF-8')}
        else:
            rles = coco_mask.frPyObjects(polygons, height, width)
        mask = coco_mask.decode(rles)
        if len(mask.shape) < 3:
            mask = mask[..., None]
        mask = torch.as_tensor(mask, dtype=torch.uint8)
        mask = mask.any(dim=2)
        masks.append(mask)
    if masks:
        masks = torch.stack(masks, dim=0)
    else:
        masks = torch.zeros((0, height, width), dtype=torch.uint8)
    return masks


class ConvertCocoPolysToMask(object):
    def __init__(self, return_masks=False, overflow_boxes=False):
        self.return_masks = return_masks
        self.overflow_boxes = overflow_boxes

    def __call__(self, image, target):
        w, h = image.size

        image_id = target["image_id"]
        image_id = torch.tensor([image_id])

        anno = target["annotations"]

        anno = [obj for obj in anno if 'iscrowd' not in obj or obj['iscrowd'] == 0]

        boxes = [obj["bbox"] for obj in anno]
        # guard against no boxes via resizing
        boxes = torch.as_tensor(boxes, dtype=torch.float32).reshape(-1, 4)
        # x,y,w,h --> x,y,x,y
        boxes[:, 2:] += boxes[:, :2]
        if not self.overflow_boxes:
            boxes[:, 0::2].clamp_(min=0, max=w)
            boxes[:, 1::2].clamp_(min=0, max=h)

        classes = [obj["category_id"] for obj in anno]
        classes = torch.tensor(classes, dtype=torch.int64)

        if self.return_masks:
            segmentations = [obj["segmentation"] for obj in anno]
            masks = convert_coco_poly_to_mask(segmentations, h, w)

        keypoints = None
        if anno and "keypoints" in anno[0]:
            keypoints = [obj["keypoints"] for obj in anno]
            keypoints = torch.as_tensor(keypoints, dtype=torch.float32)
            num_keypoints = keypoints.shape[0]
            if num_keypoints:
                keypoints = keypoints.view(num_keypoints, -1, 3)

        keep = (boxes[:, 3] > boxes[:, 1]) & (boxes[:, 2] > boxes[:, 0])

        boxes = boxes[keep]
        classes = classes[keep]
        if self.return_masks:
            masks = masks[keep]
        if keypoints is not None:
            keypoints = keypoints[keep]

        target = {}
        target["boxes"] = boxes
        target["labels"] = classes - 1

        if self.return_masks:
            target["masks"] = masks
        target["image_id"] = image_id
        if keypoints is not None:
            target["keypoints"] = keypoints

        if anno and "track_id" in anno[0]:
            track_ids = torch.tensor([obj["track_id"] for obj in anno])
            target["track_ids"] = track_ids[keep]
        elif not len(boxes):
            target["track_ids"] = torch.empty(0)

        # for conversion to coco api
        area = torch.tensor([obj["area"] for obj in anno])
        iscrowd = torch.tensor([obj["iscrowd"] if "iscrowd" in obj else 0 for obj in anno])
        ignore = torch.tensor([obj["ignore"] if "ignore" in obj else 0 for obj in anno])

        target["area"] = area[keep]
        target["iscrowd"] = iscrowd[keep]
        target["ignore"] = ignore[keep]

        target["orig_size"] = torch.as_tensor([int(h), int(w)])
        target["size"] = torch.as_tensor([int(h), int(w)])

        return image, target
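
`ConvertCocoPolysToMask` converts COCO's `[x, y, w, h]` boxes to `[x1, y1, x2, y2]` and drops degenerate boxes via the `keep` mask. The same steps with plain lists instead of tensors:

```python
# COCO format: [x, y, width, height]; the second box has zero width.
boxes_xywh = [[10, 20, 30, 40], [5, 5, 0, 10]]

# x,y,w,h --> x1,y1,x2,y2 (the tensor version does boxes[:, 2:] += boxes[:, :2])
boxes_xyxy = [[x, y, x + w, y + h] for x, y, w, h in boxes_xywh]

# keep only boxes with positive width and height
keep = [(x2 > x1) and (y2 > y1) for x1, y1, x2, y2 in boxes_xyxy]
boxes_kept = [box for box, k in zip(boxes_xyxy, keep) if k]

assert boxes_kept == [[10, 20, 40, 60]]
```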


def make_coco_transforms(image_set, img_transform=None, overflow_boxes=False):
    normalize = T.Compose([
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    # default
    max_size = 1333
    val_width = 800
    scales = [480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800]
    random_resizes = [400, 500, 600]
    random_size_crop = (384, 600)

    if img_transform is not None:
        scale = img_transform.max_size / max_size
        max_size = img_transform.max_size
        val_width = img_transform.val_width

        # scale all with respect to custom max_size
        scales = [int(scale * s) for s in scales]
        random_resizes = [int(scale * s) for s in random_resizes]
        random_size_crop = [int(scale * s) for s in random_size_crop]

    if image_set == 'train':
        transforms = [
            T.RandomHorizontalFlip(),
            T.RandomSelect(
                T.RandomResize(scales, max_size=max_size),
                T.Compose([
                    T.RandomResize(random_resizes),
                    T.RandomSizeCrop(*random_size_crop, overflow_boxes=overflow_boxes),
                    T.RandomResize(scales, max_size=max_size),
                ])
            ),
        ]
    elif image_set == 'val':
        transforms = [
            T.RandomResize([val_width], max_size=max_size),
        ]
    else:
        raise ValueError(f'unknown {image_set}')

    # transforms.append(normalize)
    return T.Compose(transforms), normalize
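
When a custom `img_transform.max_size` is given, `make_coco_transforms` rescales all training scales relative to the default `max_size` of 1333. The rescaling in isolation (the custom value below is made up for the example):

```python
default_max_size = 1333
scales = [480, 640, 800]

custom_max_size = 1000  # hypothetical img_transform.max_size
scale = custom_max_size / default_max_size

# every resize target shrinks by the same factor as max_size
scaled = [int(scale * s) for s in scales]
assert scaled == [360, 480, 600]
```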


def build(image_set, args, mode='instances'):
    root = Path(args.coco_path)
    assert root.exists(), f'provided COCO path {root} does not exist'

    # image_set is 'train' or 'val'
    split = getattr(args, f"{image_set}_split")

    splits = {
        "train": (root / "train2017", root / "annotations" / f'{mode}_train2017.json'),
        "val": (root / "val2017", root / "annotations" / f'{mode}_val2017.json'),
    }

    if image_set == 'train':
        prev_frame_rnd_augs = args.coco_and_crowdhuman_prev_frame_rnd_augs
    elif image_set == 'val':
        prev_frame_rnd_augs = 0.0

    transforms, norm_transforms = make_coco_transforms(image_set, args.img_transform, args.overflow_boxes)
    img_folder, ann_file = splits[split]
    dataset = CocoDetection(
        img_folder, ann_file, transforms, norm_transforms,
        return_masks=args.masks,
        prev_frame=args.tracking,
        prev_frame_rnd_augs=prev_frame_rnd_augs,
        prev_prev_frame=args.track_prev_prev_frame,
        min_num_objects=args.coco_min_num_objects)

    return dataset


================================================
FILE: src/trackformer/datasets/coco_eval.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
COCO evaluator that works in distributed mode.

Mostly copy-paste from https://github.com/pytorch/vision/blob/edfd5a7/references/detection/coco_eval.py
The difference is that there is less copy-pasting from pycocotools
in the end of the file, as python3 can suppress prints with contextlib
"""
import os
import contextlib
import copy
import numpy as np
import torch

from pycocotools.cocoeval import COCOeval
from pycocotools.coco import COCO
import pycocotools.mask as mask_util

from ..util.misc import all_gather


class CocoEvaluator(object):
    def __init__(self, coco_gt, iou_types):
        assert isinstance(iou_types, (list, tuple))
        coco_gt = copy.deepcopy(coco_gt)
        self.coco_gt = coco_gt

        self.iou_types = iou_types
        self.coco_eval = {}
        for iou_type in iou_types:
            self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)

        self.img_ids = []
        self.eval_imgs = {k: [] for k in iou_types}

    def update(self, predictions):
        img_ids = list(np.unique(list(predictions.keys())))
        self.img_ids.extend(img_ids)

        for prediction in predictions.values():
            prediction["labels"] += 1

        for iou_type in self.iou_types:
            results = self.prepare(predictions, iou_type)

            # suppress pycocotools prints
            with open(os.devnull, 'w') as devnull:
                with contextlib.redirect_stdout(devnull):
                    coco_dt = COCO.loadRes(self.coco_gt, results) if results else COCO()
            coco_eval = self.coco_eval[iou_type]

            coco_eval.cocoDt = coco_dt
            coco_eval.params.imgIds = list(img_ids)
            img_ids, eval_imgs = evaluate(coco_eval)

            self.eval_imgs[iou_type].append(eval_imgs)

    def synchronize_between_processes(self):
        for iou_type in self.iou_types:
            self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2)
            create_common_coco_eval(
                self.coco_eval[iou_type],
                self.img_ids,
                self.eval_imgs[iou_type])

    def accumulate(self):
        for coco_eval in self.coco_eval.values():
            coco_eval.accumulate()

    def summarize(self):
        for iou_type, coco_eval in self.coco_eval.items():
            print(f"IoU metric: {iou_type}")
            coco_eval.summarize()

    def prepare(self, predictions, iou_type):
        if iou_type == "bbox":
            return self.prepare_for_coco_detection(predictions)
        elif iou_type == "segm":
            return self.prepare_for_coco_segmentation(predictions)
        elif iou_type == "keypoints":
            return self.prepare_for_coco_keypoint(predictions)
        else:
            raise ValueError("Unknown iou type {}".format(iou_type))

    def prepare_for_coco_detection(self, predictions):
        coco_results = []
        for original_id, prediction in predictions.items():
            if len(prediction) == 0:
                continue

            boxes = prediction["boxes"]
            boxes = convert_to_xywh(boxes).tolist()
            scores = prediction["scores"].tolist()
            labels = prediction["labels"].tolist()

            coco_results.extend(
                [
                    {
                        "image_id": original_id,
                        "category_id": labels[k],
                        "bbox": box,
                        "score": scores[k],
                    }
                    for k, box in enumerate(boxes)
                ]
            )
        return coco_results

    def prepare_for_coco_segmentation(self, predictions):
        coco_results = []
        for original_id, prediction in predictions.items():
            if len(prediction) == 0:
                continue

            scores = prediction["scores"]
            labels = prediction["labels"]
            masks = prediction["masks"]

            masks = masks > 0.5

            scores = prediction["scores"].tolist()
            labels = prediction["labels"].tolist()

            rles = [
                mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order="F"))[0]
                for mask in masks
            ]
            for rle in rles:
                rle["counts"] = rle["counts"].decode("utf-8")

            coco_results.extend(
                [
                    {
                        "image_id": original_id,
                        "category_id": labels[k],
                        "segmentation": rle,
                        "score": scores[k],
                    }
                    for k, rle in enumerate(rles)
                ]
            )
        return coco_results

    def prepare_for_coco_keypoint(self, predictions):
        coco_results = []
        for original_id, prediction in predictions.items():
            if len(prediction) == 0:
                continue

            boxes = prediction["boxes"]
            boxes = convert_to_xywh(boxes).tolist()
            scores = prediction["scores"].tolist()
            labels = prediction["labels"].tolist()
            keypoints = prediction["keypoints"]
            keypoints = keypoints.flatten(start_dim=1).tolist()

            coco_results.extend(
                [
                    {
                        "image_id": original_id,
                        "category_id": labels[k],
                        'keypoints': keypoint,
                        "score": scores[k],
                    }
                    for k, keypoint in enumerate(keypoints)
                ]
            )
        return coco_results


def convert_to_xywh(boxes):
    xmin, ymin, xmax, ymax = boxes.unbind(1)
    return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)


def merge(img_ids, eval_imgs):
    all_img_ids = all_gather(img_ids)
    all_eval_imgs = all_gather(eval_imgs)

    merged_img_ids = []
    for p in all_img_ids:
        merged_img_ids.extend(p)

    merged_eval_imgs = []
    for p in all_eval_imgs:
        merged_eval_imgs.append(p)

    merged_img_ids = np.array(merged_img_ids)
    merged_eval_imgs = np.concatenate(merged_eval_imgs, 2)

    # keep only unique (and in sorted order) images
    merged_img_ids, idx = np.unique(merged_img_ids, return_index=True)
    merged_eval_imgs = merged_eval_imgs[..., idx]

    return merged_img_ids, merged_eval_imgs


def create_common_coco_eval(coco_eval, img_ids, eval_imgs):
    img_ids, eval_imgs = merge(img_ids, eval_imgs)
    img_ids = list(img_ids)
    eval_imgs = list(eval_imgs.flatten())

    coco_eval.evalImgs = eval_imgs
    coco_eval.params.imgIds = img_ids
    coco_eval._paramsEval = copy.deepcopy(coco_eval.params)


#################################################################
# From pycocotools, just removed the prints and fixed
# a Python3 bug about unicode not defined
#################################################################


def evaluate(self):
    '''
    Run per image evaluation on given images and store results (a list of dict) in self.evalImgs
    :return: None
    '''
    # tic = time.time()
    # print('Running per image evaluation...')
    p = self.params
    # add backward compatibility if useSegm is specified in params
    if p.useSegm is not None:
        p.iouType = 'segm' if p.useSegm == 1 else 'bbox'
        print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType))
    # print('Evaluate annotation type *{}*'.format(p.iouType))
    p.imgIds = list(np.unique(p.imgIds))
    if p.useCats:
        p.catIds = list(np.unique(p.catIds))
    p.maxDets = sorted(p.maxDets)
    self.params = p

    self._prepare()
    # loop through images, area range, max detection number
    catIds = p.catIds if p.useCats else [-1]

    if p.iouType == 'segm' or p.iouType == 'bbox':
        computeIoU = self.computeIoU
    elif p.iouType == 'keypoints':
        computeIoU = self.computeOks
    self.ious = {
        (imgId, catId): computeIoU(imgId, catId)
        for imgId in p.imgIds
        for catId in catIds}

    evaluateImg = self.evaluateImg
    maxDet = p.maxDets[-1]
    evalImgs = [
        evaluateImg(imgId, catId, areaRng, maxDet)
        for catId in catIds
        for areaRng in p.areaRng
        for imgId in p.imgIds
    ]
    # this is NOT in the pycocotools code, but could be done outside
    evalImgs = np.asarray(evalImgs).reshape(len(catIds), len(p.areaRng), len(p.imgIds))
    self._paramsEval = copy.deepcopy(self.params)
    # toc = time.time()
    # print('DONE (t={:0.2f}s).'.format(toc-tic))
    return p.imgIds, evalImgs

#################################################################
# end of straight copy from pycocotools, just removing the prints
#################################################################
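The box conversion above can be illustrated without torch. This is a minimal sketch of the same arithmetic as `convert_to_xywh`: COCO result files expect `[x, y, width, height]` while the model outputs corner format `[x1, y1, x2, y2]`. The function name `xyxy_to_xywh` and the sample box are introduced here for illustration only.

```python
def xyxy_to_xywh(box):
    """Convert a single [x1, y1, x2, y2] box to COCO [x, y, w, h]."""
    x1, y1, x2, y2 = box
    # width and height are simple corner differences
    return [x1, y1, x2 - x1, y2 - y1]

print(xyxy_to_xywh([10.0, 20.0, 50.0, 80.0]))  # [10.0, 20.0, 40.0, 60.0]
```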


================================================
FILE: src/trackformer/datasets/coco_panoptic.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import json
from pathlib import Path

import numpy as np
import torch
from PIL import Image

from panopticapi.utils import rgb2id
from util.box_ops import masks_to_boxes

from .coco import make_coco_transforms


class CocoPanoptic:
    def __init__(self, img_folder, ann_folder, ann_file, transforms=None, norm_transforms=None, return_masks=True):
        with open(ann_file, 'r') as f:
            self.coco = json.load(f)

        # sort 'images' field so that they are aligned with 'annotations'
        # i.e., in alphabetical order
        self.coco['images'] = sorted(self.coco['images'], key=lambda x: x['id'])
        # sanity check
        if "annotations" in self.coco:
            for img, ann in zip(self.coco['images'], self.coco['annotations']):
                assert img['file_name'][:-4] == ann['file_name'][:-4]

        self.img_folder = img_folder
        self.ann_folder = ann_folder
        self.ann_file = ann_file
        self.transforms = transforms
        self.norm_transforms = norm_transforms
        self.return_masks = return_masks

    def __getitem__(self, idx):
        ann_info = self.coco['annotations'][idx] if "annotations" in self.coco else self.coco['images'][idx]
        img_path = Path(self.img_folder) / ann_info['file_name'].replace('.png', '.jpg')
        ann_path = Path(self.ann_folder) / ann_info['file_name']

        img = Image.open(img_path).convert('RGB')
        w, h = img.size
        if "segments_info" in ann_info:
            masks = np.asarray(Image.open(ann_path), dtype=np.uint32)
            masks = rgb2id(masks)

            ids = np.array([ann['id'] for ann in ann_info['segments_info']])
            masks = masks == ids[:, None, None]

            masks = torch.as_tensor(masks, dtype=torch.uint8)
            labels = torch.tensor([ann['category_id'] for ann in ann_info['segments_info']], dtype=torch.int64)

        target = {}
        target['image_id'] = torch.tensor([ann_info['image_id'] if "image_id" in ann_info else ann_info["id"]])
        if self.return_masks:
            target['masks'] = masks
        target['labels'] = labels

        target["boxes"] = masks_to_boxes(masks)

        target['size'] = torch.as_tensor([int(h), int(w)])
        target['orig_size'] = torch.as_tensor([int(h), int(w)])
        if "segments_info" in ann_info:
            for name in ['iscrowd', 'area']:
                target[name] = torch.tensor([ann[name] for ann in ann_info['segments_info']])

        if self.transforms is not None:
            img, target = self.transforms(img, target)
        if self.norm_transforms is not None:
            img, target = self.norm_transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.coco['images'])

    def get_height_and_width(self, idx):
        img_info = self.coco['images'][idx]
        height = img_info['height']
        width = img_info['width']
        return height, width


def build(image_set, args):
    img_folder_root = Path(args.coco_path)
    ann_folder_root = Path(args.coco_panoptic_path)
    assert img_folder_root.exists(), f'provided COCO path {img_folder_root} does not exist'
    assert ann_folder_root.exists(), f'provided COCO panoptic path {ann_folder_root} does not exist'
    mode = 'panoptic'
    PATHS = {
        "train": ("train2017", Path("annotations") / f'{mode}_train2017.json'),
        "val": ("val2017", Path("annotations") / f'{mode}_val2017.json'),
    }

    img_folder, ann_file = PATHS[image_set]
    img_folder_path = img_folder_root / img_folder
    ann_folder = ann_folder_root / f'{mode}_{img_folder}'
    ann_file = ann_folder_root / ann_file

    transforms, norm_transforms = make_coco_transforms(image_set, args.img_transform, args.overflow_boxes)
    dataset = CocoPanoptic(img_folder_path, ann_folder, ann_file,
                           transforms=transforms, norm_transforms=norm_transforms, return_masks=args.masks)

    return dataset
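A hedged sketch of the broadcasting trick used in `CocoPanoptic.__getitem__`: the 2D segment-id map decoded by `rgb2id` is compared against the list of segment ids, producing one boolean mask per segment in a single vectorized comparison. The id map and ids below are made-up toy data.

```python
import numpy as np

id_map = np.array([[1, 1, 2],
                   [1, 3, 2]])       # H x W map of segment ids
ids = np.array([1, 2, 3])            # segment ids from segments_info

# broadcasting: (H, W) compared against (num_segments, 1, 1)
masks = id_map == ids[:, None, None]  # shape (num_segments, H, W)

assert masks.shape == (3, 2, 3)
assert masks[0].sum() == 3           # three pixels belong to segment 1
```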


================================================
FILE: src/trackformer/datasets/crowdhuman.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
CrowdHuman dataset with tracking training augmentations.
"""
from pathlib import Path

from .coco import CocoDetection, make_coco_transforms


def build_crowdhuman(image_set, args):
    root = Path(args.crowdhuman_path)
    assert root.exists(), f'provided CrowdHuman path {root} does not exist'

    split = getattr(args, f"{image_set}_split")

    img_folder = root / split
    ann_file = root / f'annotations/{split}.json'

    if image_set == 'train':
        prev_frame_rnd_augs = args.coco_and_crowdhuman_prev_frame_rnd_augs
    elif image_set == 'val':
        prev_frame_rnd_augs = 0.0

    transforms, norm_transforms = make_coco_transforms(
        image_set, args.img_transform, args.overflow_boxes)
    dataset = CocoDetection(
        img_folder, ann_file, transforms, norm_transforms,
        return_masks=args.masks,
        prev_frame=args.tracking,
        prev_frame_rnd_augs=prev_frame_rnd_augs)

    return dataset


================================================
FILE: src/trackformer/datasets/mot.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
MOT dataset with tracking training augmentations.
"""
import bisect
import copy
import csv
import os
import random
from pathlib import Path

import torch

from . import transforms as T
from .coco import CocoDetection, make_coco_transforms
from .coco import build as build_coco
from .crowdhuman import build_crowdhuman


class MOT(CocoDetection):

    def __init__(self, *args, prev_frame_range=1, **kwargs):
        super(MOT, self).__init__(*args, **kwargs)

        self._prev_frame_range = prev_frame_range

    @property
    def sequences(self):
        return self.coco.dataset['sequences']

    @property
    def frame_range(self):
        if 'frame_range' in self.coco.dataset:
            return self.coco.dataset['frame_range']
        else:
            return {'start': 0, 'end': 1.0}

    def seq_length(self, idx):
        return self.coco.imgs[idx]['seq_length']

    def sample_weight(self, idx):
        return 1.0 / self.seq_length(idx)

    def __getitem__(self, idx):
        random_state = {
            'random': random.getstate(),
            'torch': torch.random.get_rng_state()}

        img, target = self._getitem_from_id(idx, random_state, random_jitter=False)

        if self._prev_frame:
            frame_id = self.coco.imgs[idx]['frame_id']

            # PREV
            # first frame has no previous frame
            prev_frame_id = random.randint(
                max(0, frame_id - self._prev_frame_range),
                min(frame_id + self._prev_frame_range, self.seq_length(idx) - 1))
            prev_image_id = self.coco.imgs[idx]['first_frame_image_id'] + prev_frame_id

            prev_img, prev_target = self._getitem_from_id(prev_image_id, random_state)
            target['prev_image'] = prev_img
            target['prev_target'] = prev_target

            if self._prev_prev_frame:
                # PREV PREV frame equidistant as prev_frame
                prev_prev_frame_id = min(max(0, prev_frame_id + prev_frame_id - frame_id), self.seq_length(idx) - 1)
                prev_prev_image_id = self.coco.imgs[idx]['first_frame_image_id'] + prev_prev_frame_id

                prev_prev_img, prev_prev_target = self._getitem_from_id(prev_prev_image_id, random_state)
                target['prev_prev_image'] = prev_prev_img
                target['prev_prev_target'] = prev_prev_target

        return img, target

    def write_result_files(self, results, output_dir):
        """Write the detections in the format for the MOT17Det submission

        Each file contains these lines:
        <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>

        """

        files = {}
        for image_id, res in results.items():
            img = self.coco.loadImgs(image_id)[0]
            file_name_without_ext = os.path.splitext(img['file_name'])[0]
            seq_name, frame = file_name_without_ext.split('_')
            frame = int(frame)

            outfile = os.path.join(output_dir, f"{seq_name}.txt")

            # create an empty box list for each new output file
            if outfile not in files:
                files[outfile] = []

            for box, score in zip(res['boxes'], res['scores']):
                if score <= 0.7:
                    continue
                x1 = box[0].item()
                y1 = box[1].item()
                x2 = box[2].item()
                y2 = box[3].item()
                files[outfile].append(
                    [frame, -1, x1, y1, x2 - x1, y2 - y1, score.item(), -1, -1, -1])

        for k, v in files.items():
            with open(k, "w") as of:
                writer = csv.writer(of, delimiter=',')
                for d in v:
                    writer.writerow(d)


class WeightedConcatDataset(torch.utils.data.ConcatDataset):

    def sample_weight(self, idx):
        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
        if dataset_idx == 0:
            sample_idx = idx
        else:
            sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]

        if hasattr(self.datasets[dataset_idx], 'sample_weight'):
            return self.datasets[dataset_idx].sample_weight(sample_idx)
        else:
            return 1 / len(self.datasets[dataset_idx])


def build_mot(image_set, args):
    if image_set == 'train':
        root = Path(args.mot_path_train)
        prev_frame_rnd_augs = args.track_prev_frame_rnd_augs
        prev_frame_range = args.track_prev_frame_range
    elif image_set == 'val':
        root = Path(args.mot_path_val)
        prev_frame_rnd_augs = 0.0
        prev_frame_range = 1
    else:
        raise ValueError(f'unknown image_set: {image_set}')

    assert root.exists(), f'provided MOT17Det path {root} does not exist'

    split = getattr(args, f"{image_set}_split")

    img_folder = root / split
    ann_file = root / f"annotations/{split}.json"

    transforms, norm_transforms = make_coco_transforms(
        image_set, args.img_transform, args.overflow_boxes)

    dataset = MOT(
        img_folder, ann_file, transforms, norm_transforms,
        prev_frame_range=prev_frame_range,
        return_masks=args.masks,
        overflow_boxes=args.overflow_boxes,
        remove_no_obj_imgs=False,
        prev_frame=args.tracking,
        prev_frame_rnd_augs=prev_frame_rnd_augs,
        prev_prev_frame=args.track_prev_prev_frame,
        )

    return dataset


def build_mot_crowdhuman(image_set, args):
    if image_set == 'train':
        args_crowdhuman = copy.deepcopy(args)
        args_crowdhuman.train_split = args.crowdhuman_train_split

        crowdhuman_dataset = build_crowdhuman('train', args_crowdhuman)

        if getattr(args, f"{image_set}_split") is None:
            return crowdhuman_dataset

    dataset = build_mot(image_set, args)

    if image_set == 'train':
        dataset = torch.utils.data.ConcatDataset(
            [dataset, crowdhuman_dataset])

    return dataset


def build_mot_coco_person(image_set, args):
    if image_set == 'train':
        args_coco_person = copy.deepcopy(args)
        args_coco_person.train_split = args.coco_person_train_split

        coco_person_dataset = build_coco('train', args_coco_person, 'person_keypoints')

        if getattr(args, f"{image_set}_split") is None:
            return coco_person_dataset

    dataset = build_mot(image_set, args)

    if image_set == 'train':
        dataset = torch.utils.data.ConcatDataset(
            [dataset, coco_person_dataset])

    return dataset
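The index arithmetic in `WeightedConcatDataset.sample_weight` can be sketched on its own: given the cumulative sizes of the concatenated datasets, `bisect_right` finds which dataset a global index falls into, and subtracting the previous cumulative size yields the local index. The sizes and the helper name `locate` are illustrative only.

```python
import bisect

cumulative_sizes = [100, 350]   # dataset 0 has 100 items, dataset 1 has 250

def locate(idx):
    """Map a global index to (dataset_idx, sample_idx within that dataset)."""
    dataset_idx = bisect.bisect_right(cumulative_sizes, idx)
    if dataset_idx == 0:
        sample_idx = idx
    else:
        sample_idx = idx - cumulative_sizes[dataset_idx - 1]
    return dataset_idx, sample_idx

print(locate(42))    # (0, 42)
print(locate(100))   # (1, 0) -- first item of the second dataset
```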


================================================
FILE: src/trackformer/datasets/panoptic_eval.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import json
import os

from ..util import misc as utils

try:
    from panopticapi.evaluation import pq_compute
except ImportError:
    pass


class PanopticEvaluator(object):
    def __init__(self, ann_file, ann_folder, output_dir="panoptic_eval"):
        self.gt_json = ann_file
        self.gt_folder = ann_folder
        if utils.is_main_process():
            if not os.path.exists(output_dir):
                os.mkdir(output_dir)
        self.output_dir = output_dir
        self.predictions = []

    def update(self, predictions):
        for p in predictions:
            with open(os.path.join(self.output_dir, p["file_name"]), "wb") as f:
                f.write(p.pop("png_string"))

        self.predictions += predictions

    def synchronize_between_processes(self):
        all_predictions = utils.all_gather(self.predictions)
        merged_predictions = []
        for p in all_predictions:
            merged_predictions += p
        self.predictions = merged_predictions

    def summarize(self):
        if utils.is_main_process():
            json_data = {"annotations": self.predictions}
            predictions_json = os.path.join(self.output_dir, "predictions.json")
            with open(predictions_json, "w") as f:
                f.write(json.dumps(json_data))
            return pq_compute(
                self.gt_json, predictions_json,
                gt_folder=self.gt_folder, pred_folder=self.output_dir)
        return None


================================================
FILE: src/trackformer/datasets/tracking/__init__.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Submodule interface.
"""
from .factory import TrackDatasetFactory


================================================
FILE: src/trackformer/datasets/tracking/demo_sequence.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Demo sequence dataset.
"""
import configparser
import csv
import os
from pathlib import Path
import os.path as osp
from argparse import Namespace
from typing import Optional, Tuple, List

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

from ..coco import make_coco_transforms
from ..transforms import Compose


class DemoSequence(Dataset):
    """DemoSequence dataset built from a directory of images.
    """

    def __init__(self, root_dir: str = 'data', img_transform: Namespace = None) -> None:
        """
        Args:
            root_dir (string): Directory containing the sequence images
            img_transform (Namespace): Image transform parameters
        """
        super().__init__()

        self._data_dir = Path(root_dir)
        assert self._data_dir.is_dir(), f'data_root_dir: {root_dir} does not exist.'

        self.transforms = Compose(make_coco_transforms('val', img_transform, overflow_boxes=True))

        self.data = self._sequence()
        self.no_gt = True

    def __len__(self) -> int:
        return len(self.data)

    def __str__(self) -> str:
        return self._data_dir.name

    def __getitem__(self, idx: int) -> dict:
        """Return the ith image converted to blob"""
        data = self.data[idx]
        img = Image.open(data['im_path']).convert("RGB")
        width_orig, height_orig = img.size

        img, _ = self.transforms(img)
        width, height = img.size(2), img.size(1)

        sample = {}
        sample['img'] = img
        sample['img_path'] = data['im_path']
        sample['dets'] = torch.tensor([])
        sample['orig_size'] = torch.as_tensor([int(height_orig), int(width_orig)])
        sample['size'] = torch.as_tensor([int(height), int(width)])

        return sample

    def _sequence(self) -> List[dict]:
        total = []
        for filename in sorted(os.listdir(self._data_dir)):
            extension = os.path.splitext(filename)[1]
            if extension in ['.png', '.jpg']:
                total.append({'im_path': osp.join(self._data_dir, filename)})

        return total

    def load_results(self, results_dir: str) -> dict:
        return {}

    def write_results(self, results: dict, output_dir: str) -> None:
        """Write the tracks in the format for MOT16/MOT17 submission

        results: dictionary with one dictionary for every track with
                 {..., i: np.array([x1, y1, x2, y2]), ...} at key track_num

        Each file contains these lines:
        <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>
        """

        # format_str = "{}, -1, {}, {}, {}, {}, {}, -1, -1, -1"
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)

        result_file_path = osp.join(output_dir, self._data_dir.name)

        with open(result_file_path, "w") as r_file:
            writer = csv.writer(r_file, delimiter=',')

            for i, track in results.items():
                for frame, data in track.items():
                    x1 = data['bbox'][0]
                    y1 = data['bbox'][1]
                    x2 = data['bbox'][2]
                    y2 = data['bbox'][3]

                    writer.writerow([
                        frame + 1,
                        i + 1,
                        x1 + 1,
                        y1 + 1,
                        x2 - x1 + 1,
                        y2 - y1 + 1,
                        -1, -1, -1, -1])


================================================
FILE: src/trackformer/datasets/tracking/factory.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Factory of tracking datasets.
"""
from typing import Union

from torch.utils.data import ConcatDataset

from .demo_sequence import DemoSequence
from .mot_wrapper import MOT17Wrapper, MOT20Wrapper, MOTS20Wrapper

DATASETS = {}

# Fill all available datasets, change here to modify / add new datasets.
for split in ['TRAIN', 'TEST', 'ALL', '01', '02', '03', '04', '05',
              '06', '07', '08', '09', '10', '11', '12', '13', '14']:
    for dets in ['DPM', 'FRCNN', 'SDP', 'ALL']:
        name = f'MOT17-{split}'
        if dets:
            name = f"{name}-{dets}"
        DATASETS[name] = (
            lambda kwargs, split=split, dets=dets: MOT17Wrapper(split, dets, **kwargs))


for split in ['TRAIN', 'TEST', 'ALL', '01', '02', '03', '04', '05',
              '06', '07', '08']:
    name = f'MOT20-{split}'
    DATASETS[name] = (
        lambda kwargs, split=split: MOT20Wrapper(split, **kwargs))


for split in ['TRAIN', 'TEST', 'ALL', '01', '02', '05', '06', '07', '09', '11', '12']:
    name = f'MOTS20-{split}'
    DATASETS[name] = (
        lambda kwargs, split=split: MOTS20Wrapper(split, **kwargs))

DATASETS['DEMO'] = (lambda kwargs: [DemoSequence(**kwargs), ])


class TrackDatasetFactory:
    """A central class to manage the individual dataset loaders.

    This class contains the datasets. Once initialized the individual parts (e.g. sequences)
    can be accessed.
    """

    def __init__(self, datasets: Union[str, list], **kwargs) -> None:
        """Initialize the corresponding dataloader.

        Keyword arguments:
        datasets --  the name of the dataset or list of dataset names
        kwargs -- arguments used to call the datasets
        """
        if isinstance(datasets, str):
            datasets = [datasets]

        self._data = None
        for dataset in datasets:
            assert dataset in DATASETS, f"[!] Dataset not found: {dataset}"

            if self._data is None:
                self._data = DATASETS[dataset](kwargs)
            else:
                self._data = ConcatDataset([self._data, DATASETS[dataset](kwargs)])

    def __len__(self) -> int:
        return len(self._data)

    def __getitem__(self, idx: int):
        return self._data[idx]


================================================
FILE: src/trackformer/datasets/tracking/mot17_sequence.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
MOT17 sequence dataset.
"""
import configparser
import csv
import os
import os.path as osp
from argparse import Namespace
from typing import Optional, Tuple, List

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

from ..coco import make_coco_transforms
from ..transforms import Compose


class MOT17Sequence(Dataset):
    """Multiple Object Tracking (MOT17) Dataset.

    This dataset is designed to handle a single sequence; to handle
    multiple sequences, inherit from this class.
    """
    data_folder = 'MOT17'

    def __init__(self, root_dir: str = 'data', seq_name: Optional[str] = None,
                 dets: str = '', vis_threshold: float = 0.0, img_transform: Namespace = None) -> None:
        """
        Args:
            seq_name (string): Sequence to take
            vis_threshold (float): Threshold of visibility of persons
                                   above which they are selected
        """
        super().__init__()

        self._seq_name = seq_name
        self._dets = dets
        self._vis_threshold = vis_threshold

        self._data_dir = osp.join(root_dir, self.data_folder)

        self._train_folders = os.listdir(os.path.join(self._data_dir, 'train'))
        self._test_folders = os.listdir(os.path.join(self._data_dir, 'test'))

        self.transforms = Compose(make_coco_transforms('val', img_transform, overflow_boxes=True))

        self.data = []
        self.no_gt = True
        if seq_name is not None:
            full_seq_name = seq_name
            if self._dets is not None:
                full_seq_name = f"{seq_name}-{dets}"
            assert full_seq_name in self._train_folders or full_seq_name in self._test_folders, \
                'Image set does not exist: {}'.format(full_seq_name)

            self.data = self._sequence()
            self.no_gt = not osp.exists(self.get_gt_file_path())

    def __len__(self) -> int:
        return len(self.data)

    def __getitem__(self, idx: int) -> dict:
        """Return the ith image converted to blob"""
        data = self.data[idx]
        img = Image.open(data['im_path']).convert("RGB")
        width_orig, height_orig = img.size

        img, _ = self.transforms(img)
        width, height = img.size(2), img.size(1)

        sample = {}
        sample['img'] = img
        sample['dets'] = torch.tensor([det[:4] for det in data['dets']])
        sample['img_path'] = data['im_path']
        sample['gt'] = data['gt']
        sample['vis'] = data['vis']
        sample['orig_size'] = torch.as_tensor([int(height_orig), int(width_orig)])
        sample['size'] = torch.as_tensor([int(height), int(width)])

        return sample

    def _sequence(self) -> List[dict]:
        # public detections
        dets = {i: [] for i in range(1, self.seq_length + 1)}
        det_file = self.get_det_file_path()

        if osp.exists(det_file):
            with open(det_file, "r") as inf:
                reader = csv.reader(inf, delimiter=',')
                for row in reader:
                    x1 = float(row[2]) - 1
                    y1 = float(row[3]) - 1
                    # the -1 accounts for inclusive widths (a width of 1 means x1 == x2)
                    x2 = x1 + float(row[4]) - 1
                    y2 = y1 + float(row[5]) - 1
                    score = float(row[6])
                    bbox = np.array([x1, y1, x2, y2, score], dtype=np.float32)
                    dets[int(float(row[0]))].append(bbox)

        # accumulate total
        img_dir = osp.join(
            self.get_seq_path(),
            self.config['Sequence']['imDir'])

        boxes, visibility = self.get_track_boxes_and_visbility()

        total = [
            {'gt': boxes[i],
             'im_path': osp.join(img_dir, f"{i:06d}.jpg"),
             'vis': visibility[i],
             'dets': dets[i]}
            for i in range(1, self.seq_length + 1)]

        return total

    def get_track_boxes_and_visbility(self) -> Tuple[dict, dict]:
        """ Load ground truth boxes and their visibility."""
        boxes = {}
        visibility = {}

        for i in range(1, self.seq_length + 1):
            boxes[i] = {}
            visibility[i] = {}

        gt_file = self.get_gt_file_path()
        if not osp.exists(gt_file):
            return boxes, visibility

        with open(gt_file, "r") as inf:
            reader = csv.reader(inf, delimiter=',')
            for row in reader:
                # class person, certainty 1
                if int(row[6]) == 1 and int(row[7]) == 1 and float(row[8]) >= self._vis_threshold:
                    # Make pixel indexes 0-based (MOT annotations are 1-based)
                    x1 = int(row[2]) - 1
                    y1 = int(row[3]) - 1
                    # the -1 accounts for inclusive widths (a width of 1 means x1 == x2)
                    x2 = x1 + int(row[4]) - 1
                    y2 = y1 + int(row[5]) - 1
                    bbox = np.array([x1, y1, x2, y2], dtype=np.float32)

                    frame_id = int(row[0])
                    track_id = int(row[1])

                    boxes[frame_id][track_id] = bbox
                    visibility[frame_id][track_id] = float(row[8])

        return boxes, visibility

    def get_seq_path(self) -> str:
        """ Return directory path of sequence. """
        full_seq_name = self._seq_name
        if self._dets is not None:
            full_seq_name = f"{self._seq_name}-{self._dets}"

        if full_seq_name in self._train_folders:
            return osp.join(self._data_dir, 'train', full_seq_name)
        else:
            return osp.join(self._data_dir, 'test', full_seq_name)

    def get_config_file_path(self) -> str:
        """ Return config file of sequence. """
        return osp.join(self.get_seq_path(), 'seqinfo.ini')

    def get_gt_file_path(self) -> str:
        """ Return ground truth file of sequence. """
        return osp.join(self.get_seq_path(), 'gt', 'gt.txt')

    def get_det_file_path(self) -> str:
        """ Return public detections file of sequence. """
        if self._dets is None:
            return ""

        return osp.join(self.get_seq_path(), 'det', 'det.txt')

    @property
    def config(self) -> dict:
        """ Return config of sequence. """
        config_file = self.get_config_file_path()

        assert osp.exists(config_file), \
            f'Config file does not exist: {config_file}'

        config = configparser.ConfigParser()
        config.read(config_file)
        return config

    @property
    def seq_length(self) -> int:
        """ Return sequence length, i.e., number of frames. """
        return int(self.config['Sequence']['seqLength'])

    def __str__(self) -> str:
        return f"{self._seq_name}-{self._dets}"

    @property
    def results_file_name(self) -> str:
        """ Generate file name of results file. """
        assert self._seq_name is not None, "[!] No seq_name, probably using combined database"

        if self._dets is None:
            return f"{self._seq_name}.txt"

        return f"{self}.txt"

    def write_results(self, results: dict, output_dir: str) -> None:
        """Write the tracks in the format for MOT16/MOT17 sumbission

        results: dictionary with 1 dictionary for every track with
                 {..., i:np.array([x1,y1,x2,y2]), ...} at key track_num

        Each file contains these lines:
        <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>
        """

        # format_str = "{}, -1, {}, {}, {}, {}, {}, -1, -1, -1"
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)

        result_file_path = osp.join(output_dir, self.results_file_name)

        with open(result_file_path, "w") as r_file:
            writer = csv.writer(r_file, delimiter=',')

            for i, track in results.items():
                for frame, data in track.items():
                    x1 = data['bbox'][0]
                    y1 = data['bbox'][1]
                    x2 = data['bbox'][2]
                    y2 = data['bbox'][3]

                    writer.writerow([
                        frame + 1,
                        i + 1,
                        x1 + 1,
                        y1 + 1,
                        x2 - x1 + 1,
                        y2 - y1 + 1,
                        -1, -1, -1, -1])

    def load_results(self, results_dir: str) -> dict:
        results = {}
        if results_dir is None:
            return results

        file_path = osp.join(results_dir, self.results_file_name)

        if not os.path.isfile(file_path):
            return results

        with open(file_path, "r") as file:
            csv_reader = csv.reader(file, delimiter=',')

            for row in csv_reader:
                frame_id, track_id = int(row[0]) - 1, int(row[1]) - 1

                if track_id not in results:
                    results[track_id] = {}

                x1 = float(row[2]) - 1
                y1 = float(row[3]) - 1
                x2 = float(row[4]) - 1 + x1
                y2 = float(row[5]) - 1 + y1

                results[track_id][frame_id] = {}
                results[track_id][frame_id]['bbox'] = [x1, y1, x2, y2]
                results[track_id][frame_id]['score'] = 1.0

        return results



================================================
FILE: src/trackformer/datasets/tracking/mot20_sequence.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
MOT20 sequence dataset.
"""

from .mot17_sequence import MOT17Sequence


class MOT20Sequence(MOT17Sequence):
    """Multiple Object Tracking (MOT20) Dataset.

    This dataloader handles only a single sequence; to handle
    multiple sequences, inherit from this class.
    """
    data_folder = 'MOT20'


================================================
FILE: src/trackformer/datasets/tracking/mot_wrapper.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
MOT wrapper which combines sequences to a dataset.
"""
from torch.utils.data import Dataset

from .mot17_sequence import MOT17Sequence
from .mot20_sequence import MOT20Sequence
from .mots20_sequence import MOTS20Sequence


class MOT17Wrapper(Dataset):
    """A Wrapper for the MOT_Sequence class to return multiple sequences."""

    def __init__(self, split: str, dets: str, **kwargs) -> None:
        """Initliazes all subset of the dataset.

        Keyword arguments:
        split -- the split of the dataset to use
        kwargs -- kwargs for the MOT17Sequence dataset
        """
        train_sequences = [
            'MOT17-02', 'MOT17-04', 'MOT17-05', 'MOT17-09',
            'MOT17-10', 'MOT17-11', 'MOT17-13']
        test_sequences = [
            'MOT17-01', 'MOT17-03', 'MOT17-06', 'MOT17-07',
            'MOT17-08', 'MOT17-12', 'MOT17-14']

        if split == "TRAIN":
            sequences = train_sequences
        elif split == "TEST":
            sequences = test_sequences
        elif split == "ALL":
            sequences = train_sequences + test_sequences
            sequences = sorted(sequences)
        elif f"MOT17-{split}" in train_sequences + test_sequences:
            sequences = [f"MOT17-{split}"]
        else:
            raise NotImplementedError("MOT17 split not available.")

        self._data = []
        for seq in sequences:
            if dets == 'ALL':
                self._data.append(MOT17Sequence(seq_name=seq, dets='DPM', **kwargs))
                self._data.append(MOT17Sequence(seq_name=seq, dets='FRCNN', **kwargs))
                self._data.append(MOT17Sequence(seq_name=seq, dets='SDP', **kwargs))
            else:
                self._data.append(MOT17Sequence(seq_name=seq, dets=dets, **kwargs))

    def __len__(self) -> int:
        return len(self._data)

    def __getitem__(self, idx: int):
        return self._data[idx]


class MOT20Wrapper(Dataset):
    """A Wrapper for the MOT_Sequence class to return multiple sequences."""

    def __init__(self, split: str, **kwargs) -> None:
        """Initliazes all subset of the dataset.

        Keyword arguments:
        split -- the split of the dataset to use
        kwargs -- kwargs for the MOT20Sequence dataset
        """
        train_sequences = ['MOT20-01', 'MOT20-02', 'MOT20-03', 'MOT20-05',]
        test_sequences = ['MOT20-04', 'MOT20-06', 'MOT20-07', 'MOT20-08',]

        if split == "TRAIN":
            sequences = train_sequences
        elif split == "TEST":
            sequences = test_sequences
        elif split == "ALL":
            sequences = train_sequences + test_sequences
            sequences = sorted(sequences)
        elif f"MOT20-{split}" in train_sequences + test_sequences:
            sequences = [f"MOT20-{split}"]
        else:
            raise NotImplementedError("MOT20 split not available.")

        self._data = []
        for seq in sequences:
            self._data.append(MOT20Sequence(seq_name=seq, dets=None, **kwargs))

    def __len__(self) -> int:
        return len(self._data)

    def __getitem__(self, idx: int):
        return self._data[idx]


class MOTS20Wrapper(MOT17Wrapper):
    """A Wrapper for the MOT_Sequence class to return multiple sequences."""

    def __init__(self, split: str, **kwargs) -> None:
        """Initliazes all subset of the dataset.

        Keyword arguments:
        split -- the split of the dataset to use
        kwargs -- kwargs for the MOTS20Sequence dataset
        """
        train_sequences = ['MOTS20-02', 'MOTS20-05', 'MOTS20-09', 'MOTS20-11']
        test_sequences = ['MOTS20-01', 'MOTS20-06', 'MOTS20-07', 'MOTS20-12']

        if split == "TRAIN":
            sequences = train_sequences
        elif split == "TEST":
            sequences = test_sequences
        elif split == "ALL":
            sequences = train_sequences + test_sequences
            sequences = sorted(sequences)
        elif f"MOTS20-{split}" in train_sequences + test_sequences:
            sequences = [f"MOTS20-{split}"]
        else:
            raise NotImplementedError("MOTS20 split not available.")

        self._data = []
        for seq in sequences:
            self._data.append(MOTS20Sequence(seq_name=seq, **kwargs))
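
All three wrappers resolve their `split` argument with the same pattern: the named train/test splits, the sorted union of both, or a single-sequence suffix such as `"02"`. A standalone sketch of that resolution logic (the helper name is illustrative, not from the repository):

```python
def select_sequences(split, train_seqs, test_seqs, prefix):
    """Resolve a split string ('TRAIN', 'TEST', 'ALL', or a sequence
    suffix like '02') to the list of sequence names it denotes."""
    if split == "TRAIN":
        return train_seqs
    if split == "TEST":
        return test_seqs
    if split == "ALL":
        # combined splits are returned in sorted order
        return sorted(train_seqs + test_seqs)
    if f"{prefix}-{split}" in train_seqs + test_seqs:
        return [f"{prefix}-{split}"]
    raise NotImplementedError(f"{prefix} split not available.")
```

Factoring the pattern out like this would let the three wrappers differ only in their sequence lists and dataset class.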


================================================
FILE: src/trackformer/datasets/tracking/mots20_sequence.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
MOTS20 sequence dataset.
"""
import csv
import os
import os.path as osp
from argparse import Namespace
from typing import Optional, Tuple

import numpy as np
import pycocotools.mask as rletools

from .mot17_sequence import MOT17Sequence


class MOTS20Sequence(MOT17Sequence):
    """Multiple Object and Segmentation Tracking (MOTS20) Dataset.

    This dataloader handles only a single sequence; to handle
    multiple sequences, inherit from this class.
    """
    data_folder = 'MOTS20'

    def __init__(self, root_dir: str = 'data', seq_name: Optional[str] = None,
                 vis_threshold: float = 0.0, img_transform: Namespace = None) -> None:
        """
        Args:
            seq_name (string): Sequence to take
            vis_threshold (float): Threshold of visibility of persons
                                   above which they are selected
        """
        super().__init__(root_dir, seq_name, None, vis_threshold, img_transform)

    def get_track_boxes_and_visbility(self) -> Tuple[dict, dict]:
        boxes = {}
        visibility = {}

        for i in range(1, self.seq_length + 1):
            boxes[i] = {}
            visibility[i] = {}

        gt_file = self.get_gt_file_path()
        if not osp.exists(gt_file):
            return boxes, visibility

        mask_objects_per_frame = load_mots_gt(gt_file)
        for frame_id, mask_objects in mask_objects_per_frame.items():
            for mask_object in mask_objects:
                # class_id = 1 is car
                # class_id = 2 is pedestrian
                # class_id = 10 IGNORE
                if mask_object.class_id in [1, 10]:
                    continue

                bbox = rletools.toBbox(mask_object.mask)
                x1, y1, w, h = [int(c) for c in bbox]
                bbox = np.array([x1, y1, x1 + w, y1 + h], dtype=np.float32)

                # area = bbox[2] * bbox[3]
                # image_id = img_file_name_to_id[f"{seq}_{frame_id:06d}.jpg"]

                # segmentation = {
                #     'size': mask_object.mask['size'],
                #     'counts': mask_object.mask['counts'].decode(encoding='UTF-8')}

                boxes[frame_id][mask_object.track_id] = bbox
                visibility[frame_id][mask_object.track_id] = 1.0

        return boxes, visibility

    def write_results(self, results: dict, output_dir: str) -> None:
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)

        result_file_path = osp.join(output_dir, f"{self._seq_name}.txt")

        with open(result_file_path, "w") as res_file:
            writer = csv.writer(res_file, delimiter=' ')
            for i, track in results.items():
                for frame, data in track.items():
                    mask = np.asfortranarray(data['mask'])
                    rle_mask = rletools.encode(mask)

                    writer.writerow([
                        frame + 1,
                        i + 1,
                        2,  # class pedestrian
                        mask.shape[0],
                        mask.shape[1],
                        rle_mask['counts'].decode(encoding='UTF-8')])

    def load_results(self, results_dir: str) -> dict:
        results = {}

        if results_dir is None:
            return results

        file_path = osp.join(results_dir, self.results_file_name)

        if not os.path.isfile(file_path):
            return results

        mask_objects_per_frame = load_mots_gt(file_path)

        for frame_id, mask_objects in mask_objects_per_frame.items():
            for mask_object in mask_objects:
                # class_id = 1 is car
                # class_id = 2 is pedestrian
                # class_id = 10 IGNORE
                if mask_object.class_id in [1, 10]:
                    continue

                bbox = rletools.toBbox(mask_object.mask)
                x1, y1, w, h = [int(c) for c in bbox]
                bbox = np.array([x1, y1, x1 + w, y1 + h], dtype=np.float32)

                # area = bbox[2] * bbox[3]
                # image_id = img_file_name_to_id[f"{seq}_{frame_id:06d}.jpg"]

                # segmentation = {
                #     'size': mask_object.mask['size'],
                #     'counts': mask_object.mask['counts'].decode(encoding='UTF-8')}

                track_id = mask_object.track_id - 1
                if track_id not in results:
                    results[track_id] = {}

                results[track_id][frame_id - 1] = {}
                results[track_id][frame_id - 1]['mask'] = rletools.decode(mask_object.mask)
                results[track_id][frame_id - 1]['bbox'] = bbox.tolist()
                results[track_id][frame_id - 1]['score'] = 1.0

        return results

    def __str__(self) -> str:
        return self._seq_name


class SegmentedObject:
    """
    Helper class for segmentation objects.
    """
    def __init__(self, mask: dict, class_id: int, track_id: int) -> None:
        self.mask = mask
        self.class_id = class_id
        self.track_id = track_id


def load_mots_gt(path: str) -> dict:
    """Load MOTS ground truth from path."""
    objects_per_frame = {}
    track_ids_per_frame = {}  # Check that no frame contains two objects with same id
    combined_mask_per_frame = {}  # Check that no frame contains overlapping masks

    with open(path, "r") as gt_file:
        for line in gt_file:
            line = line.strip()
            fields = line.split(" ")

            frame = int(fields[0])
            if frame not in objects_per_frame:
                objects_per_frame[frame] = []
            if frame not in track_ids_per_frame:
                track_ids_per_frame[frame] = set()
            if int(fields[1]) in track_ids_per_frame[frame]:
                assert False, f"Multiple objects with track id {fields[1]} in frame {fields[0]}"
            else:
                track_ids_per_frame[frame].add(int(fields[1]))

            class_id = int(fields[2])
            if class_id not in (1, 2, 10):
                assert False, "Unknown object class " + fields[2]

            mask = {
                'size': [int(fields[3]), int(fields[4])],
                'counts': fields[5].encode(encoding='UTF-8')}
            if frame not in combined_mask_per_frame:
                combined_mask_per_frame[frame] = mask
            elif rletools.area(rletools.merge([
                    combined_mask_per_frame[frame], mask],
                    intersect=True)):
                assert False, "Objects with overlapping masks in frame " + fields[0]
            else:
                combined_mask_per_frame[frame] = rletools.merge(
                    [combined_mask_per_frame[frame], mask],
                    intersect=False)
            objects_per_frame[frame].append(SegmentedObject(
                mask,
                class_id,
                int(fields[1])
            ))

    return objects_per_frame
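
`load_mots_gt` parses each whitespace-separated line into a frame number, track id, class id, and a COCO-style RLE mask dict (`size` is `[height, width]`, `counts` is the RLE string as bytes). A minimal sketch of the per-line parsing, without the duplicate-id and mask-overlap checks (the function name and the sample RLE string in the usage are illustrative):

```python
def parse_mots_line(line):
    """Split one MOTS GT/result line into its typed fields."""
    fields = line.strip().split(" ")
    return {
        'frame': int(fields[0]),
        'track_id': int(fields[1]),
        'class_id': int(fields[2]),
        # RLE dict in the layout pycocotools expects
        'mask': {
            'size': [int(fields[3]), int(fields[4])],
            'counts': fields[5].encode(encoding='UTF-8'),
        },
    }
```

Note that MOTS track ids encode the class, e.g. pedestrian tracks start at 2000, which is why `load_results` above subtracts 1 only after parsing.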


================================================
FILE: src/trackformer/datasets/transforms.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Transforms and data augmentation for both image + bbox.
"""
import random
from typing import Union

import PIL
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F

from ..util.box_ops import box_xyxy_to_cxcywh
from ..util.misc import interpolate


def crop(image, target, region, overflow_boxes=False):
    i, j, h, w = region
    target = target.copy()

    if isinstance(image, torch.Tensor):
        # region is (top, left, height, width), i.e., i=top, j=left
        cropped_image = image[:, i:i + h, j:j + w]
    else:
        cropped_image = F.crop(image, *region)

    # should we do something wrt the original size?
    target["size"] = torch.tensor([h, w])

    fields = ["labels", "area", "iscrowd", "ignore", "track_ids"]

    orig_area = target["area"]

    if "boxes" in target:
        boxes = target["boxes"]
        max_size = torch.as_tensor([w, h], dtype=torch.float32)
        cropped_boxes = boxes - torch.as_tensor([j, i, j, i])

        if overflow_boxes:
            # use a separate loop index to avoid shadowing the crop
            # offset i, which is needed below for cropping the masks
            for box_idx, box in enumerate(cropped_boxes):
                l, t, r, b = box
                if l < 0 and r < 0:
                    l = r = 0
                if l > w and r > w:
                    l = r = w
                if t < 0 and b < 0:
                    t = b = 0
                if t > h and b > h:
                    t = b = h
                cropped_boxes[box_idx] = torch.tensor([l, t, r, b], dtype=box.dtype)
            cropped_boxes = cropped_boxes.reshape(-1, 2, 2)
        else:
            cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)
            cropped_boxes = cropped_boxes.clamp(min=0)

        area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)
        target["boxes"] = cropped_boxes.reshape(-1, 4)
        target["area"] = area
        fields.append("boxes")

    if "masks" in target:
        # FIXME should we update the area here if there are no boxes?
        target['masks'] = target['masks'][:, i:i + h, j:j + w]
        fields.append("masks")

    # remove elements for which the boxes or masks that have zero area
    if "boxes" in target or "masks" in target:
        # favor boxes selection when defining which elements to keep
        # this is compatible with previous implementation
        if "boxes" in target:
            cropped_boxes = target['boxes'].reshape(-1, 2, 2)
            keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)

            # new area must be at least % of original area
            # keep = target["area"] >= orig_area * 0.2
        else:
            keep = target['masks'].flatten(1).any(1)

        for field in fields:
            if field in target:
                target[field] = target[field][keep]

    return cropped_image, target


def hflip(image, target):
    if isinstance(image, torch.Tensor):
        flipped_image = image.flip(-1)
        _, width, _ = image.size()
    else:
        flipped_image = F.hflip(image)
        width, _ = image.size

    target = target.copy()

    if "boxes" in target:
        boxes = target["boxes"]
        boxes = boxes[:, [2, 1, 0, 3]] \
            * torch.as_tensor([-1, 1, -1, 1]) \
            + torch.as_tensor([width, 0, width, 0])
        target["boxes"] = boxes

    if "boxes_ignore" in target:
        boxes = target["boxes_ignore"]
        boxes = boxes[:, [2, 1, 0, 3]] \
            * torch.as_tensor([-1, 1, -1, 1]) \
            + torch.as_tensor([width, 0, width, 0])
        target["boxes_ignore"] = boxes

    if "masks" in target:
        target['masks'] = target['masks'].flip(-1)

    return flipped_image, target


def resize(image, target, size, max_size=None):
    # size can be min_size (scalar) or (w, h) tuple

    def get_size_with_aspect_ratio(image_size, size, max_size=None):
        w, h = image_size
        if max_size is not None:
            min_original_size = float(min((w, h)))
            max_original_size = float(max((w, h)))
            if max_original_size / min_original_size * size > max_size:
                size = int(round(max_size * min_original_size / max_original_size))

        if (w <= h and w == size) or (h <= w and h == size):
            return (h, w)

        if w < h:
            ow = size
            oh = int(size * h / w)
        else:
            oh = size
            ow = int(size * w / h)

        return (oh, ow)

    def get_size(image_size, size, max_size=None):
        if isinstance(size, (list, tuple)):
            return size[::-1]
        else:
            return get_size_with_aspect_ratio(image_size, size, max_size)

    size = get_size(image.size, size, max_size)
    rescaled_image = F.resize(image, size)

    if target is None:
        return rescaled_image, None

    ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))
    ratio_width, ratio_height = ratios

    target = target.copy()
    if "boxes" in target:
        boxes = target["boxes"]
        scaled_boxes = boxes \
            * torch.as_tensor([ratio_width, ratio_height, ratio_width, ratio_height])
        target["boxes"] = scaled_boxes

    if "area" in target:
        area = target["area"]
        scaled_area = area * (ratio_width * ratio_height)
        target["area"] = scaled_area

    h, w = size
    target["size"] = torch.tensor([h, w])

    if "masks" in target:
        target['masks'] = interpolate(
            target['masks'][:, None].float(), size, mode="nearest")[:, 0] > 0.5

    return rescaled_image, target


def pad(image, target, padding):
    # pad_left, pad_top, pad_right, pad_bottom
    padded_image = F.pad(image, padding)
    if target is None:
        return padded_image, None
    target = target.copy()
    # should we do something wrt the original size?
    w, h = padded_image.size

    if "boxes" in target:
        # correct xyxy from left and right paddings
        target["boxes"] += torch.tensor(
            [padding[0], padding[1], padding[0], padding[1]])

    target["size"] = torch.tensor([h, w])
    if "masks" in target:
        # padding_left, padding_right, padding_top, padding_bottom
        target['masks'] = torch.nn.functional.pad(
            target['masks'],
            (padding[0], padding[2], padding[1], padding[3]))
    return padded_image, target


class RandomCrop:
    def __init__(self, size, overflow_boxes=False):
        # in hxw
        self.size = size
        self.overflow_boxes = overflow_boxes

    def __call__(self, img, target):
        region = T.RandomCrop.get_params(img, self.size)
        return crop(img, target, region, self.overflow_boxes)


class RandomSizeCrop:
    def __init__(self,
                 min_size: Union[tuple, list, int],
                 max_size: Union[tuple, list, int] = None,
                 overflow_boxes: bool = False):
        if isinstance(min_size, int):
            min_size = (min_size, min_size)
        if isinstance(max_size, int):
            max_size = (max_size, max_size)

        self.min_size = min_size
        self.max_size = max_size
        self.overflow_boxes = overflow_boxes

    def __call__(self, img: PIL.Image.Image, target: dict):
        if self.max_size is None:
            w = random.randint(min(self.min_size[0], img.width), img.width)
            h = random.randint(min(self.min_size[1], img.height), img.height)
        else:
            w = random.randint(
                min(self.min_size[0], img.width),
                min(img.width, self.max_size[0]))
            h = random.randint(
                min(self.min_size[1], img.height),
                min(img.height, self.max_size[1]))

        region = T.RandomCrop.get_params(img, [h, w])
        return crop(img, target, region, self.overflow_boxes)


class CenterCrop:
    def __init__(self, size, overflow_boxes=False):
        self.size = size
        self.overflow_boxes = overflow_boxes

    def __call__(self, img, target):
        image_width, image_height = img.size
        crop_height, crop_width = self.size
        crop_top = int(round((image_height - crop_height) / 2.))
        crop_left = int(round((image_width - crop_width) / 2.))
        return crop(img, target, (crop_top, crop_left, crop_height, crop_width), self.overflow_boxes)


class RandomHorizontalFlip:
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, img, target):
        if random.random() < self.p:
            return hflip(img, target)
        return img, target


class RepeatUntilMaxObjects:
    def __init__(self, transforms, num_max_objects):
        self._num_max_objects = num_max_objects
        self._transforms = transforms

    def __call__(self, img, target):
        num_objects = None
        while num_objects is None or num_objects > self._num_max_objects:
            img_trans, target_trans = self._transforms(img, target)
            num_objects = len(target_trans['boxes'])
        return img_trans, target_trans


class RandomResize:
    def __init__(self, sizes, max_size=None):
        assert isinstance(sizes, (list, tuple))
        self.sizes = sizes
        self.max_size = max_size

    def __call__(self, img, target=None):
        size = random.choice(self.sizes)
        return resize(img, target, size, self.max_size)


class RandomResizeTargets:
    def __init__(self, scale=0.5):
        self.scale = scale

    def __call__(self, img, target=None):
        img = F.to_tensor(img)
        img_c, img_w, img_h = img.shape

        rescaled_boxes = []
        rescaled_box_images = []
        for box in target['boxes']:
            y1, x1, y2, x2 = box.int().tolist()
            w = x2 - x1
            h = y2 - y1

            box_img = img[:, x1:x2, y1:y2]
            random_scale = random.uniform(0.5, 2.0)
            scaled_width = int(random_scale * w)
            scaled_height = int(random_scale * h)

            box_img = F.to_pil_image(box_img)
            rescaled_box_image = F.resize(
                box_img,
                (scaled_width, scaled_height))
            rescaled_box_images.append(F.to_tensor(rescaled_box_image))
            rescaled_boxes.append([y1, x1, y1 + scaled_height, x1 + scaled_width])

        for box in target['boxes']:
            y1, x1, y2, x2 = box.int().tolist()
            w = x2 - x1
            h = y2 - y1

            erase_value = torch.empty(
                [img_c, w, h],
                dtype=torch.float32).normal_()

            img = F.erase(
                img, x1, y1, w, h, erase_value, True)

        for box, rescaled_box_image in zip(target['boxes'], rescaled_box_images):
            y1, x1, y2, x2 = box.int().tolist()
            w = x2 - x1
            h = y2 - y1
            _, scaled_width, scaled_height = rescaled_box_image.shape

            rescaled_box_image = rescaled_box_image[
                :,
                :scaled_width - max(x1 + scaled_width - img_w, 0),
                :scaled_height - max(y1 + scaled_height - img_h, 0)]

            img[:, x1:x1 + scaled_width, y1:y1 + scaled_height] = rescaled_box_image

        target['boxes'] = torch.tensor(rescaled_boxes).float()
        img = F.to_pil_image(img)
        return img, target


class RandomPad:
    def __init__(self, max_size):
        if isinstance(max_size, int):
            max_size = (max_size, max_size)

        self.max_size = max_size

    def __call__(self, img, target):
        w, h = img.size
        pad_width = random.randint(0, max(self.max_size[0] - w, 0))
        pad_height = random.randint(0, max(self.max_size[1] - h, 0))

        pad_left = random.randint(0, pad_width)
        pad_right = pad_width - pad_left
        pad_top = random.randint(0, pad_height)
        pad_bottom = pad_height - pad_top

        padding = (pad_left, pad_top, pad_right, pad_bottom)

        return pad(img, target, padding)


class RandomSelect:
    """
    Randomly selects between transforms1 and transforms2,
    with probability p for transforms1 and (1 - p) for transforms2
    """
    def __init__(self, transforms1, transforms2, p=0.5):
        self.transforms1 = transforms1
        self.transforms2 = transforms2
        self.p = p

    def __call__(self, img, target):
        if random.random() < self.p:
            return self.transforms1(img, target)
        return self.transforms2(img, target)


class ToTensor:
    def __call__(self, img, target=None):
        return F.to_tensor(img), target


class RandomErasing:

    def __init__(self, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=0, inplace=False):
        self.eraser = T.RandomErasing()
        self.p = p
        self.scale = scale
        self.ratio = ratio
        self.value = value
        self.inplace = inplace

    def __call__(self, img, target):
        if random.uniform(0, 1) < self.p:
            img = F.to_tensor(img)

            x, y, h, w, v = self.eraser.get_params(
                img, scale=self.scale, ratio=self.ratio, value=self.value)

            img = F.erase(img, x, y, h, w, v, self.inplace)
            img = F.to_pil_image(img)

            # target
            fields = ['boxes', "labels", "area", "iscrowd", "ignore", "track_ids"]

            if 'boxes' in target:
                erased_box = torch.tensor([[y, x, y + w, x + h]]).float()

                lt = torch.max(erased_box[:, None, :2], target['boxes'][:, :2])  # [N,M,2]
                rb = torch.min(erased_box[:, None, 2:], target['boxes'][:, 2:])  # [N,M,2]
                wh = (rb - lt).clamp(min=0)  # [N,M,2]
                inter = wh[:, :, 0] * wh[:, :, 1]  # [N,M]

                keep = inter[0] <= 0.7 * target['area']

                left = torch.logical_and(
                    target['boxes'][:, 0] < erased_box[:, 0],
                    target['boxes'][:, 2] > erased_box[:, 0])
                left = torch.logical_and(left, inter[0].bool())

                right = torch.logical_and(
                    target['boxes'][:, 0] < erased_box[:, 2],
                    target['boxes'][:, 2] > erased_box[:, 2])
                right = torch.logical_and(right, inter[0].bool())

                top = torch.logical_and(
                    target['boxes'][:, 1] < erased_box[:, 1],
                    target['boxes'][:, 3] > erased_box[:, 1])
                top = torch.logical_and(top, inter[0].bool())

                bottom = torch.logical_and(
                    target['boxes'][:, 1] < erased_box[:, 3],
                    target['boxes'][:, 3] > erased_box[:, 3])
                bottom = torch.logical_and(bottom, inter[0].bool())

                only_one_crop = (top.float() + bottom.float() + left.float() + right.float()) > 1
                left[only_one_crop] = False
                right[only_one_crop] = False
                top[only_one_crop] = False
                bottom[only_one_crop] = False

                target['boxes'][:, 2][left] = erased_box[:, 0]
                target['boxes'][:, 0][right] = erased_box[:, 2]
                target['boxes'][:, 3][top] = erased_box[:, 1]
                target['boxes'][:, 1][bottom] = erased_box[:, 3]

                for field in fields:
                    if field in target:
                        target[field] = target[field][keep]

        return img, target


class Normalize:
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, image, target=None):
        image = F.normalize(image, mean=self.mean, std=self.std)
        if target is None:
            return image, None
        target = target.copy()
        h, w = image.shape[-2:]
        if "boxes" in target:
            boxes = target["boxes"]
            boxes = box_xyxy_to_cxcywh(boxes)
            boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)
            target["boxes"] = boxes
        return image, target


class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target=None):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target

    def __repr__(self):
        format_string = self.__class__.__name__ + "("
        for t in self.transforms:
            format_string += "\n"
            format_string += "    {0}".format(t)
        format_string += "\n)"
        return format_string
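
Several of these transforms reduce to simple box arithmetic. `hflip`, for instance, maps an xyxy box to `[W - x2, y1, W - x1, y2]` for image width `W`, which is exactly what the `boxes[:, [2, 1, 0, 3]] * [-1, 1, -1, 1] + [W, 0, W, 0]` expression computes. A pure-Python sketch of the same arithmetic for a single box (function name is illustrative):

```python
def hflip_box_xyxy(box, width):
    """Mirror an [x1, y1, x2, y2] box around the vertical image axis.

    The new left edge is the mirrored old right edge (width - x2) and
    the new right edge is the mirrored old left edge (width - x1), so
    x1 <= x2 still holds after the flip; y coordinates are unchanged.
    """
    x1, y1, x2, y2 = box
    return [width - x2, y1, width - x1, y2]
```

Flipping twice is the identity, which is a cheap sanity check when modifying the vectorized version.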


================================================
FILE: src/trackformer/engine.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Train and eval functions used in main.py
"""
import logging
import math
import os
import sys
from typing import Iterable

import torch
from track import ex

from .datasets import get_coco_api_from_dataset
from .datasets.coco_eval import CocoEvaluator
from .datasets.panoptic_eval import PanopticEvaluator
from .models.detr_segmentation import DETRSegm
from .util import misc as utils
from .util.box_ops import box_iou
from .util.track_utils import evaluate_mot_accums
from .vis import vis_results


def make_results(outputs, targets, postprocessors, tracking, return_only_orig=True):
    target_sizes = torch.stack([t["size"] for t in targets], dim=0)
    orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0)

    # remove placeholder track queries
    # results_mask = None
    # if tracking:
    #     results_mask = [~t['track_queries_placeholder_mask'] for t in targets]
    #     for target, res_mask in zip(targets, results_mask):
    #         target['track_queries_mask'] = target['track_queries_mask'][res_mask]
    #         target['track_queries_fal_pos_mask'] = target['track_queries_fal_pos_mask'][res_mask]

    # results = None
    # if not return_only_orig:
    #     results = postprocessors['bbox'](outputs, target_sizes, results_mask)
    # results_orig = postprocessors['bbox'](outputs, orig_target_sizes, results_mask)

    # if 'segm' in postprocessors:
    #     results_orig = postprocessors['segm'](
    #         results_orig, outputs, orig_target_sizes, target_sizes, results_mask)
    #     if not return_only_orig:
    #         results = postprocessors['segm'](
    #             results, outputs, target_sizes, target_sizes, results_mask)

    results = None
    if not return_only_orig:
        results = postprocessors['bbox'](outputs, target_sizes)
    results_orig = postprocessors['bbox'](outputs, orig_target_sizes)

    if 'segm' in postprocessors:
        results_orig = postprocessors['segm'](
            results_orig, outputs, orig_target_sizes, target_sizes)
        if not return_only_orig:
            results = postprocessors['segm'](
                results, outputs, target_sizes, target_sizes)

    if results is None:
        return results_orig, results

    for i, result in enumerate(results):
        target = targets[i]
        target_size = target_sizes[i].unsqueeze(dim=0)

        result['target'] = {}
        result['boxes'] = result['boxes'].cpu()

        # revert boxes for visualization
        for key in ['boxes', 'track_query_boxes']:
            if key in target:
                target[key] = postprocessors['bbox'].process_boxes(
                    target[key], target_size)[0].cpu()

        if tracking and 'prev_target' in target:
            if 'prev_prev_target' in target:
                target['prev_prev_target']['boxes'] = postprocessors['bbox'].process_boxes(
                    target['prev_prev_target']['boxes'],
                    target['prev_prev_target']['size'].unsqueeze(dim=0))[0].cpu()

            target['prev_target']['boxes'] = postprocessors['bbox'].process_boxes(
                target['prev_target']['boxes'],
                target['prev_target']['size'].unsqueeze(dim=0))[0].cpu()

            if 'track_query_match_ids' in target and len(target['track_query_match_ids']):
                track_queries_iou, _ = box_iou(
                    target['boxes'][target['track_query_match_ids']],
                    result['boxes'])

                box_ids = [
                    box_id
                    for box_id, (is_track_query, is_false_pos_track_query)
                    in enumerate(zip(target['track_queries_mask'],
                                     target['track_queries_fal_pos_mask']))
                    if is_track_query and not is_false_pos_track_query]

                result['track_queries_with_id_iou'] = torch.diagonal(track_queries_iou[:, box_ids])

    return results_orig, results
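`make_results` pairs each matched track query with its ground-truth box and keeps the diagonal of the pairwise IoU matrix as `track_queries_with_id_iou`. A rough single-pair sketch of that IoU computation in plain Python (not the batched `box_iou` from `util.box_ops`):

```python
def box_iou_xyxy(box_a, box_b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # intersection rectangle (clamped to zero if the boxes do not overlap)
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Taking the diagonal of the full IoU matrix, as the code above does, reads off the IoU of each track query against its own matched ground-truth box rather than against every box.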


def train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module, postprocessors,
                    data_loader: Iterable, optimizer: torch.optim.Optimizer,
                    device: torch.device, epoch: int, visualizers: dict, args):

    vis_iter_metrics = None
    if visualizers:
        vis_iter_metrics = visualizers['iter_metrics']

    model.train()
    criterion.train()
    metric_logger = utils.MetricLogger(
        args.vis_and_log_interval,
        delimiter="  ",
        vis=vis_iter_metrics,
        debug=args.debug)
    metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
    metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))

    for i, (samples, targets) in enumerate(metric_logger.log_every(data_loader, epoch)):
        samples = samples.to(device)
        targets = [utils.nested_dict_to_device(t, device) for t in targets]

        # targets are returned from the forward call because
        # torch.nn.parallel.DistributedDataParallel only passes copies of its
        # inputs, so modifications made inside forward would otherwise be lost
        outputs, targets, *_ = model(samples, targets)

        loss_dict = criterion(outputs, targets)
        weight_dict = criterion.weight_dict
        losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict)

        # reduce losses over all GPUs for logging purposes
        loss_dict_reduced = utils.reduce_dict(loss_dict)
        loss_dict_reduced_unscaled = {
            f'{k}_unscaled': v for k, v in loss_dict_reduced.items()}
        loss_dict_reduced_scaled = {
            k: v * weight_dict[k] for k, v in loss_dict_reduced.items() if k in weight_dict}
        losses_reduced_scaled = sum(loss_dict_reduced_scaled.values())

        loss_value = losses_reduced_scaled.item()

        if not math.isfinite(loss_value):
            print(f"Loss is {loss_value}, stopping training")
            print(loss_dict_reduced)
            sys.exit(1)

        optimizer.zero_grad()
        losses.backward()
        if args.clip_max_norm > 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), args.clip_max_norm)
        optimizer.step()

        metric_logger.update(loss=loss_value,
                             **loss_dict_reduced_scaled,
                             **loss_dict_reduced_unscaled)
        metric_logger.update(class_error=loss_dict_reduced['class_error'])
        metric_logger.update(lr=optimizer.param_groups[0]["lr"],
                             lr_backbone=optimizer.param_groups[1]["lr"])

        if visualizers and (i == 0 or not i % args.vis_and_log_interval):
            _, results = make_results(
                outputs, targets, postprocessors, args.tracking, return_only_orig=False)

            vis_results(
                visualizers['example_results'],
                samples.unmasked_tensor(0),
                results[0],
                targets[0],
                args.tracking)

    # gather the stats from all processes
    metric_logger.synchronize_between_processes()
    print("Averaged stats:", metric_logger)

    return {k: meter.global_avg for k, meter in metric_logger.meters.items()}
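The total loss in `train_one_epoch` is a weighted sum over only those criterion outputs that also appear in `weight_dict`; unweighted entries such as `class_error` are logged but never optimized. A small sketch of that reduction, assuming plain floats in place of tensors:

```python
def weighted_total_loss(loss_dict, weight_dict):
    """Sum loss_dict[k] * weight_dict[k] over keys present in both dicts,
    mirroring the `losses = sum(...)` line in train_one_epoch."""
    return sum(loss_dict[k] * weight_dict[k]
               for k in loss_dict if k in weight_dict)

# class_error carries no weight, so it does not contribute:
loss_dict = {'loss_ce': 0.5, 'loss_bbox': 2.0, 'class_error': 30.0}
weight_dict = {'loss_ce': 1.0, 'loss_bbox': 5.0}
# -> 0.5 * 1.0 + 2.0 * 5.0 = 10.5
```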


@torch.no_grad()
def evaluate(model, criterion, postprocessors, data_loader, device,
             output_dir: str, visualizers: dict, args, epoch: int = None):
    model.eval()
    criterion.eval()

    metric_logger = utils.MetricLogger(
        args.vis_and_log_interval,
        delimiter="  ",
        debug=args.debug)
    metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))

    base_ds = get_coco_api_from_dataset(data_loader.dataset)
    iou_types = tuple(k for k in ('bbox', 'segm') if k in postprocessors.keys())
    coco_evaluator = CocoEvaluator(base_ds, iou_types)
    # coco_evaluator.coco_eval[iou_types[0]].params.iouThrs = [0, 0.1, 0.5, 0.75]

    panoptic_evaluator = None
    if 'panoptic' in postprocessors.keys():
        panoptic_evaluator = PanopticEvaluator(
            data_loader.dataset.ann_file,
            data_loader.dataset.ann_folder,
            output_dir=os.path.join(output_dir, "panoptic_eval"),
        )

    for i, (samples, targets) in enumerate(metric_logger.log_every(data_loader, 'Test:')):
        samples = samples.to(device)
        targets = [utils.nested_dict_to_device(t, device) for t in targets]

        outputs, targets, *_ = model(samples, targets)

        loss_dict = criterion(outputs, targets)
        weight_dict = criterion.weight_dict

        # reduce losses over all GPUs for logging purposes
        loss_dict_reduced = utils.reduce_dict(loss_dict)
        loss_dict_reduced_scaled = {k: v * weight_dict[k]
                                    for k, v in loss_dict_reduced.items() if k in weight_dict}
        loss_dict_reduced_unscaled = {f'{k}_unscaled': v
                                      for k, v in loss_dict_reduced.items()}
        metric_logger.update(loss=sum(loss_dict_reduced_scaled.values()),
                             **loss_dict_reduced_scaled,
                             **loss_dict_reduced_unscaled)
        metric_logger.update(class_error=loss_dict_reduced['class_error'])

        if visualizers and (i == 0 or not i % args.vis_and_log_interval):
            results_orig, results = make_results(
                outputs, targets, postprocessors, args.tracking, return_only_orig=False)

            vis_results(
                visualizers['example_results'],
                samples.unmasked_tensor(0),
                results[0],
                targets[0],
                args.tracking)
        else:
            results_orig, _ = make_results(outputs, targets, postprocessors, args.tracking)

        # TODO. remove cocoDts from coco eval and change example results output
        if coco_evaluator is not None:
            results_orig = {
                target['image_id'].item(): output
                for target, output in zip(targets, results_orig)}

            coco_evaluator.update(results_orig)

        if panoptic_evaluator is not None:
            target_sizes = torch.stack([t["size"] for t in targets], dim=0)
            orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0)

            res_pano = postprocessors["panoptic"](outputs, target_sizes, orig_target_sizes)
            for j, target in enumerate(targets):
                image_id = target["image_id"].item()
                file_name = f"{image_id:012d}.png"
                res_pano[j]["image_id"] = image_id
                res_pano[j]["file_name"] = file_name

            panoptic_evaluator.update(res_pano)

    # gather the stats from all processes
    metric_logger.synchronize_between_processes()
    print("Averaged stats:", metric_logger)
    if coco_evaluator is not None:
        coco_evaluator.synchronize_between_processes()
    if panoptic_evaluator is not None:
        panoptic_evaluator.synchronize_between_processes()

    # accumulate predictions from all images
    if coco_evaluator is not None:
        coco_evaluator.accumulate()
        coco_evaluator.summarize()
    panoptic_res = None
    if panoptic_evaluator is not None:
        panoptic_res = panoptic_evaluator.summarize()
    stats = {k: meter.global_avg for k, meter in metric_logger.meters.items()}
    if coco_evaluator is not None:
        if 'bbox' in coco_evaluator.coco_eval:
            stats['coco_eval_bbox'] = coco_evaluator.coco_eval['bbox'].stats.tolist()
        if 'segm' in coco_evaluator.coco_eval:
            stats['coco_eval_masks'] = coco_evaluator.coco_eval['segm'].stats.tolist()
    if panoptic_res is not None:
        stats['PQ_all'] = panoptic_res["All"]
        stats['PQ_th'] = panoptic_res["Things"]
        stats['PQ_st'] = panoptic_res["Stuff"]

    # TRACK EVAL
    if args.tracking and args.tracking_eval:
        stats['track_bbox'] = []

        ex.logger = logging.getLogger("submitit")

        # distribute evaluation of seqs to processes
        seqs = data_loader.dataset.sequences
        seqs_per_rank = {i: [] for i in range(utils.get_world_size())}
        for i, seq in enumerate(seqs):
            rank = i % utils.get_world_size()
            seqs_per_rank[rank].append(seq)

        # only evaluate one seq in debug mode
        i
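The per-rank sequence assignment above can be sketched in isolation, with `world_size` standing in for `utils.get_world_size()`:

```python
def distribute_round_robin(seqs, world_size):
    """Assign sequence i to rank i % world_size, as in the
    seqs_per_rank loop of evaluate()."""
    seqs_per_rank = {rank: [] for rank in range(world_size)}
    for i, seq in enumerate(seqs):
        seqs_per_rank[i % world_size].append(seq)
    return seqs_per_rank

# Five sequences on two ranks: even indices go to rank 0, odd to rank 1.
```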
SYMBOL INDEX (422 symbols across 44 files)

FILE: src/generate_coco_from_crowdhuman.py
  function generate_coco_from_crowdhuman (line 15) | def generate_coco_from_crowdhuman(split_name='train_val', split='train_v...

FILE: src/generate_coco_from_mot.py
  function generate_coco_from_mot (line 36) | def generate_coco_from_mot(split_name='train', seqs_names=None,
  function check_coco_from_mot (line 266) | def check_coco_from_mot(coco_dir='data/MOT17/mot17_train_coco', annotati...

FILE: src/run_with_submitit.py
  function get_shared_folder (line 24) | def get_shared_folder() -> Path:
  function get_init_file (line 33) | def get_init_file() -> Path:
  class Trainer (line 42) | class Trainer:
    method __init__ (line 43) | def __init__(self, args: Namespace) -> None:
    method __call__ (line 46) | def __call__(self) -> None:
    method checkpoint (line 53) | def checkpoint(self) -> submitit.helpers.DelayedSubmission:
    method _setup_gpu_args (line 69) | def _setup_gpu_args(self) -> None:
  function main (line 83) | def main(args: Namespace):
  function load_config (line 131) | def load_config(_config, _run):

FILE: src/track.py
  function main (line 30) | def main(seed, dataset_name, obj_detect_checkpoint_file, tracker_cfg,

FILE: src/trackformer/datasets/__init__.py
  function get_coco_api_from_dataset (line 15) | def get_coco_api_from_dataset(dataset: Subset) -> COCO:
  function build_dataset (line 29) | def build_dataset(split: str, args: Namespace) -> Dataset:

FILE: src/trackformer/datasets/coco.py
  class CocoDetection (line 21) | class CocoDetection(torchvision.datasets.CocoDetection):
    method __init__ (line 25) | def __init__(self,  img_folder, ann_file, transforms, norm_transforms,
    method _getitem_from_id (line 48) | def _getitem_from_id(self, image_id, random_state=None, random_jitter=...
    method _add_random_jitter (line 89) | def _add_random_jitter(self, img, target):
    method __getitem__ (line 146) | def __getitem__(self, idx):
    method write_result_files (line 166) | def write_result_files(self, *args):
  function convert_coco_poly_to_mask (line 170) | def convert_coco_poly_to_mask(segmentations, height, width):
  class ConvertCocoPolysToMask (line 191) | class ConvertCocoPolysToMask(object):
    method __init__ (line 192) | def __init__(self, return_masks=False, overflow_boxes=False):
    method __call__ (line 196) | def __call__(self, image, target):
  function make_coco_transforms (line 270) | def make_coco_transforms(image_set, img_transform=None, overflow_boxes=F...
  function build (line 315) | def build(image_set, args, mode='instances'):

FILE: src/trackformer/datasets/coco_eval.py
  class CocoEvaluator (line 22) | class CocoEvaluator(object):
    method __init__ (line 23) | def __init__(self, coco_gt, iou_types):
    method update (line 36) | def update(self, predictions):
    method synchronize_between_processes (line 58) | def synchronize_between_processes(self):
    method accumulate (line 66) | def accumulate(self):
    method summarize (line 70) | def summarize(self):
    method prepare (line 75) | def prepare(self, predictions, iou_type):
    method prepare_for_coco_detection (line 85) | def prepare_for_coco_detection(self, predictions):
    method prepare_for_coco_segmentation (line 109) | def prepare_for_coco_segmentation(self, predictions):
    method prepare_for_coco_keypoint (line 144) | def prepare_for_coco_keypoint(self, predictions):
  function convert_to_xywh (line 171) | def convert_to_xywh(boxes):
  function merge (line 176) | def merge(img_ids, eval_imgs):
  function create_common_coco_eval (line 198) | def create_common_coco_eval(coco_eval, img_ids, eval_imgs):
  function evaluate (line 214) | def evaluate(self):

FILE: src/trackformer/datasets/coco_panoptic.py
  class CocoPanoptic (line 15) | class CocoPanoptic:
    method __init__ (line 16) | def __init__(self, img_folder, ann_folder, ann_file, transforms=None, ...
    method __getitem__ (line 35) | def __getitem__(self, idx):
    method __len__ (line 73) | def __len__(self):
    method get_height_and_width (line 76) | def get_height_and_width(self, idx):
  function build (line 83) | def build(image_set, args):

FILE: src/trackformer/datasets/crowdhuman.py
  function build_crowdhuman (line 10) | def build_crowdhuman(image_set, args):

FILE: src/trackformer/datasets/mot.py
  class MOT (line 20) | class MOT(CocoDetection):
    method __init__ (line 22) | def __init__(self, *args, prev_frame_range=1, **kwargs):
    method sequences (line 28) | def sequences(self):
    method frame_range (line 32) | def frame_range(self):
    method seq_length (line 38) | def seq_length(self, idx):
    method sample_weight (line 41) | def sample_weight(self, idx):
    method __getitem__ (line 44) | def __getitem__(self, idx):
    method write_result_files (line 76) | def write_result_files(self, results, output_dir):
  class WeightedConcatDataset (line 114) | class WeightedConcatDataset(torch.utils.data.ConcatDataset):
    method sample_weight (line 116) | def sample_weight(self, idx):
  function build_mot (line 129) | def build_mot(image_set, args):
  function build_mot_crowdhuman (line 165) | def build_mot_crowdhuman(image_set, args):
  function build_mot_coco_person (line 184) | def build_mot_coco_person(image_set, args):

FILE: src/trackformer/datasets/panoptic_eval.py
  class PanopticEvaluator (line 13) | class PanopticEvaluator(object):
    method __init__ (line 14) | def __init__(self, ann_file, ann_folder, output_dir="panoptic_eval"):
    method update (line 23) | def update(self, predictions):
    method synchronize_between_processes (line 30) | def synchronize_between_processes(self):
    method summarize (line 37) | def summarize(self):

FILE: src/trackformer/datasets/tracking/demo_sequence.py
  class DemoSequence (line 22) | class DemoSequence(Dataset):
    method __init__ (line 26) | def __init__(self, root_dir: str = 'data', img_transform: Namespace = ...
    method __len__ (line 43) | def __len__(self) -> int:
    method __str__ (line 46) | def __str__(self) -> str:
    method __getitem__ (line 49) | def __getitem__(self, idx: int) -> dict:
    method _sequence (line 67) | def _sequence(self) -> List[dict]:
    method load_results (line 76) | def load_results(self, results_dir: str) -> dict:
    method write_results (line 79) | def write_results(self, results: dict, output_dir: str) -> None:

FILE: src/trackformer/datasets/tracking/factory.py
  class TrackDatasetFactory (line 40) | class TrackDatasetFactory:
    method __init__ (line 47) | def __init__(self, datasets: Union[str, list], **kwargs) -> None:
    method __len__ (line 66) | def __len__(self) -> int:
    method __getitem__ (line 69) | def __getitem__(self, idx: int):

FILE: src/trackformer/datasets/tracking/mot17_sequence.py
  class MOT17Sequence (line 21) | class MOT17Sequence(Dataset):
    method __init__ (line 29) | def __init__(self, root_dir: str = 'data', seq_name: Optional[str] = N...
    method __len__ (line 62) | def __len__(self) -> int:
    method __getitem__ (line 65) | def __getitem__(self, idx: int) -> dict:
    method _sequence (line 85) | def _sequence(self) -> List[dict]:
    method get_track_boxes_and_visbility (line 119) | def get_track_boxes_and_visbility(self) -> Tuple[dict, dict]:
    method get_seq_path (line 153) | def get_seq_path(self) -> str:
    method get_config_file_path (line 164) | def get_config_file_path(self) -> str:
    method get_gt_file_path (line 168) | def get_gt_file_path(self) -> str:
    method get_det_file_path (line 172) | def get_det_file_path(self) -> str:
    method config (line 180) | def config(self) -> dict:
    method seq_length (line 192) | def seq_length(self) -> int:
    method __str__ (line 196) | def __str__(self) -> str:
    method results_file_name (line 200) | def results_file_name(self) -> str:
    method write_results (line 209) | def write_results(self, results: dict, output_dir: str) -> None:
    method load_results (line 244) | def load_results(self, results_dir: str) -> dict:

FILE: src/trackformer/datasets/tracking/mot20_sequence.py
  class MOT20Sequence (line 9) | class MOT20Sequence(MOT17Sequence):

FILE: src/trackformer/datasets/tracking/mot_wrapper.py
  class MOT17Wrapper (line 12) | class MOT17Wrapper(Dataset):
    method __init__ (line 15) | def __init__(self, split: str, dets: str, **kwargs) -> None:
    method __len__ (line 50) | def __len__(self) -> int:
    method __getitem__ (line 53) | def __getitem__(self, idx: int):
  class MOT20Wrapper (line 57) | class MOT20Wrapper(Dataset):
    method __init__ (line 60) | def __init__(self, split: str, **kwargs) -> None:
    method __len__ (line 86) | def __len__(self) -> int:
    method __getitem__ (line 89) | def __getitem__(self, idx: int):
  class MOTS20Wrapper (line 93) | class MOTS20Wrapper(MOT17Wrapper):
    method __init__ (line 96) | def __init__(self, split: str, **kwargs) -> None:

FILE: src/trackformer/datasets/tracking/mots20_sequence.py
  class MOTS20Sequence (line 17) | class MOTS20Sequence(MOT17Sequence):
    method __init__ (line 25) | def __init__(self, root_dir: str = 'data', seq_name: Optional[str] = N...
    method get_track_boxes_and_visbility (line 35) | def get_track_boxes_and_visbility(self) -> Tuple[dict, dict]:
    method write_results (line 72) | def write_results(self, results: dict, output_dir: str) -> None:
    method load_results (line 93) | def load_results(self, results_dir: str) -> dict:
    method __str__ (line 136) | def __str__(self) -> str:
  class SegmentedObject (line 140) | class SegmentedObject:
    method __init__ (line 144) | def __init__(self, mask: dict, class_id: int, track_id: int) -> None:
  function load_mots_gt (line 150) | def load_mots_gt(path: str) -> dict:

FILE: src/trackformer/datasets/transforms.py
  function crop (line 17) | def crop(image, target, region, overflow_boxes=False):
  function hflip (line 85) | def hflip(image, target):
  function resize (line 115) | def resize(image, target, size, max_size=None):
  function pad (line 175) | def pad(image, target, padding):
  class RandomCrop (line 198) | class RandomCrop:
    method __init__ (line 199) | def __init__(self, size, overflow_boxes=False):
    method __call__ (line 204) | def __call__(self, img, target):
  class RandomSizeCrop (line 209) | class RandomSizeCrop:
    method __init__ (line 210) | def __init__(self,
    method __call__ (line 223) | def __call__(self, img: PIL.Image.Image, target: dict):
  class CenterCrop (line 239) | class CenterCrop:
    method __init__ (line 240) | def __init__(self, size, overflow_boxes=False):
    method __call__ (line 244) | def __call__(self, img, target):
  class RandomHorizontalFlip (line 252) | class RandomHorizontalFlip:
    method __init__ (line 253) | def __init__(self, p=0.5):
    method __call__ (line 256) | def __call__(self, img, target):
  class RepeatUntilMaxObjects (line 262) | class RepeatUntilMaxObjects:
    method __init__ (line 263) | def __init__(self, transforms, num_max_objects):
    method __call__ (line 267) | def __call__(self, img, target):
  class RandomResize (line 275) | class RandomResize:
    method __init__ (line 276) | def __init__(self, sizes, max_size=None):
    method __call__ (line 281) | def __call__(self, img, target=None):
  class RandomResizeTargets (line 286) | class RandomResizeTargets:
    method __init__ (line 287) | def __init__(self, scale=0.5):
    method __call__ (line 290) | def __call__(self, img, target=None):
  class RandomPad (line 343) | class RandomPad:
    method __init__ (line 344) | def __init__(self, max_size):
    method __call__ (line 350) | def __call__(self, img, target):
  class RandomSelect (line 365) | class RandomSelect:
    method __init__ (line 370) | def __init__(self, transforms1, transforms2, p=0.5):
    method __call__ (line 375) | def __call__(self, img, target):
  class ToTensor (line 381) | class ToTensor:
    method __call__ (line 382) | def __call__(self, img, target=None):
  class RandomErasing (line 386) | class RandomErasing:
    method __init__ (line 388) | def __init__(self, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=...
    method __call__ (line 396) | def __call__(self, img, target):
  class Normalize (line 457) | class Normalize:
    method __init__ (line 458) | def __init__(self, mean, std):
    method __call__ (line 462) | def __call__(self, image, target=None):
  class Compose (line 476) | class Compose:
    method __init__ (line 477) | def __init__(self, transforms):
    method __call__ (line 480) | def __call__(self, image, target=None):
    method __repr__ (line 485) | def __repr__(self):

FILE: src/trackformer/engine.py
  function make_results (line 24) | def make_results(outputs, targets, postprocessors, tracking, return_only...
  function train_one_epoch (line 101) | def train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module, ...
  function evaluate (line 179) | def evaluate(model, criterion, postprocessors, data_loader, device,

FILE: src/trackformer/models/__init__.py
  function build_model (line 16) | def build_model(args):

FILE: src/trackformer/models/backbone.py
  class FrozenBatchNorm2d (line 19) | class FrozenBatchNorm2d(torch.nn.Module):
    method __init__ (line 28) | def __init__(self, n):
    method _load_from_state_dict (line 35) | def _load_from_state_dict(self, state_dict, prefix, local_metadata, st...
    method forward (line 45) | def forward(self, x):
  class BackboneBase (line 58) | class BackboneBase(nn.Module):
    method __init__ (line 60) | def __init__(self, backbone: nn.Module, train_backbone: bool,
    method forward (line 80) | def forward(self, tensor_list: NestedTensor):
  class Backbone (line 91) | class Backbone(BackboneBase):
    method __init__ (line 93) | def __init__(self, name: str,
  class Joiner (line 107) | class Joiner(nn.Sequential):
    method __init__ (line 108) | def __init__(self, backbone, position_embedding):
    method forward (line 113) | def forward(self, tensor_list: NestedTensor):
  function build_backbone (line 125) | def build_backbone(args):

FILE: src/trackformer/models/deformable_detr.py
  function _get_clones (line 25) | def _get_clones(module, N):
  class DeformableDETR (line 29) | class DeformableDETR(DETR):
    method __init__ (line 31) | def __init__(self, backbone, transformer, num_classes, num_queries, nu...
    method forward (line 124) | def forward(self, samples: NestedTensor, targets: list = None, prev_fe...
    method _set_aux_loss (line 278) | def _set_aux_loss(self, outputs_class, outputs_coord):
  class DeformablePostProcess (line 286) | class DeformablePostProcess(PostProcess):
    method forward (line 290) | def forward(self, outputs, target_sizes, results_mask=None):

FILE: src/trackformer/models/deformable_transformer.py
  class DeformableTransformer (line 21) | class DeformableTransformer(nn.Module):
    method __init__ (line 22) | def __init__(self, d_model=256, nhead=8,
    method _reset_parameters (line 65) | def _reset_parameters(self):
    method get_proposal_pos_embed (line 77) | def get_proposal_pos_embed(self, proposals):
    method gen_encoder_output_proposals (line 92) | def gen_encoder_output_proposals(self, memory, memory_padding_mask, sp...
    method get_valid_ratio (line 124) | def get_valid_ratio(self, mask):
    method forward (line 133) | def forward(self, srcs, masks, pos_embeds, query_embed=None, targets=N...
  class DeformableTransformerEncoderLayer (line 258) | class DeformableTransformerEncoderLayer(nn.Module):
    method __init__ (line 259) | def __init__(self,
    method with_pos_embed (line 279) | def with_pos_embed(tensor, pos):
    method forward_ffn (line 282) | def forward_ffn(self, src):
    method forward (line 288) | def forward(self, src, pos, reference_points, spatial_shapes, padding_...
  class DeformableTransformerEncoder (line 300) | class DeformableTransformerEncoder(nn.Module):
    method __init__ (line 301) | def __init__(self, encoder_layer, num_layers):
    method get_reference_points (line 307) | def get_reference_points(spatial_shapes, valid_ratios, device):
    method forward (line 321) | def forward(self, src, spatial_shapes, valid_ratios, pos=None, padding...
  class DeformableTransformerDecoderLayer (line 330) | class DeformableTransformerDecoderLayer(nn.Module):
    method __init__ (line 331) | def __init__(self, d_model=256, d_ffn=1024,
    method with_pos_embed (line 355) | def with_pos_embed(tensor, pos):
    method forward_ffn (line 358) | def forward_ffn(self, tgt):
    method forward (line 364) | def forward(self, tgt, query_pos, reference_points, src, src_spatial_s...
  class DeformableTransformerDecoder (line 386) | class DeformableTransformerDecoder(nn.Module):
    method __init__ (line 387) | def __init__(self, decoder_layer, num_layers, return_intermediate=False):
    method forward (line 396) | def forward(self, tgt, reference_points, src, src_spatial_shapes, src_...
  function build_deforamble_transformer (line 434) | def build_deforamble_transformer(args):

FILE: src/trackformer/models/detr.py
  class DETR (line 17) | class DETR(nn.Module):
    method __init__ (line 20) | def __init__(self, backbone, transformer, num_classes, num_queries,
    method hidden_dim (line 52) | def hidden_dim(self):
    method fpn_channels (line 57) | def fpn_channels(self):
    method forward (line 62) | def forward(self, samples: NestedTensor, targets: list = None):
    method _set_aux_loss (line 131) | def _set_aux_loss(self, outputs_class, outputs_coord):
  class SetCriterion (line 139) | class SetCriterion(nn.Module):
    method __init__ (line 145) | def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses,
    method loss_labels (line 172) | def loss_labels(self, outputs, targets, indices, _, log=True):
    method loss_labels_focal (line 213) | def loss_labels_focal(self, outputs, targets, indices, num_boxes, log=...
    method loss_cardinality (line 276) | def loss_cardinality(self, outputs, targets, indices, num_boxes):
    method loss_boxes (line 290) | def loss_boxes(self, outputs, targets, indices, num_boxes):
    method loss_masks (line 330) | def loss_masks(self, outputs, targets, indices, num_boxes):
    method _get_src_permutation_idx (line 360) | def _get_src_permutation_idx(self, indices):
    method _get_tgt_permutation_idx (line 366) | def _get_tgt_permutation_idx(self, indices):
    method get_loss (line 372) | def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs):
    method forward (line 382) | def forward(self, outputs, targets):
  class PostProcess (line 446) | class PostProcess(nn.Module):
    method process_boxes (line 449) | def process_boxes(self, boxes, target_sizes):
    method forward (line 460) | def forward(self, outputs, target_sizes, results_mask=None):
  class MLP (line 493) | class MLP(nn.Module):
    method __init__ (line 496) | def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
    method forward (line 504) | def forward(self, x):
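DETR's `MLP` (used e.g. as the box-regression head) is a plain multi-layer perceptron with ReLU between all but the last layer. A numpy re-sketch of the layer wiring — weights here are random placeholders, whereas the real module uses trained `nn.Linear` layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, dims):
    """dims = [input_dim, hidden, ..., output_dim]; ReLU between all but the final layer."""
    for i, (n_in, n_out) in enumerate(zip(dims[:-1], dims[1:])):
        w = rng.standard_normal((n_in, n_out)) * 0.1   # stand-in for a trained weight matrix
        b = np.zeros(n_out)
        x = x @ w + b
        if i < len(dims) - 2:                          # no activation on the output layer
            x = np.maximum(x, 0.0)
    return x

# Box head shape: hidden_dim=256 -> 4 box parameters (cx, cy, w, h) per query
out = mlp_forward(np.ones((3, 256)), [256, 256, 256, 4])
```

In the model these 4 outputs are passed through a sigmoid to yield normalized box coordinates.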

FILE: src/trackformer/models/detr_segmentation.py
  class DETRSegmBase (line 29) | class DETRSegmBase(nn.Module):
    method __init__ (line 30) | def __init__(self, freeze_detr=False):
    method forward (line 41) | def forward(self, samples: NestedTensor, targets: list = None):
  class DETRSegm (line 75) | class DETRSegm(DETRSegmBase, DETR):
    method __init__ (line 76) | def __init__(self, mask_kwargs, detr_kwargs):
  class DeformableDETRSegm (line 81) | class DeformableDETRSegm(DETRSegmBase, DeformableDETR):
    method __init__ (line 82) | def __init__(self, mask_kwargs, detr_kwargs):
  class DETRSegmTracking (line 87) | class DETRSegmTracking(DETRSegmBase, DETRTrackingBase, DETR):
    method __init__ (line 88) | def __init__(self, mask_kwargs, tracking_kwargs, detr_kwargs):
  class DeformableDETRSegmTracking (line 94) | class DeformableDETRSegmTracking(DETRSegmBase, DETRTrackingBase, Deforma...
    method __init__ (line 95) | def __init__(self, mask_kwargs, tracking_kwargs, detr_kwargs):
  function _expand (line 101) | def _expand(tensor, length: int):
  class MaskHeadSmallConv (line 105) | class MaskHeadSmallConv(nn.Module):
    method __init__ (line 111) | def __init__(self, dim, fpn_dims, context_dim):
    method forward (line 143) | def forward(self, x: Tensor, bbox_mask: Tensor, fpns: List[Tensor]):
  class MHAttentionMap (line 181) | class MHAttentionMap(nn.Module):
    method __init__ (line 185) | def __init__(self, query_dim, hidden_dim, num_heads, dropout=0.0, bias...
    method forward (line 200) | def forward(self, q, k, mask: Optional[Tensor] = None):
  class PostProcessSegm (line 219) | class PostProcessSegm(nn.Module):
    method __init__ (line 220) | def __init__(self, threshold=0.5):
    method forward (line 225) | def forward(self, results, outputs, orig_target_sizes, max_target_size...
  class PostProcessPanoptic (line 256) | class PostProcessPanoptic(nn.Module):
    method __init__ (line 260) | def __init__(self, is_thing_map, threshold=0.85):
    method forward (line 273) | def forward(self, outputs, processed_sizes, target_sizes=None):

FILE: src/trackformer/models/detr_tracking.py
  class DETRTrackingBase (line 15) | class DETRTrackingBase(nn.Module):
    method __init__ (line 17) | def __init__(self,
    method train (line 29) | def train(self, mode: bool = True):
    method tracking (line 34) | def tracking(self):
    method add_track_queries_to_targets (line 39) | def add_track_queries_to_targets(self, targets, prev_indices, prev_out...
    method forward (line 219) | def forward(self, samples: NestedTensor, targets: list = None, prev_fe...
  class DETRTracking (line 281) | class DETRTracking(DETRTrackingBase, DETR):
    method __init__ (line 282) | def __init__(self, tracking_kwargs, detr_kwargs):
  class DeformableDETRTracking (line 287) | class DeformableDETRTracking(DETRTrackingBase, DeformableDETR):
    method __init__ (line 288) | def __init__(self, tracking_kwargs, detr_kwargs):

FILE: src/trackformer/models/matcher.py
  class HungarianMatcher (line 13) | class HungarianMatcher(nn.Module):
    method __init__ (line 21) | def __init__(self, cost_class: float = 1, cost_bbox: float = 1, cost_g...
    method forward (line 42) | def forward(self, outputs, targets):
  function build_matcher (line 134) | def build_matcher(args):
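`HungarianMatcher` builds a prediction-vs-target cost matrix mixing classification, L1 box, and GIoU terms, then solves the assignment with `scipy.optimize.linear_sum_assignment`. A minimal sketch using only the L1 box term (the class and GIoU costs enter the same matrix with their own weights in the real matcher):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(pred_boxes, tgt_boxes, cost_bbox=1.0):
    """Optimal one-to-one assignment under a weighted L1 box cost."""
    # cost[i, j] = weighted L1 distance between prediction i and target j
    cost = cost_bbox * np.abs(pred_boxes[:, None] - tgt_boxes[None]).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

preds = np.array([[0.5, 0.5, 0.2, 0.2], [0.1, 0.1, 0.1, 0.1]])
tgts  = np.array([[0.1, 0.1, 0.1, 0.1], [0.5, 0.5, 0.2, 0.2]])
pairs = match(preds, tgts)   # each prediction matched to its nearest target
```

The returned index pairs are what `SetCriterion` uses to decide which prediction is penalized against which ground-truth object.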

FILE: src/trackformer/models/ops/functions/ms_deform_attn_func.py
  class MSDeformAttnFunction (line 14) | class MSDeformAttnFunction(Function):
    method forward (line 16) | def forward(ctx, value, value_spatial_shapes, sampling_locations, atte...
    method backward (line 25) | def backward(ctx, grad_output):
  function ms_deform_attn_core_pytorch (line 34) | def ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_lo...
  function ms_deform_attn_core_pytorch_mot (line 56) | def ms_deform_attn_core_pytorch_mot(query, value, value_spatial_shapes, ...

FILE: src/trackformer/models/ops/modules/ms_deform_attn.py
  class MSDeformAttn (line 15) | class MSDeformAttn(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4, im2...
    method _reset_parameters (line 34) | def _reset_parameters(self):
    method forward (line 49) | def forward(self, query, reference_points, input_flatten, input_spatia...

FILE: src/trackformer/models/ops/setup.py
  function get_extensions (line 17) | def get_extensions():

FILE: src/trackformer/models/ops/src/cpu/ms_deform_attn_cpu.cpp
  function ms_deform_attn_cpu_forward (line 7) | at::Tensor
  function ms_deform_attn_cpu_backward (line 18) | std::vector<at::Tensor>

FILE: src/trackformer/models/ops/src/ms_deform_attn.h
  function im2col_step (line 16) | int im2col_step)

FILE: src/trackformer/models/ops/src/vision.cpp
  function PYBIND11_MODULE (line 4) | PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {

FILE: src/trackformer/models/ops/test.py
  function check_forward_equal_with_pytorch (line 23) | def check_forward_equal_with_pytorch():
  function check_backward_equal_with_pytorch (line 38) | def check_backward_equal_with_pytorch():
  function check_gradient_ms_deform_attn (line 98) | def check_gradient_ms_deform_attn(

FILE: src/trackformer/models/ops/test_double_precision.py
  function check_forward_equal_with_pytorch (line 22) | def check_forward_equal_with_pytorch():
  function check_backward_equal_with_pytorch (line 37) | def check_backward_equal_with_pytorch():
  function check_gradient_ms_deform_attn (line 97) | def check_gradient_ms_deform_attn(

FILE: src/trackformer/models/position_encoding.py
  class PositionEmbeddingSine3D (line 12) | class PositionEmbeddingSine3D(nn.Module):
    method __init__ (line 18) | def __init__(self, num_pos_feats=64, num_frames=2, temperature=10000, ...
    method forward (line 31) | def forward(self, tensor_list: NestedTensor):
  class PositionEmbeddingSine (line 84) | class PositionEmbeddingSine(nn.Module):
    method __init__ (line 89) | def __init__(self, num_pos_feats=64, temperature=10000, normalize=Fals...
    method forward (line 100) | def forward(self, tensor_list: NestedTensor):
  class PositionEmbeddingLearned (line 123) | class PositionEmbeddingLearned(nn.Module):
    method __init__ (line 127) | def __init__(self, num_pos_feats=256):
    method reset_parameters (line 133) | def reset_parameters(self):
    method forward (line 137) | def forward(self, tensor_list: NestedTensor):
  function build_position_encoding (line 151) | def build_position_encoding(args):
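`PositionEmbeddingSine` follows the sinusoidal scheme from "Attention Is All You Need", applied separately to normalized x and y coordinates (the 3D variant adds a frame dimension). A 1-D numpy sketch of the core frequency pattern:

```python
import numpy as np

def sine_embed(positions, num_pos_feats=64, temperature=10000.0):
    """positions: 1-D array of normalized coordinates; returns (len(positions), num_pos_feats)."""
    # Geometric frequency ladder; channel pairs share a frequency
    dim_t = temperature ** (2 * (np.arange(num_pos_feats) // 2) / num_pos_feats)
    ang = positions[:, None] / dim_t[None, :]
    out = np.empty_like(ang)
    out[:, 0::2] = np.sin(ang[:, 0::2])   # even channels: sine
    out[:, 1::2] = np.cos(ang[:, 1::2])   # odd channels: cosine
    return out

emb = sine_embed(np.linspace(0, 2 * np.pi, 8))
```

The full module computes this for both axes and concatenates them, so each spatial location gets a unique, translation-sensitive encoding.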

FILE: src/trackformer/models/tracker.py
  class Tracker (line 16) | class Tracker:
    method __init__ (line 19) | def __init__(self, obj_detector, obj_detector_post, tracker_cfg,
    method num_object_queries (line 68) | def num_object_queries(self):
    method reset (line 71) | def reset(self, hard=True):
    method device (line 83) | def device(self):
    method tracks_to_inactive (line 86) | def tracks_to_inactive(self, tracks):
    method add_tracks (line 93) | def add_tracks(self, pos, scores, hs_embeds, indices, masks=None, atte...
    method public_detections_mask (line 124) | def public_detections_mask(self, new_det_boxes, public_det_boxes):
    method reid (line 167) | def reid(self, new_det_boxes, new_det_scores, new_det_hs_embeds,
    method step (line 266) | def step(self, blob):
    method get_results (line 552) | def get_results(self):
  class Track (line 557) | class Track(object):
    method __init__ (line 560) | def __init__(self, pos, score, track_id, hs_embed, obj_ind,
    method has_positive_area (line 575) | def has_positive_area(self) -> bool:
    method reset_last_pos (line 580) | def reset_last_pos(self) -> None:

FILE: src/trackformer/models/transformer.py
  class Transformer (line 18) | class Transformer(nn.Module):
    method __init__ (line 20) | def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
    method _reset_parameters (line 45) | def _reset_parameters(self):
    method forward (line 50) | def forward(self, src, mask, query_embed, pos_embed, tgt=None, prev_fr...
  class TransformerEncoder (line 83) | class TransformerEncoder(nn.Module):
    method __init__ (line 85) | def __init__(self, encoder_layer, num_layers, norm=None):
    method forward (line 91) | def forward(self, src,
  class TransformerDecoder (line 107) | class TransformerDecoder(nn.Module):
    method __init__ (line 109) | def __init__(self, decoder_layer, encoder_layer, num_layers,
    method forward (line 122) | def forward(self, tgt, memory,
  class TransformerEncoderLayer (line 166) | class TransformerEncoderLayer(nn.Module):
    method __init__ (line 168) | def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
    method with_pos_embed (line 185) | def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    method forward_post (line 188) | def forward_post(self,
    method forward_pre (line 203) | def forward_pre(self, src,
    method forward (line 217) | def forward(self, src,
  class TransformerDecoderLayer (line 226) | class TransformerDecoderLayer(nn.Module):
    method __init__ (line 228) | def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
    method with_pos_embed (line 248) | def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    method forward_post (line 251) | def forward_post(self, tgt, memory,
    method forward_pre (line 274) | def forward_pre(self, tgt, memory,
    method forward (line 297) | def forward(self, tgt, memory,
  function _get_clones (line 311) | def _get_clones(module, N):
  function _get_activation_fn (line 315) | def _get_activation_fn(activation):
  function build_transformer (line 326) | def build_transformer(args):

FILE: src/trackformer/util/box_ops.py
  function box_cxcywh_to_xyxy (line 9) | def box_cxcywh_to_xyxy(x):
  function box_xyxy_to_cxcywh (line 16) | def box_xyxy_to_cxcywh(x):
  function box_iou (line 24) | def box_iou(boxes1, boxes2):
  function generalized_box_iou (line 40) | def generalized_box_iou(boxes1, boxes2):
  function masks_to_boxes (line 64) | def masks_to_boxes(masks):
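The box utilities convert between the model's center-based `(cx, cy, w, h)` format and corner-based `(x0, y0, x1, y1)`, and compute pairwise IoU. A numpy sketch of both operations:

```python
import numpy as np

def cxcywh_to_xyxy(b):
    """(cx, cy, w, h) -> (x0, y0, x1, y1), broadcast over leading dims."""
    cx, cy, w, h = b[..., 0], b[..., 1], b[..., 2], b[..., 3]
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=-1)

def iou(a, b):
    """Pairwise IoU for boxes in (x0, y0, x1, y1) format; returns (len(a), len(b))."""
    lt = np.maximum(a[:, None, :2], b[None, :, :2])   # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])   # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

box = cxcywh_to_xyxy(np.array([[0.5, 0.5, 0.4, 0.4]]))   # -> [[0.3, 0.3, 0.7, 0.7]]
```

`generalized_box_iou` extends this with an enclosing-box penalty so non-overlapping boxes still receive a useful gradient.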

FILE: src/trackformer/util/misc.py
  class SmoothedValue (line 29) | class SmoothedValue(object):
    method __init__ (line 34) | def __init__(self, window_size=20, fmt=None):
    method update (line 42) | def update(self, value, n=1):
    method synchronize_between_processes (line 47) | def synchronize_between_processes(self):
    method median (line 61) | def median(self):
    method avg (line 66) | def avg(self):
    method global_avg (line 71) | def global_avg(self):
    method max (line 75) | def max(self):
    method value (line 79) | def value(self):
    method __str__ (line 82) | def __str__(self):
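`SmoothedValue` tracks a training metric over a sliding window and exposes median, windowed-average, and global-average views. A minimal single-process sketch (the real class additionally synchronizes `total`/`count` across distributed workers):

```python
from collections import deque
import statistics

class SmoothedValue:
    def __init__(self, window_size=20):
        self.window = deque(maxlen=window_size)  # last `window_size` raw values
        self.total = 0.0                          # running sum over all updates
        self.count = 0

    def update(self, value, n=1):
        self.window.append(value)
        self.total += value * n
        self.count += n

    @property
    def median(self):
        return statistics.median(self.window)

    @property
    def avg(self):
        return sum(self.window) / len(self.window)

    @property
    def global_avg(self):
        return self.total / self.count

sv = SmoothedValue(window_size=3)
for v in [1.0, 2.0, 3.0, 10.0]:   # the 1.0 falls out of the window
    sv.update(v)
```

The median view is what makes per-iteration logs robust to occasional loss spikes.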
  function all_gather (line 91) | def all_gather(data):
  function reduce_dict (line 135) | def reduce_dict(input_dict, average=True):
  class MetricLogger (line 162) | class MetricLogger(object):
    method __init__ (line 163) | def __init__(self, print_freq, delimiter="\t", vis=None, debug=False):
    method update (line 170) | def update(self, **kwargs):
    method __getattr__ (line 177) | def __getattr__(self, attr):
    method __str__ (line 185) | def __str__(self):
    method synchronize_between_processes (line 191) | def synchronize_between_processes(self):
    method add_meter (line 195) | def add_meter(self, name, meter):
    method log_every (line 198) | def log_every(self, iterable, epoch=None, header=None):
  function get_sha (line 274) | def get_sha():
  function collate_fn (line 294) | def collate_fn(batch):
  function _max_by_axis (line 300) | def _max_by_axis(the_list):
  function nested_tensor_from_tensor_list (line 309) | def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
  class NestedTensor (line 329) | class NestedTensor(object):
    method __init__ (line 330) | def __init__(self, tensors, mask: Optional[Tensor] = None):
    method to (line 334) | def to(self, device):
    method decompose (line 345) | def decompose(self):
    method __repr__ (line 348) | def __repr__(self):
    method unmasked_tensor (line 351) | def unmasked_tensor(self, index: int):
  function setup_for_distributed (line 368) | def setup_for_distributed(is_master):
  function is_dist_avail_and_initialized (line 392) | def is_dist_avail_and_initialized():
  function get_world_size (line 400) | def get_world_size():
  function get_rank (line 406) | def get_rank():
  function is_main_process (line 412) | def is_main_process():
  function save_on_master (line 416) | def save_on_master(*args, **kwargs):
  function init_distributed_mode (line 421) | def init_distributed_mode(args):
  function accuracy (line 448) | def accuracy(output, target, topk=(1,)):
  function interpolate (line 466) | def interpolate(input, size=None, scale_factor=None, mode="nearest", ali...
  class DistributedWeightedSampler (line 486) | class DistributedWeightedSampler(torch.utils.data.DistributedSampler):
    method __init__ (line 487) | def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True...
    method __iter__ (line 494) | def __iter__(self):
    method __len__ (line 511) | def __len__(self):
  function inverse_sigmoid (line 515) | def inverse_sigmoid(x, eps=1e-5):
  function dice_loss (line 522) | def dice_loss(inputs, targets, num_boxes):
  function sigmoid_focal_loss (line 540) | def sigmoid_focal_loss(inputs, targets, num_boxes, alpha: float = 0.25, ...
  function nested_dict_to_namespace (line 574) | def nested_dict_to_namespace(dictionary):
  function nested_dict_to_device (line 582) | def nested_dict_to_device(dictionary, device):
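Two of the numeric helpers above are short enough to sketch directly: `inverse_sigmoid` is a clamped logit (used to update box reference points in logit space), and `sigmoid_focal_loss` down-weights well-classified examples. A numpy version of both formulas (reduction and the `num_boxes` normalization omitted):

```python
import numpy as np

def inverse_sigmoid(x, eps=1e-5):
    """Clamped logit: log(x / (1 - x)), numerically stable near 0 and 1."""
    x = np.clip(x, 0.0, 1.0)
    return np.log(np.clip(x, eps, None) / np.clip(1.0 - x, eps, None))

def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Per-element focal loss on raw logits."""
    p = 1.0 / (1.0 + np.exp(-logits))
    ce = -(targets * np.log(p) + (1 - targets) * np.log(1 - p))  # binary cross-entropy
    p_t = p * targets + (1 - p) * (1 - targets)                  # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return alpha_t * ce * (1 - p_t) ** gamma                     # easy examples -> tiny loss

y = inverse_sigmoid(np.array([0.5]))   # logit of 0.5 is 0
```

With `gamma=2`, a confidently correct prediction contributes orders of magnitude less loss than a confidently wrong one, which is why the deformable configs enable `focal_loss`.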

FILE: src/trackformer/util/plot_utils.py
  function fig_to_numpy (line 14) | def fig_to_numpy(fig):
  function get_vis_win_names (line 24) | def get_vis_win_names(vis_dict):
  function plot_logs (line 35) | def plot_logs(logs, fields=('class_error', 'loss_bbox_unscaled', 'mAP'),...
  function plot_precision_recall (line 91) | def plot_precision_recall(files, naming_scheme='iter'):

FILE: src/trackformer/util/track_utils.py
  function bbox_overlaps (line 25) | def bbox_overlaps(boxes, query_boxes):
  function rand_cmap (line 54) | def rand_cmap(nlabels, type='bright', first_color_black=True, last_color...
  function plot_sequence (line 126) | def plot_sequence(tracks, data_loader, output_dir, write_images, generat...
  function interpolate_tracks (line 239) | def interpolate_tracks(tracks):
  function bbox_transform_inv (line 274) | def bbox_transform_inv(boxes, deltas):
  function clip_boxes (line 302) | def clip_boxes(boxes, im_shape):
  function get_center (line 321) | def get_center(pos):
  function get_width (line 329) | def get_width(pos):
  function get_height (line 333) | def get_height(pos):
  function make_pos (line 337) | def make_pos(cx, cy, width, height):
  function warp_pos (line 346) | def warp_pos(pos, warp_matrix):
  function get_mot_accum (line 354) | def get_mot_accum(results, seq_loader):
  function evaluate_mot_accums (line 405) | def evaluate_mot_accums(accums, names, generate_overall=True):
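`interpolate_tracks` fills frames where a track temporarily has no detection by linearly interpolating between the surrounding boxes. A sketch for a single track stored as a `{frame: box}` mapping (the real function operates on the whole results dict):

```python
import numpy as np

def interpolate_track(track):
    """Linearly fill missing frames between consecutive detections of one track."""
    frames = sorted(track)
    filled = dict(track)
    for f0, f1 in zip(frames[:-1], frames[1:]):
        b0 = np.asarray(track[f0], dtype=float)
        b1 = np.asarray(track[f1], dtype=float)
        for f in range(f0 + 1, f1):
            t = (f - f0) / (f1 - f0)           # fractional position within the gap
            filled[f] = (1 - t) * b0 + t * b1
    return filled

track = {0: [0, 0, 10, 10], 4: [8, 8, 18, 18]}   # detections at frames 0 and 4 only
filled = interpolate_track(track)
```

This is a post-processing step: it recovers short occlusion gaps in the output without touching the model itself.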

FILE: src/trackformer/vis.py
  class BaseVis (line 18) | class BaseVis(object):
    method __init__ (line 20) | def __init__(self, viz_opts, update_mode='append', env=None, win=None,
    method win_exists (line 31) | def win_exists(self):
    method close (line 34) | def close(self):
    method register_event_handler (line 39) | def register_event_handler(self, handler):
  class LineVis (line 43) | class LineVis(BaseVis):
    method plot (line 46) | def plot(self, y_data, x_label):
    method reset (line 74) | def reset(self):
  class ImgVis (line 82) | class ImgVis(BaseVis):
    method plot (line 85) | def plot(self, images):
  function vis_results (line 101) | def vis_results(visualizer, img, result, target, tracking):
  function build_visualizers (line 247) | def build_visualizers(args: dict, train_loss_names: list):

FILE: src/train.py
  function train (line 38) | def train(args: Namespace) -> None:
  function load_config (line 346) | def load_config(_config, _run):
Condensed preview — 91 files, each showing path, character count, and a content snippet.
[
  {
    "path": ".circleci/config.yml",
    "chars": 636,
    "preview": "version: 2.1\n\njobs:\n  python_lint:\n    docker:\n      - image: circleci/python:3.7\n    steps:\n      - checkout\n      - ru"
  },
  {
    "path": ".github/CODE_OF_CONDUCT.md",
    "chars": 244,
    "preview": "# Code of Conduct\n\nFacebook has adopted a Code of Conduct that we expect project participants to adhere to.\nPlease read "
  },
  {
    "path": ".github/CONTRIBUTING.md",
    "chars": 1611,
    "preview": "# Contributing to DETR\nWe want to make contributing to this project as easy and transparent as\npossible.\n\n## Our Develop"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bugs.md",
    "chars": 719,
    "preview": "---\nname: \"🐛 Bugs\"\nabout: Report bugs in DETR\ntitle: Please read & provide the following\n\n---\n\n## Instructions To Reprod"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/questions-help-support.md",
    "chars": 787,
    "preview": "---\nname: \"How to do something❓\"\nabout: How to do something using DETR?\n\n---\n\n## ❓ How to do something using DETR\n\nDescr"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/unexpected-problems-bugs.md",
    "chars": 1145,
    "preview": "---\nname: \"Unexpected behaviors\"\nabout: Run into unexpected behaviors when using DETR\ntitle: Please read & provide the f"
  },
  {
    "path": ".gitignore",
    "chars": 357,
    "preview": ".nfs*\n*.ipynb\n*.pyc\n.dumbo.json\n.DS_Store\n.*.swp\n*.pth\n**/__pycache__/**\n.ipynb_checkpoints/\ndatasets/data/\nexperiment-*"
  },
  {
    "path": "LICENSE",
    "chars": 11354,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 6986,
    "preview": "# TrackFormer: Multi-Object Tracking with Transformers\n\nThis repository provides the official implementation of the [Tra"
  },
  {
    "path": "cfgs/submit.yaml",
    "chars": 496,
    "preview": "# Number of gpus to request on each node\nnum_gpus: 1\nvram: 12GB\n# memory allocated per GPU in GB\nmem_per_gpu: 20\n# Numbe"
  },
  {
    "path": "cfgs/track.yaml",
    "chars": 1595,
    "preview": "output_dir: null\nverbose: false\nseed: 666\n\nobj_detect_checkpoint_file: models/mot17_crowdhuman_deformable_multi_frame/ch"
  },
  {
    "path": "cfgs/track_reid.yaml",
    "chars": 36,
    "preview": "tracker_cfg:\n  inactive_patience: 5\n"
  },
  {
    "path": "cfgs/train.yaml",
    "chars": 3778,
    "preview": "lr: 0.0002\nlr_backbone_names: ['backbone.0']\nlr_backbone: 0.00002\nlr_linear_proj_names: ['reference_points', 'sampling_o"
  },
  {
    "path": "cfgs/train_coco_person_masks.yaml",
    "chars": 151,
    "preview": "dataset: coco_person\n\nload_mask_head_from_model: models/detr-r50-panoptic-00ce5173.pth\nfreeze_detr: true\nmasks: true\n\nlr"
  },
  {
    "path": "cfgs/train_crowdhuman.yaml",
    "chars": 153,
    "preview": "dataset: mot_crowdhuman\ncrowdhuman_train_split: train_val\ntrain_split: null\nval_split: mot17_train_cross_val_frame_0_5_t"
  },
  {
    "path": "cfgs/train_deformable.yaml",
    "chars": 209,
    "preview": "deformable: true\nnum_feature_levels: 4\nnum_queries: 300\ndim_feedforward: 1024\nfocal_loss: true\nfocal_alpha: 0.25\nfocal_g"
  },
  {
    "path": "cfgs/train_full_res.yaml",
    "chars": 49,
    "preview": "img_transform:\n  max_size: 1920\n  val_width: 1080"
  },
  {
    "path": "cfgs/train_mot17.yaml",
    "chars": 272,
    "preview": "dataset: mot\n\ntrain_split: mot17_train_coco\nval_split: mot17_train_cross_val_frame_0_5_to_1_0_coco\n\nmot_path_train: data"
  },
  {
    "path": "cfgs/train_mot17_crowdhuman.yaml",
    "chars": 294,
    "preview": "dataset: mot_crowdhuman\n\ncrowdhuman_train_split: train_val\ntrain_split: mot17_train_coco\nval_split: mot17_train_cross_va"
  },
  {
    "path": "cfgs/train_mot20_crowdhuman.yaml",
    "chars": 294,
    "preview": "dataset: mot_crowdhuman\n\ncrowdhuman_train_split: train_val\ntrain_split: mot20_train_coco\nval_split: mot20_train_cross_va"
  },
  {
    "path": "cfgs/train_mot_coco_person.yaml",
    "chars": 128,
    "preview": "dataset: mot_coco_person\ncoco_person_train_split: train\ntrain_split: null\nval_split: mot17_train_cross_val_frame_0_5_to_"
  },
  {
    "path": "cfgs/train_mots20.yaml",
    "chars": 253,
    "preview": "dataset: mot\nmot_path: data/MOTS20\ntrain_split: mots20_train_coco\nval_split: mots20_train_coco\n\nresume: models/mot17_tra"
  },
  {
    "path": "cfgs/train_multi_frame.yaml",
    "chars": 132,
    "preview": "num_queries: 500\nhidden_dim: 288\nmulti_frame_attention: true\nmulti_frame_encoding: true\nmulti_frame_attention_separate_e"
  },
  {
    "path": "cfgs/train_tracking.yaml",
    "chars": 104,
    "preview": "tracking: true\ntracking_eval: true\ntrack_prev_frame_range: 5\ntrack_query_false_positive_eos_weight: true"
  },
  {
    "path": "data/.gitignore",
    "chars": 26,
    "preview": "*\n!.gitignore\n!snakeboard\n"
  },
  {
    "path": "docs/INSTALL.md",
    "chars": 3101,
    "preview": "# Installation\n\n1. Clone and enter this repository:\n    ```\n    git clone git@github.com:timmeinhardt/trackformer.git\n  "
  },
  {
    "path": "docs/TRAIN.md",
    "chars": 4872,
    "preview": "# Train TrackFormer\n\nWe provide the code as well as intermediate models of our entire training pipeline for multiple dat"
  },
  {
    "path": "logs/.gitignore",
    "chars": 22,
    "preview": "*\n!visdom\n!.gitignore\n"
  },
  {
    "path": "logs/visdom/.gitignore",
    "chars": 13,
    "preview": "*\n!.gitignore"
  },
  {
    "path": "models/.gitignore",
    "chars": 13,
    "preview": "*\n!.gitignore"
  },
  {
    "path": "requirements.txt",
    "chars": 2102,
    "preview": "argon2-cffi==20.1.0\nastroid==2.4.2\nasync-generator==1.10\nattrs==19.3.0\nbackcall==0.2.0\nbleach==3.2.3\ncertifi==2020.4.5.2"
  },
  {
    "path": "setup.py",
    "chars": 184,
    "preview": "from setuptools import setup, find_packages\n\nsetup(name='trackformer',\n      packages=['trackformer'],\n      package_dir"
  },
  {
    "path": "src/combine_frames.py",
    "chars": 1298,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCombine two sets of frames to one.\n\"\"\"\nimport"
  },
  {
    "path": "src/compute_best_mean_epoch_from_splits.py",
    "chars": 10984,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport os\nimport json\nimport numpy as np\n\n\nLOG_DI"
  },
  {
    "path": "src/generate_coco_from_crowdhuman.py",
    "chars": 4356,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nGenerates COCO data and annotation structure "
  },
  {
    "path": "src/generate_coco_from_mot.py",
    "chars": 16520,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nGenerates COCO data and annotation structure "
  },
  {
    "path": "src/parse_mot_results_to_tex.py",
    "chars": 7703,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nParse MOT results and generate a LaTeX table."
  },
  {
    "path": "src/run_with_submitit.py",
    "chars": 4241,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nA script to run multinode training with submi"
  },
  {
    "path": "src/track.py",
    "chars": 7150,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport os\nimport sys\nimport time\nfrom os import p"
  },
  {
    "path": "src/track_param_search.py",
    "chars": 5969,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom itertools import product\n\nimport numpy as np"
  },
  {
    "path": "src/trackformer/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/trackformer/datasets/__init__.py",
    "chars": 1806,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nSubmodule interface.\n\"\"\"\nfrom argparse import"
  },
  {
    "path": "src/trackformer/datasets/coco.py",
    "chars": 12431,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCOCO dataset which returns image_id for evalu"
  },
  {
    "path": "src/trackformer/datasets/coco_eval.py",
    "chars": 8865,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCOCO evaluator that works in distributed mode"
  },
  {
    "path": "src/trackformer/datasets/coco_panoptic.py",
    "chars": 4016,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport json\nfrom pathlib import Path\n\nimport nump"
  },
  {
    "path": "src/trackformer/datasets/crowdhuman.py",
    "chars": 1004,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nCrowdHuman dataset with tracking training aug"
  },
  {
    "path": "src/trackformer/datasets/mot.py",
    "chars": 6542,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT dataset with tracking training augmentati"
  },
  {
    "path": "src/trackformer/datasets/panoptic_eval.py",
    "chars": 1533,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport json\nimport os\n\nfrom ..util import misc as"
  },
  {
    "path": "src/trackformer/datasets/tracking/__init__.py",
    "chars": 141,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nSubmodule interface.\n\"\"\"\nfrom .factory import"
  },
  {
    "path": "src/trackformer/datasets/tracking/demo_sequence.py",
    "chars": 3567,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT17 sequence dataset.\n\"\"\"\nimport configpars"
  },
  {
    "path": "src/trackformer/datasets/tracking/factory.py",
    "chars": 2290,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nFactory of tracking datasets.\n\"\"\"\nfrom typing"
  },
  {
    "path": "src/trackformer/datasets/tracking/mot17_sequence.py",
    "chars": 9392,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT17 sequence dataset.\n\"\"\"\nimport configpars"
  },
  {
    "path": "src/trackformer/datasets/tracking/mot20_sequence.py",
    "chars": 408,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT20 sequence dataset.\n\"\"\"\n\nfrom .mot17_sequ"
  },
  {
    "path": "src/trackformer/datasets/tracking/mot_wrapper.py",
    "chars": 4307,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOT wrapper which combines sequences to a dat"
  },
  {
    "path": "src/trackformer/datasets/tracking/mots20_sequence.py",
    "chars": 7053,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMOTS20 sequence dataset.\n\"\"\"\nimport csv\nimpor"
  },
  {
    "path": "src/trackformer/datasets/transforms.py",
    "chars": 16327,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nTransforms and data augmentation for both ima"
  },
  {
    "path": "src/trackformer/engine.py",
    "chars": 14395,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nTrain and eval functions used in main.py\n\"\"\"\n"
  },
  {
    "path": "src/trackformer/models/__init__.py",
    "chars": 4804,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport torch\n\nfrom .backbone import build_backbon"
  },
  {
    "path": "src/trackformer/models/backbone.py",
    "chars": 4987,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nBackbone modules.\n\"\"\"\nfrom typing import Dict"
  },
  {
    "path": "src/trackformer/models/deformable_detr.py",
    "chars": 15257,
    "preview": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseT"
  },
  {
    "path": "src/trackformer/models/deformable_transformer.py",
    "chars": 20692,
    "preview": "# ------------------------------------------------------------------------\n# Deformable DETR\n# Copyright (c) 2020 SenseT"
  },
  {
    "path": "src/trackformer/models/detr.py",
    "chars": 22979,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nDETR model and criterion classes.\n\"\"\"\nimport "
  },
  {
    "path": "src/trackformer/models/detr_segmentation.py",
    "chars": 15405,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nThis file provides the definition of the conv"
  },
  {
    "path": "src/trackformer/models/detr_tracking.py",
    "chars": 13729,
    "preview": "import math\nimport random\nfrom contextlib import nullcontext\n\nimport torch\nimport torch.nn as nn\n\nfrom ..util import box"
  },
  {
    "path": "src/trackformer/models/matcher.py",
    "chars": 6311,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nModules to compute the matching cost and solv"
  },
  {
    "path": "src/trackformer/models/ops/.gitignore",
    "chars": 30,
    "preview": "build\ndist\n*egg-info\n*.linux*\n"
  },
  {
    "path": "src/trackformer/models/ops/functions/__init__.py",
    "chars": 119,
    "preview": "from .ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch, ms_deform_attn_core_pytorch_mot\r\n\r\n"
  },
  {
    "path": "src/trackformer/models/ops/functions/ms_deform_attn_func.py",
    "chars": 4380,
    "preview": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ im"
  },
  {
    "path": "src/trackformer/models/ops/make.sh",
    "chars": 31,
    "preview": "python setup.py build install\r\n"
  },
  {
    "path": "src/trackformer/models/ops/modules/__init__.py",
    "chars": 42,
    "preview": "from .ms_deform_attn import MSDeformAttn\r\n"
  },
  {
    "path": "src/trackformer/models/ops/modules/ms_deform_attn.py",
    "chars": 4643,
    "preview": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ im"
  },
  {
    "path": "src/trackformer/models/ops/setup.py",
    "chars": 2015,
    "preview": "#!/usr/bin/env python\n\nimport os\nimport glob\n\nimport torch\n\nfrom torch.utils.cpp_extension import CUDA_HOME\nfrom torch.u"
  },
  {
    "path": "src/trackformer/models/ops/src/cpu/ms_deform_attn_cpu.cpp",
    "chars": 653,
    "preview": "#include <vector>\r\n\r\n#include <ATen/ATen.h>\r\n#include <ATen/cuda/CUDAContext.h>\r\n\r\n\r\nat::Tensor\r\nms_deform_attn_cpu_forw"
  },
  {
    "path": "src/trackformer/models/ops/src/cpu/ms_deform_attn_cpu.h",
    "chars": 528,
    "preview": "#pragma once\r\n#include <torch/extension.h>\r\n\r\nat::Tensor\r\nms_deform_attn_cpu_forward(\r\n    const at::Tensor &value, \r\n  "
  },
  {
    "path": "src/trackformer/models/ops/src/cuda/ms_deform_attn_cuda.cu",
    "chars": 7793,
    "preview": "#include <vector>\r\n#include \"cuda/ms_deform_im2col_cuda.cuh\"\r\n\r\n#include <ATen/ATen.h>\r\n#include <ATen/cuda/CUDAContext."
  },
  {
    "path": "src/trackformer/models/ops/src/cuda/ms_deform_attn_cuda.h",
    "chars": 526,
    "preview": "#pragma once\r\n#include <torch/extension.h>\r\n\r\nat::Tensor ms_deform_attn_cuda_forward(\r\n    const at::Tensor &value, \r\n  "
  },
  {
    "path": "src/trackformer/models/ops/src/cuda/ms_deform_im2col_cuda.cuh",
    "chars": 22460,
    "preview": "#include <cstdio>\r\n#include <algorithm>\r\n#include <cstring>\r\n\r\n#include <ATen/ATen.h>\r\n#include <ATen/cuda/CUDAContext.h"
  },
  {
    "path": "src/trackformer/models/ops/src/ms_deform_attn.h",
    "chars": 1218,
    "preview": "#pragma once\r\n\r\n#include \"cpu/ms_deform_attn_cpu.h\"\r\n\r\n#ifdef WITH_CUDA\r\n#include \"cuda/ms_deform_attn_cuda.h\"\r\n#endif\r\n"
  },
  {
    "path": "src/trackformer/models/ops/src/vision.cpp",
    "chars": 257,
    "preview": "\r\n#include \"ms_deform_attn.h\"\r\n\r\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\r\n  m.def(\"ms_deform_attn_forward\", &ms_defor"
  },
  {
    "path": "src/trackformer/models/ops/test.py",
    "chars": 7051,
    "preview": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ im"
  },
  {
    "path": "src/trackformer/models/ops/test_double_precision.py",
    "chars": 7186,
    "preview": "#!/usr/bin/env python\r\nfrom __future__ import absolute_import\r\nfrom __future__ import print_function\r\nfrom __future__ im"
  },
  {
    "path": "src/trackformer/models/position_encoding.py",
    "chars": 6845,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nVarious positional encodings for the transfor"
  },
  {
    "path": "src/trackformer/models/tracker.py",
    "chars": 23373,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nTracker which achieves MOT with the provided "
  },
  {
    "path": "src/trackformer/models/transformer.py",
    "chars": 13716,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nDETR Transformer class.\n\nCopy-paste from torc"
  },
  {
    "path": "src/trackformer/util/__init__.py",
    "chars": 71,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n"
  },
  {
    "path": "src/trackformer/util/box_ops.py",
    "chars": 2561,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nUtilities for bounding box manipulation and G"
  },
  {
    "path": "src/trackformer/util/misc.py",
    "chars": 19295,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMisc functions, including distributed helpers"
  },
  {
    "path": "src/trackformer/util/plot_utils.py",
    "chars": 4780,
    "preview": "\"\"\"\nPlotting utilities to visualize training logs.\n\"\"\"\nfrom pathlib import Path, PurePath\n\nimport matplotlib.pyplot as p"
  },
  {
    "path": "src/trackformer/util/track_utils.py",
    "chars": 14682,
    "preview": "#########################################\n# Still ugly file with helper functions #\n####################################"
  },
  {
    "path": "src/trackformer/vis.py",
    "chars": 11159,
    "preview": "import copy\nimport logging\n\nimport matplotlib.patches as mpatches\nimport numpy as np\nimport torch\nimport torchvision.tra"
  },
  {
    "path": "src/train.py",
    "chars": 15061,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport datetime\nimport os\nimport random\nimport ti"
  }
]

About this extraction

This page contains the full source code of the timmeinhardt/trackformer GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 91 files (458.1 KB), approximately 118.7k tokens, and a symbol index with 422 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
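The file index above is a JSON array of objects, each with `path`, `chars`, and `preview` keys. As a minimal sketch (the inline sample entries are abbreviated copies of real index rows, not the full index), the array can be loaded and filtered with the standard library:

```python
import json

# Sketch: load a GitExtract-style file index and list Python sources
# by size. The sample below abbreviates two real entries from the
# index above; "preview" strings are truncated here for brevity.
index_json = '''
[
  {"path": "src/trackformer/models/detr.py", "chars": 22979, "preview": "..."},
  {"path": "src/trackformer/models/tracker.py", "chars": 23373, "preview": "..."},
  {"path": "src/trackformer/models/ops/make.sh", "chars": 31, "preview": "..."}
]
'''

entries = json.loads(index_json)

# Keep only .py files, largest first, using the "chars" size field.
py_files = sorted(
    (e for e in entries if e["path"].endswith(".py")),
    key=lambda e: e["chars"],
    reverse=True,
)

for e in py_files:
    print(f'{e["chars"]:>6}  {e["path"]}')
```

The same pattern extends to any other filter, e.g. selecting only files under `src/trackformer/models/ops/` before feeding them to an LLM.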

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
