Full Code of ymzis69/HybridSORT for AI

Repository: ymzis69/HybridSORT
Branch: master
Commit: 396f8d30db13
Files: 878
Total size: 18.6 MB

Directory structure:
gitextract_y7pd_xks/

├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── TrackEval/
│   ├── .gitignore
│   ├── LICENSE
│   ├── Readme.md
│   ├── docs/
│   │   ├── BDD100k-format.txt
│   │   ├── DAVIS-format.txt
│   │   ├── How_To/
│   │   │   └── Add_a_new_metric.md
│   │   ├── KITTI-format.txt
│   │   ├── MOTChallenge-Official/
│   │   │   └── Readme.md
│   │   ├── MOTChallenge-format.txt
│   │   ├── MOTS-format.txt
│   │   ├── OpenWorldTracking-Official/
│   │   │   └── Readme.md
│   │   ├── RobMOTS-Official/
│   │   │   └── Readme.md
│   │   ├── TAO-format.txt
│   │   └── YouTube-VIS-format.txt
│   ├── minimum_requirements.txt
│   ├── pyproject.toml
│   ├── requirements.txt
│   ├── scripts/
│   │   ├── comparison_plots.py
│   │   ├── run_bdd.py
│   │   ├── run_davis.py
│   │   ├── run_headtracking_challenge.py
│   │   ├── run_kitti.py
│   │   ├── run_kitti_mots.py
│   │   ├── run_mot_challenge.py
│   │   ├── run_mots_challenge.py
│   │   ├── run_rob_mots.py
│   │   ├── run_tao.py
│   │   ├── run_tao_ow.py
│   │   └── run_youtube_vis.py
│   ├── setup.cfg
│   ├── setup.py
│   ├── tests/
│   │   ├── test_all_quick.py
│   │   ├── test_davis.py
│   │   ├── test_metrics.py
│   │   ├── test_mot17.py
│   │   └── test_mots.py
│   └── trackeval/
│       ├── __init__.py
│       ├── _timing.py
│       ├── baselines/
│       │   ├── __init__.py
│       │   ├── baseline_utils.py
│       │   ├── non_overlap.py
│       │   ├── pascal_colormap.py
│       │   ├── stp.py
│       │   ├── thresholder.py
│       │   └── vizualize.py
│       ├── datasets/
│       │   ├── __init__.py
│       │   ├── _base_dataset.py
│       │   ├── bdd100k.py
│       │   ├── davis.py
│       │   ├── head_tracking_challenge.py
│       │   ├── kitti_2d_box.py
│       │   ├── kitti_mots.py
│       │   ├── mot_challenge_2d_box.py
│       │   ├── mots_challenge.py
│       │   ├── rob_mots.py
│       │   ├── rob_mots_classmap.py
│       │   ├── run_rob_mots.py
│       │   ├── tao.py
│       │   ├── tao_ow.py
│       │   └── youtube_vis.py
│       ├── eval.py
│       ├── metrics/
│       │   ├── __init__.py
│       │   ├── _base_metric.py
│       │   ├── clear.py
│       │   ├── count.py
│       │   ├── hota.py
│       │   ├── identity.py
│       │   ├── ideucl.py
│       │   ├── j_and_f.py
│       │   ├── track_map.py
│       │   └── vace.py
│       ├── plotting.py
│       └── utils.py
├── deploy/
│   ├── ONNXRuntime/
│   │   ├── README.md
│   │   └── onnx_inference.py
│   ├── TensorRT/
│   │   ├── cpp/
│   │   │   ├── CMakeLists.txt
│   │   │   ├── README.md
│   │   │   ├── include/
│   │   │   │   ├── BYTETracker.h
│   │   │   │   ├── STrack.h
│   │   │   │   ├── dataType.h
│   │   │   │   ├── kalmanFilter.h
│   │   │   │   ├── lapjv.h
│   │   │   │   └── logging.h
│   │   │   └── src/
│   │   │       ├── BYTETracker.cpp
│   │   │       ├── STrack.cpp
│   │   │       ├── bytetrack.cpp
│   │   │       ├── kalmanFilter.cpp
│   │   │       ├── lapjv.cpp
│   │   │       └── utils.cpp
│   │   └── python/
│   │       └── README.md
│   ├── ncnn/
│   │   └── cpp/
│   │       ├── CMakeLists.txt
│   │       ├── README.md
│   │       ├── include/
│   │       │   ├── BYTETracker.h
│   │       │   ├── STrack.h
│   │       │   ├── dataType.h
│   │       │   ├── kalmanFilter.h
│   │       │   └── lapjv.h
│   │       └── src/
│   │           ├── BYTETracker.cpp
│   │           ├── STrack.cpp
│   │           ├── bytetrack.cpp
│   │           ├── kalmanFilter.cpp
│   │           ├── lapjv.cpp
│   │           └── utils.cpp
│   └── scripts/
│       ├── export_onnx.py
│       └── trt.py
├── docs/
│   └── DEPLOY.md
├── exps/
│   ├── default/
│   │   ├── nano.py
│   │   ├── yolov3.py
│   │   ├── yolox_l.py
│   │   ├── yolox_m.py
│   │   ├── yolox_s.py
│   │   ├── yolox_tiny.py
│   │   └── yolox_x.py
│   ├── example/
│   │   └── mot/
│   │       ├── yolox_dancetrack_test.py
│   │       ├── yolox_dancetrack_test_hybrid_sort.py
│   │       ├── yolox_dancetrack_test_hybrid_sort_reid.py
│   │       ├── yolox_dancetrack_val.py
│   │       ├── yolox_dancetrack_val_hybrid_sort.py
│   │       ├── yolox_dancetrack_val_hybrid_sort_reid.py
│   │       ├── yolox_l_mix_det.py
│   │       ├── yolox_m_mix_det.py
│   │       ├── yolox_nano_mix_det.py
│   │       ├── yolox_s_mix_det.py
│   │       ├── yolox_tiny_mix_det.py
│   │       ├── yolox_x_ablation.py
│   │       ├── yolox_x_ablation_hybrid_sort.py
│   │       ├── yolox_x_ablation_hybrid_sort_reid.py
│   │       ├── yolox_x_ch.py
│   │       ├── yolox_x_mix_det.py
│   │       ├── yolox_x_mix_det_hybrid_sort.py
│   │       ├── yolox_x_mix_det_hybrid_sort_reid.py
│   │       ├── yolox_x_mix_det_train.py
│   │       ├── yolox_x_mix_mot20_ch.py
│   │       ├── yolox_x_mix_mot20_ch_hybrid_sort.py
│   │       ├── yolox_x_mix_mot20_ch_hybrid_sort_reid.py
│   │       ├── yolox_x_mix_mot20_ch_train.py
│   │       ├── yolox_x_mix_mot20_ch_valhalf.py
│   │       ├── yolox_x_mot17_ablation_half_train.py
│   │       ├── yolox_x_mot17_half.py
│   │       └── yolox_x_mot17_train.py
│   └── permatrack_kitti_test/
│       ├── 0000.txt
│       ├── 0001.txt
│       ├── 0002.txt
│       ├── 0003.txt
│       ├── 0004.txt
│       ├── 0005.txt
│       ├── 0006.txt
│       ├── 0007.txt
│       ├── 0008.txt
│       ├── 0009.txt
│       ├── 0010.txt
│       ├── 0011.txt
│       ├── 0012.txt
│       ├── 0013.txt
│       ├── 0014.txt
│       ├── 0015.txt
│       ├── 0016.txt
│       ├── 0017.txt
│       ├── 0018.txt
│       ├── 0019.txt
│       ├── 0020.txt
│       ├── 0021.txt
│       ├── 0022.txt
│       ├── 0023.txt
│       ├── 0024.txt
│       ├── 0025.txt
│       ├── 0026.txt
│       ├── 0027.txt
│       └── 0028.txt
├── fast_reid/
│   ├── CHANGELOG.md
│   ├── GETTING_STARTED.md
│   ├── INSTALL.md
│   ├── LICENSE
│   ├── MODEL_ZOO.md
│   ├── README.md
│   ├── __init__.py
│   ├── configs/
│   │   ├── Base-AGW.yml
│   │   ├── Base-MGN.yml
│   │   ├── Base-SBS.yml
│   │   ├── Base-bagtricks.yml
│   │   ├── CUHKSYSU/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── CUHKSYSU_DanceTrack/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── mgn_R50-ibn_64d.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── CUHKSYSU_MOT17/
│   │   │   └── sbs_S50.yml
│   │   ├── CUHKSYSU_MOT20/
│   │   │   └── sbs_S50.yml
│   │   ├── DanceTrack/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── DukeMTMC/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── MOT17/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── MOT20/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── MSMT17/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── Market1501/
│   │   │   ├── AGW_R101-ibn.yml
│   │   │   ├── AGW_R50-ibn.yml
│   │   │   ├── AGW_R50.yml
│   │   │   ├── AGW_S50.yml
│   │   │   ├── bagtricks_R101-ibn.yml
│   │   │   ├── bagtricks_R50-ibn.yml
│   │   │   ├── bagtricks_R50.yml
│   │   │   ├── bagtricks_S50.yml
│   │   │   ├── bagtricks_vit.yml
│   │   │   ├── mgn_R50-ibn.yml
│   │   │   ├── sbs_R101-ibn.yml
│   │   │   ├── sbs_R50-ibn.yml
│   │   │   ├── sbs_R50.yml
│   │   │   └── sbs_S50.yml
│   │   ├── VERIWild/
│   │   │   └── bagtricks_R50-ibn.yml
│   │   ├── VeRi/
│   │   │   └── sbs_R50-ibn.yml
│   │   └── VehicleID/
│   │       └── bagtricks_R50-ibn.yml
│   ├── datasets/
│   │   ├── generate_cuhksysu_dance_patches.py
│   │   └── generate_mot_patches.py
│   ├── demo/
│   │   ├── README.md
│   │   ├── demo.py
│   │   ├── plot_roc_with_pickle.py
│   │   ├── predictor.py
│   │   └── visualize_result.py
│   ├── docker/
│   │   ├── Dockerfile
│   │   └── README.md
│   ├── docs/
│   │   ├── .gitignore
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── _static/
│   │   │   └── css/
│   │   │       └── custom.css
│   │   ├── conf.py
│   │   ├── index.rst
│   │   ├── modules/
│   │   │   ├── checkpoint.rst
│   │   │   ├── config.rst
│   │   │   ├── data.rst
│   │   │   ├── data_transforms.rst
│   │   │   ├── engine.rst
│   │   │   ├── evaluation.rst
│   │   │   ├── index.rst
│   │   │   ├── layers.rst
│   │   │   ├── modeling.rst
│   │   │   ├── solver.rst
│   │   │   └── utils.rst
│   │   └── requirements.txt
│   ├── fast_reid_interfece.py
│   ├── fastreid/
│   │   ├── __init__.py
│   │   ├── config/
│   │   │   ├── __init__.py
│   │   │   ├── config.py
│   │   │   └── defaults.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── build.py
│   │   │   ├── common.py
│   │   │   ├── data_utils.py
│   │   │   ├── datasets/
│   │   │   │   ├── AirportALERT.py
│   │   │   │   ├── __init__.py
│   │   │   │   ├── bases.py
│   │   │   │   ├── caviara.py
│   │   │   │   ├── cuhk03.py
│   │   │   │   ├── cuhksysu.py
│   │   │   │   ├── cuhksysu_dancetrack.py
│   │   │   │   ├── cuhksysu_mot17.py
│   │   │   │   ├── cuhksysu_mot20.py
│   │   │   │   ├── dancetrack.py
│   │   │   │   ├── dukemtmcreid.py
│   │   │   │   ├── grid.py
│   │   │   │   ├── iLIDS.py
│   │   │   │   ├── lpw.py
│   │   │   │   ├── market1501.py
│   │   │   │   ├── mot17.py
│   │   │   │   ├── mot20.py
│   │   │   │   ├── mot20_.py
│   │   │   │   ├── msmt17.py
│   │   │   │   ├── pes3d.py
│   │   │   │   ├── pku.py
│   │   │   │   ├── prai.py
│   │   │   │   ├── prid.py
│   │   │   │   ├── saivt.py
│   │   │   │   ├── sensereid.py
│   │   │   │   ├── shinpuhkan.py
│   │   │   │   ├── sysu_mm.py
│   │   │   │   ├── thermalworld.py
│   │   │   │   ├── vehicleid.py
│   │   │   │   ├── veri.py
│   │   │   │   ├── veriwild.py
│   │   │   │   ├── viper.py
│   │   │   │   └── wildtracker.py
│   │   │   ├── samplers/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── data_sampler.py
│   │   │   │   ├── imbalance_sampler.py
│   │   │   │   └── triplet_sampler.py
│   │   │   └── transforms/
│   │   │       ├── __init__.py
│   │   │       ├── autoaugment.py
│   │   │       ├── build.py
│   │   │       ├── functional.py
│   │   │       └── transforms.py
│   │   ├── engine/
│   │   │   ├── __init__.py
│   │   │   ├── defaults.py
│   │   │   ├── hooks.py
│   │   │   ├── launch.py
│   │   │   └── train_loop.py
│   │   ├── evaluation/
│   │   │   ├── __init__.py
│   │   │   ├── clas_evaluator.py
│   │   │   ├── evaluator.py
│   │   │   ├── query_expansion.py
│   │   │   ├── rank.py
│   │   │   ├── rank_cylib/
│   │   │   │   ├── Makefile
│   │   │   │   ├── __init__.py
│   │   │   │   ├── rank_cy.c
│   │   │   │   ├── rank_cy.pyx
│   │   │   │   ├── roc_cy.c
│   │   │   │   ├── roc_cy.pyx
│   │   │   │   ├── setup.py
│   │   │   │   └── test_cython.py
│   │   │   ├── reid_evaluation.py
│   │   │   ├── rerank.py
│   │   │   ├── roc.py
│   │   │   └── testing.py
│   │   ├── layers/
│   │   │   ├── __init__.py
│   │   │   ├── activation.py
│   │   │   ├── any_softmax.py
│   │   │   ├── batch_norm.py
│   │   │   ├── context_block.py
│   │   │   ├── drop.py
│   │   │   ├── frn.py
│   │   │   ├── gather_layer.py
│   │   │   ├── helpers.py
│   │   │   ├── non_local.py
│   │   │   ├── pooling.py
│   │   │   ├── se_layer.py
│   │   │   ├── splat.py
│   │   │   └── weight_init.py
│   │   ├── modeling/
│   │   │   ├── __init__.py
│   │   │   ├── backbones/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build.py
│   │   │   │   ├── mobilenet.py
│   │   │   │   ├── mobilenetv3.py
│   │   │   │   ├── osnet.py
│   │   │   │   ├── regnet/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── config.py
│   │   │   │   │   ├── effnet/
│   │   │   │   │   │   ├── EN-B0_dds_8gpu.yaml
│   │   │   │   │   │   ├── EN-B1_dds_8gpu.yaml
│   │   │   │   │   │   ├── EN-B2_dds_8gpu.yaml
│   │   │   │   │   │   ├── EN-B3_dds_8gpu.yaml
│   │   │   │   │   │   ├── EN-B4_dds_8gpu.yaml
│   │   │   │   │   │   └── EN-B5_dds_8gpu.yaml
│   │   │   │   │   ├── effnet.py
│   │   │   │   │   ├── regnet.py
│   │   │   │   │   ├── regnetx/
│   │   │   │   │   │   ├── RegNetX-1.6GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-12GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-16GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-200MF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-3.2GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-32GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-4.0GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-400MF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-6.4GF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-600MF_dds_8gpu.yaml
│   │   │   │   │   │   ├── RegNetX-8.0GF_dds_8gpu.yaml
│   │   │   │   │   │   └── RegNetX-800MF_dds_8gpu.yaml
│   │   │   │   │   └── regnety/
│   │   │   │   │       ├── RegNetY-1.6GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-12GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-16GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-200MF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-3.2GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-32GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-4.0GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-400MF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-6.4GF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-600MF_dds_8gpu.yaml
│   │   │   │   │       ├── RegNetY-8.0GF_dds_8gpu.yaml
│   │   │   │   │       └── RegNetY-800MF_dds_8gpu.yaml
│   │   │   │   ├── repvgg.py
│   │   │   │   ├── resnest.py
│   │   │   │   ├── resnet.py
│   │   │   │   ├── resnext.py
│   │   │   │   ├── shufflenet.py
│   │   │   │   └── vision_transformer.py
│   │   │   ├── heads/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build.py
│   │   │   │   ├── clas_head.py
│   │   │   │   └── embedding_head.py
│   │   │   ├── losses/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── circle_loss.py
│   │   │   │   ├── cross_entroy_loss.py
│   │   │   │   ├── focal_loss.py
│   │   │   │   ├── triplet_loss.py
│   │   │   │   └── utils.py
│   │   │   └── meta_arch/
│   │   │       ├── __init__.py
│   │   │       ├── baseline.py
│   │   │       ├── build.py
│   │   │       ├── distiller.py
│   │   │       ├── mgn.py
│   │   │       └── moco.py
│   │   ├── solver/
│   │   │   ├── __init__.py
│   │   │   ├── build.py
│   │   │   ├── lr_scheduler.py
│   │   │   └── optim/
│   │   │       ├── __init__.py
│   │   │       ├── lamb.py
│   │   │       ├── radam.py
│   │   │       └── swa.py
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── checkpoint.py
│   │       ├── collect_env.py
│   │       ├── comm.py
│   │       ├── compute_dist.py
│   │       ├── env.py
│   │       ├── events.py
│   │       ├── faiss_utils.py
│   │       ├── file_io.py
│   │       ├── history_buffer.py
│   │       ├── logger.py
│   │       ├── params.py
│   │       ├── precision_bn.py
│   │       ├── registry.py
│   │       ├── summary.py
│   │       ├── timer.py
│   │       └── visualizer.py
│   ├── projects/
│   │   ├── CrossDomainReID/
│   │   │   └── README.md
│   │   ├── DG-ReID/
│   │   │   └── README.md
│   │   ├── FastAttr/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-attribute.yml
│   │   │   │   ├── dukemtmc.yml
│   │   │   │   ├── market1501.yml
│   │   │   │   └── pa100.yml
│   │   │   ├── fastattr/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── attr_dataset.py
│   │   │   │   ├── attr_evaluation.py
│   │   │   │   ├── config.py
│   │   │   │   ├── datasets/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── bases.py
│   │   │   │   │   ├── dukemtmcattr.py
│   │   │   │   │   ├── market1501attr.py
│   │   │   │   │   └── pa100k.py
│   │   │   │   └── modeling/
│   │   │   │       ├── __init__.py
│   │   │   │       ├── attr_baseline.py
│   │   │   │       ├── attr_head.py
│   │   │   │       └── bce_loss.py
│   │   │   └── train_net.py
│   │   ├── FastClas/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   └── base-clas.yaml
│   │   │   ├── fastclas/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── bee_ant.py
│   │   │   │   ├── dataset.py
│   │   │   │   └── trainer.py
│   │   │   └── train_net.py
│   │   ├── FastDistill/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-kd.yml
│   │   │   │   ├── kd-sbs_r101ibn-sbs_r34.yml
│   │   │   │   ├── sbs_r101ibn.yml
│   │   │   │   └── sbs_r34.yml
│   │   │   ├── fastdistill/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── overhaul.py
│   │   │   │   └── resnet_distill.py
│   │   │   └── train_net.py
│   │   ├── FastFace/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── face_base.yml
│   │   │   │   ├── r101_ir.yml
│   │   │   │   └── r50_ir.yml
│   │   │   ├── fastface/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── datasets/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── ms1mv2.py
│   │   │   │   │   └── test_dataset.py
│   │   │   │   ├── face_data.py
│   │   │   │   ├── face_evaluator.py
│   │   │   │   ├── modeling/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── face_baseline.py
│   │   │   │   │   ├── face_head.py
│   │   │   │   │   ├── iresnet.py
│   │   │   │   │   └── partial_fc.py
│   │   │   │   ├── pfc_checkpointer.py
│   │   │   │   ├── trainer.py
│   │   │   │   ├── utils_amp.py
│   │   │   │   └── verification.py
│   │   │   └── train_net.py
│   │   ├── FastRT/
│   │   │   ├── .gitignore
│   │   │   ├── CMakeLists.txt
│   │   │   ├── README.md
│   │   │   ├── demo/
│   │   │   │   ├── CMakeLists.txt
│   │   │   │   └── inference.cpp
│   │   │   ├── docker/
│   │   │   │   ├── trt7cu100/
│   │   │   │   │   └── Dockerfile
│   │   │   │   └── trt7cu102/
│   │   │   │       └── Dockerfile
│   │   │   ├── fastrt/
│   │   │   │   ├── CMakeLists.txt
│   │   │   │   ├── backbones/
│   │   │   │   │   ├── CMakeLists.txt
│   │   │   │   │   └── sbs_resnet.cpp
│   │   │   │   ├── common/
│   │   │   │   │   ├── calibrator.cpp
│   │   │   │   │   └── utils.cpp
│   │   │   │   ├── engine/
│   │   │   │   │   ├── CMakeLists.txt
│   │   │   │   │   └── InferenceEngine.cpp
│   │   │   │   ├── factory/
│   │   │   │   │   ├── CMakeLists.txt
│   │   │   │   │   └── factory.cpp
│   │   │   │   ├── heads/
│   │   │   │   │   ├── CMakeLists.txt
│   │   │   │   │   └── embedding_head.cpp
│   │   │   │   ├── layers/
│   │   │   │   │   ├── CMakeLists.txt
│   │   │   │   │   ├── layers.cpp
│   │   │   │   │   ├── poolingLayerRT.cpp
│   │   │   │   │   └── poolingLayerRT.h
│   │   │   │   └── meta_arch/
│   │   │   │       ├── CMakeLists.txt
│   │   │   │       ├── baseline.cpp
│   │   │   │       └── model.cpp
│   │   │   ├── include/
│   │   │   │   └── fastrt/
│   │   │   │       ├── IPoolingLayerRT.h
│   │   │   │       ├── InferenceEngine.h
│   │   │   │       ├── baseline.h
│   │   │   │       ├── calibrator.h
│   │   │   │       ├── config.h.in
│   │   │   │       ├── cuda_utils.h
│   │   │   │       ├── embedding_head.h
│   │   │   │       ├── factory.h
│   │   │   │       ├── holder.h
│   │   │   │       ├── layers.h
│   │   │   │       ├── logging.h
│   │   │   │       ├── model.h
│   │   │   │       ├── module.h
│   │   │   │       ├── sbs_resnet.h
│   │   │   │       ├── struct.h
│   │   │   │       └── utils.h
│   │   │   ├── pybind_interface/
│   │   │   │   ├── CMakeLists.txt
│   │   │   │   ├── docker/
│   │   │   │   │   ├── trt7cu100/
│   │   │   │   │   │   └── Dockerfile
│   │   │   │   │   └── trt7cu102_torch160/
│   │   │   │   │       └── Dockerfile
│   │   │   │   ├── market_benchmark.py
│   │   │   │   ├── reid.cpp
│   │   │   │   └── test.py
│   │   │   ├── third_party/
│   │   │   │   └── cnpy/
│   │   │   │       ├── CMakeLists.txt
│   │   │   │       ├── LICENSE
│   │   │   │       ├── README.md
│   │   │   │       ├── cnpy.cpp
│   │   │   │       ├── cnpy.h
│   │   │   │       ├── example1.cpp
│   │   │   │       ├── mat2npz
│   │   │   │       ├── npy2mat
│   │   │   │       └── npz2mat
│   │   │   └── tools/
│   │   │       ├── How_to_Generate.md
│   │   │       └── gen_wts.py
│   │   ├── FastRetri/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── base-image_retri.yml
│   │   │   │   ├── cars.yml
│   │   │   │   ├── cub.yml
│   │   │   │   ├── inshop.yml
│   │   │   │   └── sop.yml
│   │   │   ├── fastretri/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── datasets.py
│   │   │   │   └── retri_evaluator.py
│   │   │   └── train_net.py
│   │   ├── FastTune/
│   │   │   ├── README.md
│   │   │   ├── autotuner/
│   │   │   │   ├── __init__.py
│   │   │   │   └── tune_hooks.py
│   │   │   ├── configs/
│   │   │   │   └── search_trial.yml
│   │   │   └── tune_net.py
│   │   ├── HAA/
│   │   │   └── Readme.md
│   │   ├── NAIC20/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-naic.yml
│   │   │   │   ├── nest101-base.yml
│   │   │   │   ├── r34-ibn.yml
│   │   │   │   └── submit.yml
│   │   │   ├── label.txt
│   │   │   ├── naic/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── naic_dataset.py
│   │   │   │   └── naic_evaluator.py
│   │   │   ├── naic20r2_train_list_clean.txt
│   │   │   ├── train_list_clean.txt
│   │   │   ├── train_net.py
│   │   │   ├── val_gallery.txt
│   │   │   └── val_query.txt
│   │   ├── PartialReID/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   └── partial_market.yml
│   │   │   ├── partialreid/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── dsr_distance.py
│   │   │   │   ├── dsr_evaluation.py
│   │   │   │   ├── dsr_head.py
│   │   │   │   ├── partial_dataset.py
│   │   │   │   └── partialbaseline.py
│   │   │   └── train_net.py
│   │   └── README.md
│   ├── tests/
│   │   ├── __init__.py
│   │   ├── dataset_test.py
│   │   ├── feature_align.py
│   │   ├── interp_test.py
│   │   ├── lr_scheduler_test.py
│   │   ├── model_test.py
│   │   ├── sampler_test.py
│   │   └── test_repvgg.py
│   └── tools/
│       ├── deploy/
│       │   ├── Caffe/
│       │   │   ├── ReadMe.md
│       │   │   ├── __init__.py
│       │   │   ├── caffe.proto
│       │   │   ├── caffe_lmdb.py
│       │   │   ├── caffe_net.py
│       │   │   ├── caffe_pb2.py
│       │   │   ├── layer_param.py
│       │   │   └── net.py
│       │   ├── README.md
│       │   ├── caffe_export.py
│       │   ├── caffe_inference.py
│       │   ├── onnx_export.py
│       │   ├── onnx_inference.py
│       │   ├── pytorch_to_caffe.py
│       │   ├── trt_calibrator.py
│       │   ├── trt_export.py
│       │   └── trt_inference.py
│       ├── plain_train_net.py
│       └── train_net.py
├── motmetrics/
│   ├── __init__.py
│   ├── apps/
│   │   ├── __init__.py
│   │   ├── eval_detrac.py
│   │   ├── eval_motchallenge.py
│   │   ├── evaluateTracking.py
│   │   ├── example.py
│   │   └── list_metrics.py
│   ├── data/
│   │   ├── TUD-Campus/
│   │   │   ├── gt.txt
│   │   │   └── test.txt
│   │   ├── TUD-Stadtmitte/
│   │   │   ├── gt.txt
│   │   │   └── test.txt
│   │   └── iotest/
│   │       ├── detrac.mat
│   │       ├── detrac.xml
│   │       ├── motchallenge.txt
│   │       └── vatic.txt
│   ├── distances.py
│   ├── io.py
│   ├── lap.py
│   ├── math_util.py
│   ├── metrics.py
│   ├── mot.py
│   ├── preprocess.py
│   ├── tests/
│   │   ├── __init__.py
│   │   ├── test_distances.py
│   │   ├── test_io.py
│   │   ├── test_issue19.py
│   │   ├── test_lap.py
│   │   ├── test_metrics.py
│   │   ├── test_mot.py
│   │   └── test_utils.py
│   └── utils.py
├── pretrained/
│   └── README.md
├── requirements.txt
├── setup.cfg
├── setup.py
├── tools/
│   ├── convert_bdd_to_kitti.py
│   ├── convert_cityperson_to_coco.py
│   ├── convert_crowdhuman_to_coco.py
│   ├── convert_cuhk_to_coco.py
│   ├── convert_dance_to_coco.py
│   ├── convert_ethz_to_coco.py
│   ├── convert_kitti_to_bdd.py
│   ├── convert_mot17_to_coco.py
│   ├── convert_mot20_to_coco.py
│   ├── convert_video.py
│   ├── demo_track.py
│   ├── gp_interpolation.py
│   ├── interpolation.py
│   ├── mix_data_ablation.py
│   ├── mix_data_test_mot17.py
│   ├── mix_data_test_mot20.py
│   ├── mota.py
│   ├── plot_trajectory.py
│   ├── run_byte.py
│   ├── run_byte_dance.py
│   ├── run_deepsort.py
│   ├── run_deepsort_dance.py
│   ├── run_hybrid_sort_dance.py
│   ├── run_motdt.py
│   ├── run_motdt_dance.py
│   ├── run_ocsort.py
│   ├── run_ocsort_dance.py
│   ├── run_ocsort_public.py
│   ├── run_sort.py
│   ├── run_sort_dance.py
│   ├── train.py
│   ├── txt2video.py
│   └── visualize_results.py
├── trackers/
│   ├── byte_tracker/
│   │   ├── basetrack.py
│   │   ├── byte_tracker.py
│   │   ├── byte_tracker_public.py
│   │   ├── byte_tracker_score.py
│   │   ├── kalman_filter.py
│   │   ├── kalman_filter_score.py
│   │   └── matching.py
│   ├── deepsort_tracker/
│   │   ├── deepsort.py
│   │   ├── deepsort_score.py
│   │   ├── detection.py
│   │   ├── iou_matching.py
│   │   ├── kalman_filter.py
│   │   ├── kalman_filter_score.py
│   │   ├── linear_assignment.py
│   │   ├── linear_assignment_score.py
│   │   ├── reid_model.py
│   │   ├── reid_model_motdt.py
│   │   ├── track.py
│   │   └── track_score.py
│   ├── hybrid_sort_tracker/
│   │   ├── association.py
│   │   ├── hybrid_sort.py
│   │   ├── hybrid_sort_reid.py
│   │   ├── kalmanfilter.py
│   │   ├── kalmanfilter_score.py
│   │   ├── kalmanfilter_score_new.py
│   │   └── new_kalmanfilter.py
│   ├── motdt_tracker/
│   │   ├── basetrack.py
│   │   ├── kalman_filter.py
│   │   ├── kalman_filter_score.py
│   │   ├── matching.py
│   │   ├── motdt_tracker.py
│   │   ├── motdt_tracker_score.py
│   │   └── reid_model.py
│   ├── ocsort_tracker/
│   │   ├── association.py
│   │   ├── kalmanfilter.py
│   │   └── ocsort.py
│   ├── sort_tracker/
│   │   ├── sort.py
│   │   └── sort_score.py
│   └── tracking_utils/
│       ├── evaluation.py
│       ├── io.py
│       └── timer.py
├── trackeval/
│   ├── __init__.py
│   ├── _timing.py
│   ├── baselines/
│   │   ├── __init__.py
│   │   ├── baseline_utils.py
│   │   ├── non_overlap.py
│   │   ├── pascal_colormap.py
│   │   ├── stp.py
│   │   ├── thresholder.py
│   │   └── vizualize.py
│   ├── eval.py
│   ├── metrics/
│   │   ├── __init__.py
│   │   ├── _base_metric.py
│   │   ├── clear.py
│   │   ├── count.py
│   │   ├── hota.py
│   │   ├── identity.py
│   │   ├── ideucl.py
│   │   ├── j_and_f.py
│   │   ├── track_map.py
│   │   └── vace.py
│   ├── plotting.py
│   ├── scripts/
│   │   ├── comparison_plots.py
│   │   ├── run_bdd.py
│   │   ├── run_davis.py
│   │   ├── run_headtracking_challenge.py
│   │   ├── run_kitti.py
│   │   ├── run_kitti_mots.py
│   │   ├── run_mot_challenge.py
│   │   ├── run_mots_challenge.py
│   │   ├── run_rob_mots.py
│   │   ├── run_tao.py
│   │   ├── run_tao_ow.py
│   │   └── run_youtube_vis.py
│   └── utils.py
├── utils/
│   ├── args.py
│   ├── misc.py
│   ├── triplet.py
│   ├── utils.py
│   └── visualize.py
└── yolox/
    ├── __init__.py
    ├── core/
    │   ├── __init__.py
    │   ├── launch.py
    │   └── trainer.py
    ├── data/
    │   ├── __init__.py
    │   ├── data_augment.py
    │   ├── data_prefetcher.py
    │   ├── dataloading.py
    │   ├── datasets/
    │   │   ├── __init__.py
    │   │   ├── datasets_wrapper.py
    │   │   ├── mosaicdetection.py
    │   │   └── mot.py
    │   └── samplers.py
    ├── evaluators/
    │   ├── __init__.py
    │   ├── coco_evaluator.py
    │   ├── evaluation.py
    │   ├── mot_evaluator.py
    │   ├── mot_evaluator_dance.py
    │   └── mot_evaluator_public.py
    ├── exp/
    │   ├── __init__.py
    │   ├── base_exp.py
    │   ├── build.py
    │   └── yolox_base.py
    ├── layers/
    │   ├── __init__.py
    │   ├── csrc/
    │   │   ├── cocoeval/
    │   │   │   ├── cocoeval.cpp
    │   │   │   └── cocoeval.h
    │   │   └── vision.cpp
    │   └── fast_coco_eval_api.py
    ├── models/
    │   ├── __init__.py
    │   ├── darknet.py
    │   ├── losses.py
    │   ├── network_blocks.py
    │   ├── yolo_fpn.py
    │   ├── yolo_head.py
    │   ├── yolo_pafpn.py
    │   └── yolox.py
    └── utils/
        ├── __init__.py
        ├── allreduce_norm.py
        ├── boxes.py
        ├── checkpoint.py
        ├── demo_utils.py
        ├── dist.py
        ├── ema.py
        ├── logger.py
        ├── lr_scheduler.py
        ├── metric.py
        ├── model_utils.py
        ├── setup_env.py
        └── visualize.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

evaldata/
# C extensions
*.so
# Distribution / packaging
.Python
build/
evaldata
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# output
docs/api
.code-workspace.code-workspace
*.pkl
*.npy
*.pth
*.onnx
*.engine
events.out.tfevents*
YOLOX_outputs
visualizations
results/
error_log.txt
.idea/


================================================
FILE: Dockerfile
================================================
FROM nvcr.io/nvidia/tensorrt:21.09-py3

ENV DEBIAN_FRONTEND=noninteractive
ARG USERNAME=user
ARG WORKDIR=/workspace/OC_SORT

RUN apt-get update && apt-get install -y \
        automake autoconf libpng-dev nano python3-pip \
        curl zip unzip libtool swig zlib1g-dev pkg-config \
        python3-mock libpython3-dev libpython3-all-dev \
        g++ gcc cmake make pciutils cpio gosu wget \
        libgtk-3-dev libxtst-dev sudo apt-transport-https \
        build-essential gnupg git xz-utils vim \
        libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
        libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
        openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
        libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
    && sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/noahcao/OC_SORT \
    && cd OC_SORT \
    && mkdir -p YOLOX_outputs/yolox_x_mix_det/track_vis \
    && sed -i 's/torch>=1.7/torch==1.9.1+cu111/g' requirements.txt \
    && sed -i 's/torchvision==0.10.0/torchvision==0.10.1+cu111/g' requirements.txt \
    && sed -i "s/'cuda'/0/g" tools/demo_track.py \
    && pip3 install pip --upgrade \
    && pip3 install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html \
    && python3 setup.py develop \
    && pip3 install cython \
    && pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' \
    && pip3 install cython_bbox gdown \
    && ldconfig \
    && pip cache purge

RUN git clone https://github.com/NVIDIA-AI-IOT/torch2trt \
    && cd torch2trt \
    && git checkout 0400b38123d01cc845364870bdf0a0044ea2b3b2 \
    # https://github.com/NVIDIA-AI-IOT/torch2trt/issues/619
    && wget https://github.com/NVIDIA-AI-IOT/torch2trt/commit/8b9fb46ddbe99c2ddf3f1ed148c97435cbeb8fd3.patch \
    && git apply 8b9fb46ddbe99c2ddf3f1ed148c97435cbeb8fd3.patch \
    && python3 setup.py install

RUN echo "root:root" | chpasswd \
    && adduser --disabled-password --gecos "" "${USERNAME}" \
    && echo "${USERNAME}:${USERNAME}" | chpasswd \
    && echo "%${USERNAME}    ALL=(ALL)   NOPASSWD:    ALL" >> /etc/sudoers.d/${USERNAME} \
    && chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WORKDIR}
WORKDIR ${WORKDIR}


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2021 Yifu Zhang

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# Hybrid-SORT

 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ![test](https://img.shields.io/static/v1?label=By&message=Pytorch&color=red)

#### Hybrid-SORT is a simple and strong multi-object tracker.

> [**Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking**](https://arxiv.org/abs/2308.00783)
> 

## Abstract

Multi-Object Tracking (MOT) aims to detect and associate all desired objects across frames. Most methods accomplish the task by explicitly or implicitly leveraging strong cues (i.e., spatial and appearance information), which exhibit powerful instance-level discrimination. However, when object occlusion and clustering occur, both spatial and appearance information will become ambiguous simultaneously due to the high overlap between objects. In this paper, we demonstrate that this long-standing challenge in MOT can be efficiently and effectively resolved by incorporating weak cues to compensate for strong cues. Along with velocity direction, we introduce the confidence state and height state as potential weak cues. With superior performance, our method still maintains Simple, Online and Real-Time (SORT) characteristics. Furthermore, our method shows strong generalization for diverse trackers and scenarios in a plug-and-play and training-free manner. Significant and consistent improvements are observed when applying our method to 5 different representative trackers. Further, by leveraging both strong and weak cues, our method Hybrid-SORT achieves superior performance on diverse benchmarks, including MOT17, MOT20, and especially DanceTrack where interaction and occlusion are frequent and severe.
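
The core idea can be illustrated with a small sketch: the usual strong cue (IoU) is supplemented by weak cues such as confidence-state and height consistency when building the association cost matrix. The code below is a simplified illustration only, not the exact formulation used in the paper or in `trackers/hybrid_sort_tracker/association.py`; the weight names and the specific similarity terms are assumptions made for clarity.

```python
import numpy as np

def iou_batch(dets, trks):
    """Pairwise IoU between detections and tracks; boxes are [x1, y1, x2, y2]."""
    d = dets[:, None, :]                      # (N, 1, 4)
    t = trks[None, :, :]                      # (1, M, 4)
    xx1 = np.maximum(d[..., 0], t[..., 0])
    yy1 = np.maximum(d[..., 1], t[..., 1])
    xx2 = np.minimum(d[..., 2], t[..., 2])
    yy2 = np.minimum(d[..., 3], t[..., 3])
    inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
    area_d = (d[..., 2] - d[..., 0]) * (d[..., 3] - d[..., 1])
    area_t = (t[..., 2] - t[..., 0]) * (t[..., 3] - t[..., 1])
    return inter / (area_d + area_t - inter + 1e-9)

def hybrid_cost(dets, det_scores, trks, trk_scores, w_conf=1.0, w_height=1.0):
    """Toy association cost: strong cue (IoU) plus weak cues
    (confidence-state consistency and height consistency)."""
    iou = iou_batch(dets, trks)                                          # strong cue
    conf_sim = 1.0 - np.abs(det_scores[:, None] - trk_scores[None, :])   # weak cue: confidence
    h_d = dets[:, 3] - dets[:, 1]
    h_t = trks[:, 3] - trks[:, 1]
    height_sim = (np.minimum(h_d[:, None], h_t[None, :])
                  / np.maximum(h_d[:, None], h_t[None, :]))              # weak cue: height
    return -(iou + w_conf * conf_sim + w_height * height_sim)            # lower cost = better

# Matching then runs the Hungarian algorithm on this cost matrix,
# e.g. scipy.optimize.linear_sum_assignment, exactly as in SORT-style trackers.
```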

### Highlights

- Hybrid-SORT is a **SOTA** heuristic tracker on DanceTrack and performs excellently on the MOT17/MOT20 datasets.
- Maintains **Simple, Online and Real-Time (SORT)** characteristics.
- **Training-free** and **plug-and-play** manner.
- **Strong generalization** for diverse trackers and scenarios.

### Pipeline

<center>
<img src="assets/pipeline.png" width="800"/>
</center>

## News
* [12/09/2023]: Hybrid-SORT is accepted by **AAAI2024**!
* [08/24/2023]: Hybrid-SORT is supported in [yolo_tracking](https://github.com/mikel-brostrom/yolo_tracking). Many thanks to [@mikel-brostrom](https://github.com/mikel-brostrom) for the contribution.
* [08/01/2023]: The [arxiv preprint](https://arxiv.org/abs/2308.00783) of Hybrid-SORT is released.

## Tracking performance

### Results on DanceTrack test set

| Tracker          | HOTA | MOTA | IDF1 | FPS  |
| :--------------- | :--: | :--: | :--: | :--: |
| OC-SORT          | 54.6 | 89.6 | 54.6 | 30.3 |
| Hybrid-SORT      | 62.2 | 91.6 | 63.0 | 27.8 |
| Hybrid-SORT-ReID | 65.7 | 91.8 | 67.4 | 15.5 |

### Results on MOT20 challenge test set

| Tracker          | HOTA | MOTA | IDF1 |
| :--------------- | :--: | :--: | :--: |
| OC-SORT          | 62.1 | 75.5 | 75.9 |
| Hybrid-SORT      | 62.5 | 76.4 | 76.2 |
| Hybrid-SORT-ReID | 63.9 | 76.7 | 78.4 |

### Results on MOT17 challenge test set

| Tracker          | HOTA | MOTA | IDF1 |
| :--------------- | :--: | :--: | :--: |
| OC-SORT          | 63.2 | 78.0 | 77.5 |
| Hybrid-SORT      | 63.6 | 79.3 | 78.4 |
| Hybrid-SORT-ReID | 64.0 | 79.9 | 78.7 |

## Installation

The Hybrid-SORT code is based on [OC-SORT](https://github.com/noahcao/OC_SORT); the optional ReID component is based on [FastReID](https://github.com/JDAI-CV/fast-reid). The code has been tested with Python 3.8 + PyTorch 1.10.0 + torchvision 0.11.0.

Step1. Install Hybrid_SORT

```shell
git clone https://github.com/ymzis69/HybridSORT.git
cd HybridSORT
pip3 install -r requirements.txt
python3 setup.py develop
```

Step2. Install [pycocotools](https://github.com/cocodataset/cocoapi).

```shell
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```

Step3. Others

```shell
pip3 install cython_bbox pandas xmltodict
```

Step4. [optional] FastReID Installation

You can refer to [FastReID Installation](https://github.com/JDAI-CV/fast-reid/blob/master/INSTALL.md).

```shell
pip install -r fast_reid/docs/requirements.txt
```

## Data preparation

**Our data structure is the same as [OC-SORT](https://github.com/noahcao/OC_SORT).** 

1. Download [MOT17](https://motchallenge.net/), [MOT20](https://motchallenge.net/), [CrowdHuman](https://www.crowdhuman.org/), [Cityperson](https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/DATASET_ZOO.md), [ETHZ](https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/DATASET_ZOO.md), [DanceTrack](https://github.com/DanceTrack/DanceTrack), [CUHKSYSU](http://www.ee.cuhk.edu.hk/~xgwang/PS/dataset.html) and put them under <HYBRIDSORT_HOME>/datasets in the following structure (CrowdHuman, Cityperson and ETHZ are not needed if you download YOLOX weights from [ByteTrack](https://github.com/ifzhang/ByteTrack) or [OC-SORT](https://github.com/noahcao/OC_SORT)):

   ```
   datasets
   |——————mot
   |        └——————train
   |        └——————test
   └——————crowdhuman
   |        └——————Crowdhuman_train
   |        └——————Crowdhuman_val
   |        └——————annotation_train.odgt
   |        └——————annotation_val.odgt
   └——————MOT20
   |        └——————train
   |        └——————test
   └——————Cityscapes
   |        └——————images
   |        └——————labels_with_ids
   └——————ETHZ
   |        └——————eth01
   |        └——————...
   |        └——————eth07
   └——————CUHKSYSU
   |        └——————images
   |        └——————labels_with_ids
   └——————dancetrack        
            └——————train
               └——————train_seqmap.txt
            └——————val
               └——————val_seqmap.txt
            └——————test
               └——————test_seqmap.txt
   ```

2. Prepare DanceTrack dataset:

   ```shell
   # replace "dance" with ethz/mot17/mot20/crowdhuman/cityperson/cuhk for others
   python3 tools/convert_dance_to_coco.py
   ```

3. Prepare MOT17/MOT20 dataset. 

   ```shell
   # build mixed training sets for MOT17 and MOT20
   python3 tools/mix_data_{ablation/mot17/mot20}.py
   ```

4. [optional] Prepare ReID datasets:

   ```
   cd <HYBRIDSORT_HOME>
   
   # For MOT17 
   python3 fast_reid/datasets/generate_mot_patches.py --data_path <datasets_dir> --mot 17
   
   # For MOT20
   python3 fast_reid/datasets/generate_mot_patches.py --data_path <datasets_dir> --mot 20
   
   # For DanceTrack
   python3 fast_reid/datasets/generate_cuhksysu_dance_patches.py --data_path <datasets_dir>
   ```

## Model Zoo

Download the trained models and store them in the 'pretrained' folder as follows:

```
<HYBRIDSORT_HOME>/pretrained
```

### Detection Model

We provide some pretrained YOLO-X weights for Hybrid-SORT, which are inherited from [ByteTrack](https://github.com/ifzhang/ByteTrack).

| Dataset         | HOTA | IDF1 | MOTA | Model                                                        |
| --------------- | ---- | ---- | ---- | ------------------------------------------------------------ |
| DanceTrack-val  | 59.3 | 60.6 | 89.5 | [Google Drive](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) |
| DanceTrack-test | 62.2 | 63.0 | 91.6 | [Google Drive](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) |
| MOT17-half-val  | 67.1 | 78.0 | 75.8 | [Google Drive](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) |
| MOT17-test      | 63.6 | 78.7 | 79.9 | [Google Drive](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) |
| MOT20-test      | 62.5 | 78.4 | 76.7 | [Google Drive](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) |


* For more YOLO-X weights, please refer to the model zoo of [ByteTrack](https://github.com/ifzhang/ByteTrack).

### ReID Model

Our ReID models for **MOT17/MOT20** are the same as [BoT-SORT](https://github.com/NirAharon/BOT-SORT); you can download them from [MOT17-SBS-S50](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) and [MOT20-SBS-S50](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing). The ReID model for DanceTrack is trained by ourselves; you can download it from [DanceTrack](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing).

**Notes**:


* [MOT20-SBS-S50](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) is trained by [Deep-OC-SORT](https://github.com/GerardMaggiolino/Deep-OC-SORT), because the weight from BoT-SORT is corrupted. Refer to this [Issue](https://github.com/GerardMaggiolino/Deep-OC-SORT/issues/6).
* The ReID model for DanceTrack is trained by ourselves on both the DanceTrack and CUHKSYSU datasets.

## Training

### Train the Detection Model

You can use Hybrid-SORT without training by adopting existing detectors. In case you want to work with your own detector, we borrow the training guidelines from ByteTrack.

Download the COCO-pretrained YOLOX weight [here](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.0) and put it under *\<HYBRIDSORT_HOME\>/pretrained*.

* **Train ablation model (MOT17 half train and CrowdHuman)**

  ```shell
  python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  ```

* **Train MOT17 test model (MOT17 train, CrowdHuman, Cityperson and ETHZ)**

  ```shell
  python3 tools/train.py -f exps/example/mot/yolox_x_mix_det.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  ```

* **Train MOT20 test model (MOT20 train, CrowdHuman)**

  For MOT20, you need to uncomment some code lines to add box clipping: [[1]](https://github.com/ifzhang/ByteTrack/blob/72cd6dd24083c337a9177e484b12bb2b5b3069a6/yolox/data/data_augment),[[2]](https://github.com/ifzhang/ByteTrack/blob/72cd6dd24083c337a9177e484b12bb2b5b3069a6/yolox/data/datasets/mosaicdetection.py#L122),[[3]](https://github.com/ifzhang/ByteTrack/blob/72cd6dd24083c337a9177e484b12bb2b5b3069a6/yolox/data/datasets/mosaicdetection.py#L217) and [[4]](https://github.com/ifzhang/ByteTrack/blob/72cd6dd24083c337a9177e484b12bb2b5b3069a6/yolox/utils/boxes.py#L115). Then run the command:

  ```shell
  python3 tools/train.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  ```

* **Train on DanceTrack train set**

  ```shell
  python3 tools/train.py -f exps/example/dancetrack/yolox_x.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  ```

* **Train custom dataset**

  First, you need to prepare your dataset in COCO format. You can refer to [MOT-to-COCO](https://github.com/ifzhang/ByteTrack/blob/main/tools/convert_mot17_to_coco.py) or [CrowdHuman-to-COCO](https://github.com/ifzhang/ByteTrack/blob/main/tools/convert_crowdhuman_to_coco.py). Then, you need to create an Exp file for your dataset. You can refer to the [CrowdHuman](https://github.com/ifzhang/ByteTrack/blob/main/exps/example/mot/yolox_x_ch.py) training Exp file; a minimal skeleton is also sketched after this list. Don't forget to modify get_data_loader() and get_eval_loader() in your Exp file. Finally, you can train the YOLOX detector on your dataset by running:

  ```shell
  python3 tools/train.py -f exps/example/mot/your_exp_file.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  ```
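
For reference, a custom Exp file is simply a subclass of the YOLOX `Exp` class that points the data loaders at your COCO-format annotations. The following is a minimal, illustrative skeleton: the class layout follows the existing files under `exps/example/mot/`, but the file name, paths, input size and epoch count are placeholder assumptions. In practice, copy an existing Exp such as `yolox_x_mix_det.py` and adapt its `get_data_loader()`/`get_eval_loader()` instead of writing one from scratch.

```python
# exps/example/mot/yolox_x_my_dataset.py  (hypothetical file name)
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 1                   # MOT-style: a single "person" class
        self.depth = 1.33                      # YOLOX-X depth/width multipliers
        self.width = 1.25
        self.input_size = (800, 1440)          # placeholder; adjust to your GPU memory
        self.test_size = (800, 1440)
        self.max_epoch = 80                    # placeholder training schedule
        self.data_dir = "datasets/my_dataset"  # COCO-format dataset root (assumption)
        self.train_ann = "train.json"          # COCO annotation files (assumption)
        self.val_ann = "val.json"
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

    # Override get_data_loader() and get_eval_loader() to build the MOT/COCO-style
    # loaders, as done in the existing Exp files in this directory -- copy them from
    # e.g. yolox_x_mix_det.py and change only the dataset paths.
```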

### Train the ReID Model

After generating the MOT ReID dataset as described in the 'Data preparation' section, train the ReID model with:

```shell
cd <HYBRIDSORT_HOME>

# For training MOT17 
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT17/sbs_S50.yml MODEL.DEVICE "cuda:0"

# For training MOT20
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT20/sbs_S50.yml MODEL.DEVICE "cuda:0"

# For training DanceTrack, we jointly use CUHKSYSU to train the ReID model for DanceTrack
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/CUHKSYSU_DanceTrack/sbs_S50.yml MODEL.DEVICE "cuda:0"
```

Refer to the [FastReID](https://github.com/JDAI-CV/fast-reid) repository for additional explanations and options.

## Tracking

**Notes**:


* Some parameters are set in the exp/config files rather than passed on the command line. For example, if you run Hybrid-SORT on the dancetrack-val dataset, you should pay attention to lines 35-45 in ```exps/example/mot/yolox_dancetrack_val_hybrid_sort.py```.
* We set ```fp16==False``` on the MOT datasets because fp16 leads to significant result fluctuations.

### DanceTrack

**dancetrack-val dataset**

```
# Hybrid-SORT
python tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_dancetrack_val_hybrid_sort.py -b 1 -d 1 --fp16 --fuse --expn $exp_name 

# Hybrid-SORT-ReID
python tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_dancetrack_val_hybrid_sort_reid.py -b 1 -d 1 --fp16 --fuse --expn $exp_name
```

**dancetrack-test dataset**

```
# Hybrid-SORT
python tools/run_hybrid_sort_dance.py --test -f exps/example/mot/yolox_dancetrack_test_hybrid_sort.py -b 1 -d 1 --fp16 --fuse --expn $exp_name

# Hybrid-SORT-ReID
python tools/run_hybrid_sort_dance.py --test -f exps/example/mot/yolox_dancetrack_test_hybrid_sort_reid.py -b 1 -d 1 --fp16 --fuse --expn $exp_name
```

### MOT20

**MOT20-test dataset**

```
#Hybrid-SORT
python tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_x_mix_mot20_ch_hybrid_sort.py -b 1 -d 1 --fuse --mot20 --expn $exp_name 

#Hybrid-SORT-ReID
python tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_x_mix_mot20_ch_hybrid_sort_reid.py -b 1 -d 1 --fuse --mot20 --expn $exp_name
```

Hybrid-SORT is designed for online tracking, but offline interpolation has been shown to be effective in many cases and is used by other online trackers. If you want to reproduce our results on the **MOT20-test** dataset, please apply linear interpolation over the existing tracking results (a simplified sketch of what this step does is shown after the command):

```shell
# offline post-processing
python3 tools/interpolation.py $result_path $save_path
```
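
For reference, what this step does is fill short gaps in each track by linearly interpolating the missing boxes between the last box before the gap and the first box after it. The snippet below is a simplified sketch of that idea, not the actual `tools/interpolation.py` (which also handles the MOT result format and per-identity splitting); the `max_gap` value is an illustrative assumption.

```python
import numpy as np

def linear_interp_track(track, max_gap=20):
    """track: rows [frame, x, y, w, h, score] for a single identity, sorted by frame.
    Returns the track with missing frames filled by linearly interpolated boxes."""
    out = [track[0]]
    for prev, cur in zip(track[:-1], track[1:]):
        gap = int(cur[0] - prev[0])
        if 1 < gap <= max_gap:
            for step in range(1, gap):
                alpha = step / gap
                out.append(prev + alpha * (cur - prev))  # frame, box and score all interpolate linearly
        out.append(cur)
    return np.stack(out)
```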

### MOT17

**MOT17-val dataset**

```
# Hybrid-SORT
python3 tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_x_ablation_hybrid_sort.py -b 1 -d 1 --fuse --expn $exp_name 

# Hybrid-SORT-ReID
python3 tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_x_ablation_hybrid_sort_reid.py -b 1 -d 1 --fuse --expn  $exp_name 
```

**MOT17-test dataset**

```
# Hybrid-SORT
python3 tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_x_mix_det_hybrid_sort.py -b 1 -d 1 --fuse --expn $exp_name

# Hybrid-SORT-ReID
python3 tools/run_hybrid_sort_dance.py -f exps/example/mot/yolox_x_mix_det_hybrid_sort_reid.py -b 1 -d 1 --fuse --expn $exp_name
```

Hybrid-SORT is designed for online tracking, but offline interpolation has been shown to be effective in many cases and is used by other online trackers. If you want to reproduce our results on the **MOT17-test** dataset, please apply linear interpolation over the existing tracking results:

```shell
# offline post-processing
python3 tools/interpolation.py $result_path $save_path
```

### Demo

Hybrid-SORT, with the parameter settings of the dancetrack-val dataset

```
python3 tools/demo_track.py --demo_type image -f exps/example/mot/yolox_dancetrack_val_hybrid_sort.py -c pretrained/ocsort_dance_model.pth.tar --path ./datasets/dancetrack/val/dancetrack0079/img1 --fp16 --fuse --save_result
```

Hybrid-SORT-ReID, with the parameter settings of the dancetrack-val dataset

```
python3 tools/demo_track.py --demo_type image -f exps/example/mot/yolox_dancetrack_val_hybrid_sort_reid.py -c pretrained/ocsort_dance_model.pth.tar --path ./datasets/dancetrack/val/dancetrack0079/img1 --fp16 --fuse --save_result
```

<img src="assets/demo.gif" alt="demo" style="zoom:34%;" />

## TCM on other trackers

Download the ReID weight [googlenet_part8_all_xavier_ckpt_56.h5](https://drive.google.com/drive/folders/18IsZGeGiyKDshhYIzbpYXoNEcBhPY8lN?usp=sharing) for MOTDT and DeepSORT.

**dancetrack-val dataset**

```
# SORT
python tools/run_sort_dance.py -f exps/example/mot/yolox_dancetrack_val.py -c pretrained/bytetrack_dance_model.pth.tar -b 1 -d 1 --fp16 --fuse --dataset dancetrack --expn sort_score_kalman_fir_step --TCM_first_step

# MOTDT
python3 tools/run_motdt_dance.py -f exps/example/mot/yolox_dancetrack_val.py -c pretrained/bytetrack_dance_model.pth.tar -b 1 -d 1 --fp16 --fuse --dataset dancetrack --expn motdt_score_kalman_fir_step --TCM_first_step

# ByteTrack
python3 tools/run_byte_dance.py -f exps/example/mot/yolox_dancetrack_val.py -c pretrained/bytetrack_dance_model.pth.tar -b 1 -d 1 --fp16 --fuse --dataset dancetrack --expn byte_score_kalman_fir_step --TCM_first_step

# DeepSORT
python3 tools/run_deepsort_dance.py -f exps/example/mot/yolox_dancetrack_val.py -c pretrained/bytetrack_dance_model.pth.tar -b 1 -d 1 --fp16 --fuse --dataset dancetrack --expn deepsort_score_kalman_fir_step --TCM_first_step
```

**MOT17-val dataset**

```
# SORT
python3 tools/run_sort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/ocsort_mot17_ablation.pth.tar -b 1 -d 1 --fuse --expn mot17_sort_score_test_fp32 --TCM_first_step

# MOTDT
python3 tools/run_motdt.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/ocsort_mot17_ablation.pth.tar -b 1 -d 1 --fuse --expn mot17_motdt_score_test_fp32 --TCM_first_step

# ByteTrack
python3 tools/run_byte.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/ocsort_mot17_ablation.pth.tar -b 1 -d 1 --fuse --expn mot17_byte_score_test_fp32 --TCM_first_step --TCM_first_step_weight 0.6

# DeepSORT
python3 tools/run_deepsort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/ocsort_mot17_ablation.pth.tar -b 1 -d 1 --fuse --expn mot17_deepsort_score_test_fp32 --TCM_first_step
```

## Citation

If you find this work useful, please consider citing our paper:
```
@inproceedings{yang2024hybrid,
  title={Hybrid-sort: Weak cues matter for online multi-object tracking},
  author={Yang, Mingzhan and Han, Guangxin and Yan, Bin and Zhang, Wenhua and Qi, Jinqing and Lu, Huchuan and Wang, Dong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={7},
  pages={6504--6512},
  year={2024}
}
```

## Acknowledgement

A large part of the code is borrowed from [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX), [OC-SORT](https://github.com/noahcao/OC_SORT), [ByteTrack](https://github.com/ifzhang/ByteTrack), [BoT-SORT](https://github.com/NirAharon/BOT-SORT) and [FastReID](https://github.com/JDAI-CV/fast-reid). Many thanks for their wonderful works.



================================================
FILE: TrackEval/.gitignore
================================================
gt_data/*
!gt_data/Readme.md
tracker_output/*
!tracker_output/Readme.md
output/*
data/*
!goutput/Readme.md
**/__pycache__
.idea
error_log.txt

================================================
FILE: TrackEval/LICENSE
================================================
MIT License

Copyright (c) 2020 Jonathon Luiten

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: TrackEval/Readme.md
================================================

# TrackEval
*Code for evaluating object tracking.*

This codebase provides code for a number of different tracking evaluation metrics (including the [HOTA metrics](https://link.springer.com/article/10.1007/s11263-020-01375-2)), as well as support for running all of these metrics on a number of different tracking benchmarks. It also includes plotting of results and other utilities one may want for tracking evaluation.

## **NEW**: RobMOTS Challenge 2021

Call for submission to our [RobMOTS Challenge](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110) (Robust Multi-Object Tracking and Segmentation) held in conjunction with our [RVSU CVPR'21 Workshop](https://eval.vision.rwth-aachen.de/rvsu-workshop21/). Robust tracking evaluation against 8 tracking benchmarks. Challenge submission deadline June 15th. Also check out our workshop [call for papers](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=74).

## Official Evaluation Code

The following benchmarks use TrackEval as their official evaluation code; check out the links to see TrackEval in action:

 - **[RobMOTS](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)** ([Official Readme](docs/RobMOTS-Official/Readme.md))
 - **[KITTI Tracking](http://www.cvlibs.net/datasets/kitti/eval_tracking.php)**
 - **[KITTI MOTS](http://www.cvlibs.net/datasets/kitti/eval_mots.php)**
 - **[MOTChallenge](https://motchallenge.net/)** ([Official Readme](docs/MOTChallenge-Official/Readme.md))
 - **[Open World Tracking](https://openworldtracking.github.io)** ([Official Readme](docs/OpenWorldTracking-Official))
 <!--- **[MOTS-Challenge](https://motchallenge.net/data/MOTS/)** ([Official Readme](docs/MOTS-Challenge-Official/Readme.md)) --->

If you run a tracking benchmark and want to use TrackEval as your official evaluation code, please contact Jonathon (contact details below).

## Currently implemented metrics

The following metrics are currently implemented:

Metric Family | Sub metrics | Paper | Code | Notes |
|----- | ----------- |----- | ----------- | ----- |
| | | |  |  |
|**HOTA metrics**|HOTA, DetA, AssA, LocA, DetPr, DetRe, AssPr, AssRe|[paper](https://link.springer.com/article/10.1007/s11263-020-01375-2)|[code](trackeval/metrics/hota.py)|**Recommended tracking metric**|
|**CLEARMOT metrics**|MOTA, MOTP, MT, ML, Frag, etc.|[paper](https://link.springer.com/article/10.1155/2008/246309)|[code](trackeval/metrics/clear.py)| |
|**Identity metrics**|IDF1, IDP, IDR|[paper](https://arxiv.org/abs/1609.01775)|[code](trackeval/metrics/identity.py)| |
|**VACE metrics**|ATA, SFDA|[paper](https://link.springer.com/chapter/10.1007/11612704_16)|[code](trackeval/metrics/vace.py)| |
|**Track mAP metrics**|Track mAP|[paper](https://arxiv.org/abs/1905.04804)|[code](trackeval/metrics/track_map.py)|Requires confidence scores|
|**J & F metrics**|J&F, J, F|[paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Perazzi_A_Benchmark_Dataset_CVPR_2016_paper.pdf)|[code](trackeval/metrics/j_and_f.py)|Only for Seg Masks|
|**ID Euclidean**|ID Euclidean|[paper](https://arxiv.org/pdf/2103.13516.pdf)|[code](trackeval/metrics/ideucl.py)| |


## Currently implemented benchmarks

The following benchmarks are currently implemented:

Benchmark | Sub-benchmarks | Type | Website | Code | Data Format |
|----- | ----------- |----- | ----------- | ----- | ----- |
| | | |  |  | |
|**RobMOTS**|Combination of 8 benchmarks|Seg Masks|[website](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)|[code](trackeval/datasets/rob_mots.py)|[format](docs/RobMOTS-Official/Readme.md)|
|**Open World Tracking**|TAO-OW|OpenWorld / Seg Masks|[website](https://openworldtracking.github.io)|[code](trackeval/datasets/tao_ow.py)|[format](docs/OpenWorldTracking-Official/Readme.md)|
|**MOTChallenge**|MOT15/16/17/20|2D BBox|[website](https://motchallenge.net/)|[code](trackeval/datasets/mot_challenge_2d_box.py)|[format](docs/MOTChallenge-format.txt)|
|**KITTI Tracking**| |2D BBox|[website](http://www.cvlibs.net/datasets/kitti/eval_tracking.php)|[code](trackeval/datasets/kitti_2d_box.py)|[format](docs/KITTI-format.txt)|
|**BDD-100k**| |2D BBox|[website](https://bdd-data.berkeley.edu/)|[code](trackeval/datasets/bdd100k.py)|[format](docs/BDD100k-format.txt)|
|**TAO**| |2D BBox|[website](https://taodataset.org/)|[code](trackeval/datasets/tao.py)|[format](docs/TAO-format.txt)|
|**MOTS**|KITTI-MOTS, MOTS-Challenge|Seg Mask|[website](https://www.vision.rwth-aachen.de/page/mots)|[code](trackeval/datasets/mots_challenge.py) and [code](trackeval/datasets/kitti_mots.py)|[format](docs/MOTS-format.txt)|
|**DAVIS**|Unsupervised|Seg Mask|[website](https://davischallenge.org/)|[code](trackeval/datasets/davis.py)|[format](docs/DAVIS-format.txt)|
|**YouTube-VIS**| |Seg Mask|[website](https://youtube-vos.org/dataset/vis/)|[code](trackeval/datasets/youtube_vis.py)|[format](docs/YouTube-VIS-format.txt)|
|**Head Tracking Challenge**| |2D BBox|[website](https://arxiv.org/pdf/2103.13516.pdf)|[code](trackeval/datasets/head_tracking_challenge.py)|[format](docs/MOTChallenge-format.txt)|

## HOTA metrics

This code is also the official reference implementation for the HOTA metrics:

*[HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking](https://link.springer.com/article/10.1007/s11263-020-01375-2). IJCV 2020. Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixe and Bastian Leibe.*

HOTA is a novel set of MOT evaluation metrics which enable better understanding of tracking behavior than previous metrics.

For more information check out the following links:
 - [Short blog post on HOTA](https://jonathonluiten.medium.com/how-to-evaluate-tracking-with-the-hota-metrics-754036d183e1) - **HIGHLY RECOMMENDED READING**
 - [IJCV version of paper](https://link.springer.com/article/10.1007/s11263-020-01375-2) (Open Access)
 - [ArXiv version of paper](https://arxiv.org/abs/2009.07736)
 - [Code](trackeval/metrics/hota.py)

## Properties of this codebase

The code is written 100% in python with only numpy and scipy as minimum requirements.

The code is designed to be easily understandable and easily extendable. 

The code is also extremely fast, running at more than 10x the speed of both the [MOTChallengeEvalKit](https://github.com/dendorferpatrick/MOTChallengeEvalKit) and [py-motmetrics](https://github.com/cheind/py-motmetrics) (see detailed speed comparison below).

The implementation of CLEARMOT and ID metrics aligns perfectly with the [MOTChallengeEvalKit](https://github.com/dendorferpatrick/MOTChallengeEvalKit).

By default the code prints results to the screen, saves results out as both a summary txt file and a detailed results csv file, and outputs plots of the results. All outputs are by default saved to the 'tracker' folder for each tracker.

## Running the code

The code can be run in one of two ways:

 - From the terminal via one of the scripts [here](scripts/). See each script for instructions and arguments; hopefully this is self-explanatory.
 - Directly by importing this package into your own code; see the same scripts for how, or the minimal sketch below.
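
For example, the core of ```scripts/run_mot_challenge.py``` boils down to roughly the following (a minimal sketch using the default configs and data paths; see the script itself for the full argument handling):

```
import trackeval

# Start from the default configs and override only what you need.
eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
dataset_config['BENCHMARK'] = 'MOT17'
dataset_config['SPLIT_TO_EVAL'] = 'train'

evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity()]
evaluator.evaluate(dataset_list, metrics_list)
```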

## Quickly evaluate on supported benchmarks

To enable you to use TrackEval for evaluation as quickly and easily as possible, we provide ground-truth data, meta-data and example trackers for all currently supported benchmarks.
You can download this here: [data.zip](https://omnomnom.vision.rwth-aachen.de/data/TrackEval/data.zip) (~150mb).

The data for RobMOTS is separate and can be found here: [rob_mots_train_data.zip](https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/train_data.zip) (~750mb).

The easiest way to begin is to extract this zip into the repository root folder such that the file paths look like: TrackEval/data/gt/...

This then corresponds to the default paths in the code. You can now run each of the scripts [here](scripts/) without providing any arguments and they will by default evaluate all trackers present in the supplied file structure. To evaluate your own tracking results, simply copy your files as a new tracker folder into the file structure at the same level as the example trackers (MPNTrack, CIWT, track_rcnn, qdtrack, ags, Tracktor++, STEm_Seg), ensuring the same file structure for your trackers as in the example.

Of course, if your ground-truth and tracker files are located somewhere else you can simply use the script arguments to point the code toward your data.

To ensure your tracker outputs data in the correct format, check out our format guides for each of the supported benchmarks [here](docs), or check out the example trackers provided.

## Evaluate on your own custom benchmark

To evaluate on your own data, you have two options:
 - Write custom dataset code (more effort, rarely worth it).
 - Convert your current dataset and trackers to the same format as an already implemented benchmark.

To convert formats, check out the format specifications defined [here](docs).

By default, we would recommend the MOTChallenge format, although any implemented format should work. Note that for many cases you will want to use the argument ```--DO_PREPROC False``` unless you want to run preprocessing to remove distractor objects.
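
For example, after converting your data to the MOTChallenge format, the evaluation might be run like this (a sketch; ```<YourChallenge>``` and ```YOUR_TRACKER``` are placeholders, and all other arguments keep their defaults):

```
python scripts/run_mot_challenge.py --BENCHMARK <YourChallenge> --SPLIT_TO_EVAL train --TRACKERS_TO_EVAL YOUR_TRACKER --DO_PREPROC False
```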

## Requirements
 Code tested on Python 3.7.
 
 - Minimum requirements: numpy, scipy
 - For plotting: matplotlib
 - For segmentation datasets (KITTI MOTS, MOTS-Challenge, DAVIS, YouTube-VIS): pycocotools
 - For DAVIS dataset: Pillow
 - For J & F metric: opencv_python, scikit_image
 - For simple test-cases for metrics: pytest

use ```pip3 install -r requirements.txt``` to install all possible requirements.

use ```pip3 install -r minimum_requirements.txt``` to install only the minimum requirements if you don't need the extra functionality listed above.

## Timing analysis

Evaluating CLEAR + ID metrics on the Lif_T tracker on MOT17-train (seconds) on an i7-9700K CPU with 8 physical cores (median of 3 runs):
Num Cores|TrackEval|MOTChallenge|Speedup vs MOTChallenge|py-motmetrics|Speedup vs py-motmetrics
:---|:---|:---|:---|:---|:---
1|9.64|66.23|6.87x|99.65|10.34x
4|3.01|29.42|9.77x| |33.11x*
8|1.62|29.51|18.22x| |61.51x*

*using a different number of cores as py-motmetrics doesn't allow multiprocessing.
				
```
python scripts/run_mot_challenge.py --BENCHMARK MOT17 --TRACKERS_TO_EVAL Lif_T --METRICS CLEAR Identity --USE_PARALLEL False --NUM_PARALLEL_CORES 1  
```
				
Evaluating CLEAR + ID metrics on the LPC_MOT tracker on MOT20-train (seconds) on an i7-9700K CPU with 8 physical cores (median of 3 runs):
Num Cores|TrackEval|MOTChallenge|Speedup vs MOTChallenge|py-motmetrics|Speedup vs py-motmetrics
:---|:---|:---|:---|:---|:---
1|18.63|105.3|5.65x|175.17|9.40x

```
python scripts/run_mot_challenge.py --BENCHMARK MOT20 --TRACKERS_TO_EVAL LPC_MOT --METRICS CLEAR Identity --USE_PARALLEL False --NUM_PARALLEL_CORES 1
```

## License

TrackEval is released under the [MIT License](LICENSE).

## Contact

If you encounter any problems with the code, please contact [Jonathon Luiten](https://www.vision.rwth-aachen.de/person/216/) ([luiten@vision.rwth-aachen.de](mailto:luiten@vision.rwth-aachen.de)).
If anything is unclear, or hard to use, please leave a comment either via email or as an issue and I would love to help.

## Dedication

This codebase was built for you, in order to make your life easier! For anyone doing research on tracking or using trackers, please don't hesitate to reach out with any comments or suggestions on how things could be improved.

## Contributing

We welcome contributions of new metrics and new supported benchmarks, as well as any other new features or code improvements. Send a PR, an email, or open an issue detailing what you'd like to add/change to begin a conversation.

## Citing TrackEval

If you use this code in your research, please use the following BibTeX entry:

```BibTeX
@misc{luiten2020trackeval,
  author =       {Jonathon Luiten, Arne Hoffhues},
  title =        {TrackEval},
  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
  year =         {2020}
}
```

Furthermore, if you use the HOTA metrics, please cite the following paper:

```
@article{luiten2020IJCV,
  title={HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
  author={Luiten, Jonathon and Osep, Aljosa and Dendorfer, Patrick and Torr, Philip and Geiger, Andreas and Leal-Taix{\'e}, Laura and Leibe, Bastian},
  journal={International Journal of Computer Vision},
  pages={1--31},
  year={2020},
  publisher={Springer}
}
```

If you use any other metrics please also cite the relevant papers, and don't forget to cite each of the benchmarks you evaluate on.


================================================
FILE: TrackEval/docs/BDD100k-format.txt
================================================
Taken from: https://bdd-data.berkeley.edu/wad-2020.html

BDD100K MOT Dataset

To advance the study on multiple object tracking, we introduce BDD100K MOT Dataset. We provide 1,400 video sequences for training, 200 video sequences for validation and 400 video sequences for testing. Each video sequence is about 40 seconds long with 5 FPS resulting in approximately 200 frames per video.

BDD100K MOT Dataset is not only diverse in visual scale among and within tracks, but in the temporal range of each track. Objects in the BDD100K MOT dataset also present complicated occlusion and reappearing patterns. An object may be fully occluded or move out of the frame, and then reappear later. BDD100K MOT Dataset shows real challenges of object re-identification for tracking in autonomous driving. Details about the MOT dataset can be found in the BDD100K paper (https://arxiv.org/abs/1805.04687). Access the BDD100K data website (https://bdd-data.berkeley.edu/) to download the data.

Folder Structure
bdd100k/
├── images/
|   ├── track/
|   |   ├── train/
|   |   |   ├── $VIDEO_NAME/
|   |   |   |   ├── $VIDEO_NAME-$FRAME_INDEX.jpg
|   |   ├── val/
|   |   ├── test/
├── labels-20/
|   ├── box-track/
|   |   ├── train/
|   |   |   ├── $VIDEO_NAME.json
|   |   |   |
|   |   ├── val/
The frames for each video are stored in a folder in the images directory. The labels for each video are stored in a json file with the format detailed below.

Label Format
Each json file contains a list of frame objects, and each frame object has the format below. The format follows the schema of BDD100K data format (https://github.com/ucbdrive/bdd100k/blob/master/doc/format.md).

- name: string
- videoName: string
- index: int
- labels: [ ]
    - id: string
    - category: string
    - attributes:
        - Crowd: boolean
        - Occluded: boolean
        - Truncated: boolean
    - box2d:
        - x1: float
        - y1: float
        - x2: float
        - y2: float
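
The per-video label file can then be read with a few lines of Python, for example (a sketch assuming the schema above; the file name is a placeholder):

import json

with open("$VIDEO_NAME.json") as f:
    frames = json.load(f)                     # list of frame objects

for frame in frames:
    for label in frame.get("labels", []):
        box = label["box2d"]
        print(frame["videoName"], frame["index"], label["id"], label["category"],
              box["x1"], box["y1"], box["x2"], box["y2"])
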
There are 11 object categories in this release:

pedestrian
rider
other person
car
bus
truck
train
trailer
other vehicle
motorcycle
bicycle

Notes:
The same instance shares "id" across frames.
The "pedestrian", "bicycle", and "motorcycle" correspond to the "person", "bike", and "motor" classes in the BDD100K Detection dataset.
We consider "other person", "trailer", and "other vehicle" as distractors, which are ignored during evaluation. We only evaluate the multi-object tracking of the other 8 categories.
We set three super-categories: "person" (with classes "pedestrian" and "rider"), "vehicle" ("car", "bus", "truck", and "train"), and "bike" ("motorcycle" and "bicycle") for the purpose of evaluation.

Submission Format
The submission file for each of the two phases is a json file compressed by zip. Each json file is a list of frame objects with the format detailed below. The format also follows the schema of BDD100K data format (https://github.com/ucbdrive/bdd100k/blob/master/doc/format.md).

- name: string
- labels [ ]:
    - id: string
    - category: string
    - box2d:
       - x1: float
       - y1: float
       - x2: float
       - y2: float

Note that objects with the same identity share id across frames in a given video, and ids should be unique across different videos. Our evaluation will match the category string in evaluation, so you can assign your own integer ID for the categories in your model. But we recommend encoding the 8 relevant categories in the following order so that it is easier for the research community to share the models.

pedestrian
rider
car
truck
bus
train
motorcycle
bicycle

The evaluation server will perform evaluation for each category and aggregate the results to compute the overall metrics. Then the server will merge both the ground-truth and predicted labels into super-categories and evaluate for each super-category.

Evaluation
Evaluation platform: We host our evaluation server on CodaLab (https://competitions.codalab.org/competitions/24492). There are two phases for the challenge: val phase and test phase. The final ranking will be based on the test phase.
Pre-training: It is a fair game to pre-train your network with ImageNet or COCO, but if other datasets are used, please note in the submission description. We will rank the methods without using external datasets except ImageNet and COCO.
Ignoring distractors: As a preprocessing step, all predicted boxes are matched and the ones matched to distractor ground-truth boxes ("other person", "trailer", and "other vehicle") are ignored.
Crowd region: After bounding box matching, we ignore all detected false-positive boxes that have >50% overlap with the crowd region (ground-truth boxes with the "Crowd" attribute).
Super-category: In addition to the evaluation of all 8 classes, we merge ground truth and prediction categories into 3 super-categories specified above, and evaluate the results for each super-category. The super-category evaluation results will be provided only for the purpose of reference.

================================================
FILE: TrackEval/docs/DAVIS-format.txt
================================================
Annotation Format:


The annotations in each frame are stored in png format.
This png is stored indexed i.e. it has a single channel and each pixel has a value from 0 to 254 that corresponds to a color palette attached to the png file.
It is important to take this into account when decoding the png i.e. the output of decoding should be a single channel image and it should not be necessary to do any remap from RGB to indexes. 
The latter is crucial to preserve the index of each object so it can be matched to the correct object in evaluation.

Each pixel that belongs to the same object has the same value in this png map through the whole video.
Start at 1 for the first object, then 2, 3, 4 etc.
The background (not an object) has value 0.
Also note that invalid/void pixels are stored with a 254 value.


These can be read like this:

import numpy as np
import PIL.Image as Image
img = np.array(Image.open("000005.png"))


or like this:

ann_data = tf.read_file(ann_filename)
ann = tf.image.decode_image(ann_data, dtype=tf.uint8, channels=1)


See the code for loading the davis dataset for more details.



================================================
FILE: TrackEval/docs/How_To/Add_a_new_metric.md
================================================
# How to add a new or custom family of evaluation metrics to TrackEval

 - Create your metrics code in ```trackeval/metrics/<your_metric>.py```.
 - It's probably easiest to start by copying an existing metrics code and editing it, e.g. ```trackeval/metrics/identity.py``` is probably the simplest.
 - Your metric should be a class, and it should inherit from the ```trackeval.metrics._base_metric._BaseMetric``` class (a minimal skeleton is sketched below this list).
 - Define an ```__init__``` function that defines the different ```fields``` (values) that your metric will calculate. See ```trackeval/metrics/_base_metric.py``` for a list of currently used field types. Feel free to add new types.
 - Define your code to actually calculate your metric for a single sequence and single class in a function called ```eval_sequence```, which takes a data dictionary as input, and returns a results dictionary as output.
 - Define functions for how to combine your metric field values over a) sequences ```combine_sequences```, b) over classes ```combine_classes_class_averaged```, and c) over classes weighted by the number of detections ```combine_classes_det_averaged```.
 - We find that using a helper such as the ```_compute_final_fields``` function used in the current metrics is convenient, because it is typically needed both for the per-sequence calculation and for the different metric combinations; however, this is not required.
 - Register your new metric by adding it to ```trackeval/metrics/__init__.py```.
 - Your new metric can be used by passing the metrics class to a list of metrics which is passed to the evaluator (see files in ```scripts/*```).
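
As a starting point, the skeleton below sketches the overall shape of such a metric (modelled on ```trackeval/metrics/identity.py```; the field-type attributes and the ```_combine_sum``` helper are taken from the existing metrics, so check ```trackeval/metrics/_base_metric.py``` for the authoritative interface):

```
from trackeval import _timing
from trackeval.metrics._base_metric import _BaseMetric


class MyMetric(_BaseMetric):
    def __init__(self, config=None):
        super().__init__()
        self.integer_fields = ['MyTP', 'MyFN', 'MyFP']  # raw counts, summed when combining
        self.float_fields = ['MyScore']                  # derived values, recomputed after combining
        self.fields = self.float_fields + self.integer_fields
        self.summary_fields = self.fields

    @_timing.time
    def eval_sequence(self, data):
        """Calculate the metric for a single sequence and single class."""
        res = {'MyTP': 0, 'MyFN': 0, 'MyFP': 0}
        # ... accumulate the counts from the data dictionary here ...
        return self._compute_final_fields(res)

    def combine_sequences(self, all_res):
        """Combine per-sequence results by summing the counts."""
        res = {field: self._combine_sum(all_res, field) for field in self.integer_fields}
        return self._compute_final_fields(res)

    def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False):
        """Combine over classes: sum the counts, average the final score."""
        res = {field: self._combine_sum(all_res, field) for field in self.integer_fields}
        scores = [class_res['MyScore'] for class_res in all_res.values()]
        res['MyScore'] = sum(scores) / max(1, len(scores))
        return res

    def combine_classes_det_averaged(self, all_res):
        """Combine over classes weighted by the number of detections."""
        res = {field: self._combine_sum(all_res, field) for field in self.integer_fields}
        return self._compute_final_fields(res)

    @staticmethod
    def _compute_final_fields(res):
        denom = max(1.0, res['MyTP'] + 0.5 * res['MyFN'] + 0.5 * res['MyFP'])
        res['MyScore'] = res['MyTP'] / denom
        return res
```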


================================================
FILE: TrackEval/docs/KITTI-format.txt
================================================
Taken from download link found at: http://www.cvlibs.net/datasets/kitti/eval_tracking.php

###########################################################################
#           THE KITTI VISION BENCHMARK SUITE: TRACKING BENCHMARK          #
#              Andreas Geiger    Philip Lenz    Raquel Urtasun            #
#                    Karlsruhe Institute of Technology                    #
#                Toyota Technological Institute at Chicago                #
#                             www.cvlibs.net                              #
###########################################################################

For recent updates see http://www.cvlibs.net/datasets/kitti/eval_tracking.php.

This file describes the KITTI tracking benchmarks, consisting of 21 
training sequences and 29 test sequences. 

Despite the fact that we have labeled 8 different classes, only the classes 
'Car' and 'Pedestrian' are evaluated in our benchmark, as only for those 
classes enough instances for a comprehensive evaluation have been labeled. 

The labeling process has been performed in two steps: First we hired a set 
of annotators, to label 3D bounding boxes for tracklets in 3D Velodyne 
point clouds. Since for a pedestrian tracklet, a single 3D bounding box 
tracklet (dimensions have been fixed) often fits badly, we additionally 
labeled the left/right boundaries of each object by making use of Mechanical
Turk. We also collected labels of the object's occlusion state, and computed 
the object's truncation via backprojecting a car/pedestrian model into the
image plane.

NOTE: WHEN SUBMITTING RESULTS, PLEASE STORE THEM IN THE SAME DATA FORMAT IN
WHICH THE GROUND TRUTH DATA IS PROVIDED (SEE BELOW), USING THE FILE NAMES
0000.txt 0001.txt ... CREATE A ZIP ARCHIVE OF THEM AND STORE YOUR
RESULTS (ONLY THE RESULTS OF THE TEST SET) IN ITS ROOT FOLDER. FOR A 
RE-SUBMISSION, _ONLY_ THE RE-SUBMITTED RESULTS WILL BE SHOWN IN THE TABLE.

Data Format Description
=======================

The data for training and testing can be found in the corresponding folders.
The sub-folders are structured as follows:

  - image_02/%04d/ contains the left color camera sequence images (png)
  - image_03/%04d/ contains the right color camera sequence images  (png)
  - label_02/ contains the left color camera label files (plain text files)
  - calib/ contains the calibration for all four cameras (plain text files)

The label files contain the following information. 
All values (numerical or strings) are separated via spaces, each row 
corresponds to one object. The 17 columns represent:

#Values    Name      Description
----------------------------------------------------------------------------
   1    frame        Frame within the sequence where the object appears
   1    track id     Unique tracking id of this object within this sequence
   1    type         Describes the type of object: 'Car', 'Van', 'Truck',
                     'Pedestrian', 'Person_sitting', 'Cyclist', 'Tram',
                     'Misc' or 'DontCare'
   1    truncated    Integer (0,1,2) indicating the level of truncation.
                     Note that this is in contrast to the object detection
                     benchmark where truncation is a float in [0,1].
   1    occluded     Integer (0,1,2,3) indicating occlusion state:
                     0 = fully visible, 1 = partly occluded
                     2 = largely occluded, 3 = unknown
   1    alpha        Observation angle of object, ranging [-pi..pi]
   4    bbox         2D bounding box of object in the image (0-based index):
                     contains left, top, right, bottom pixel coordinates
   3    dimensions   3D object dimensions: height, width, length (in meters)
   3    location     3D object location x,y,z in camera coordinates (in meters)
   1    rotation_y   Rotation ry around Y-axis in camera coordinates [-pi..pi]
   1    score        Only for results: Float, indicating confidence in
                     detection, needed for p/r curves, higher is better.
                     

Here, 'DontCare' labels denote regions in which objects have not been labeled,
for example because they have been too far away from the laser scanner. To
prevent such objects from being counted as false positives our evaluation
script will ignore objects tracked in don't care regions of the test set.
You can use the don't care labels in the training set to avoid that your object
detector/tracking algorithm is harvesting hard negatives from those areas, 
in case you consider non-object regions from the training images as negative 
examples.

The reference point for the 3D bounding box for each object is centered on the
bottom face of the box. The corners of bounding box are computed as follows with
respect to the reference point and in the object coordinate system:
x_corners = [l/2, l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2]^T
y_corners = [0,   0,    0,    0,   -h,   -h,   -h,   -h  ]^T
z_corners = [w/2, -w/2, -w/2, w/2, w/2, -w/2, -w/2, w/2  ]^T
with l=length, h=height, and w=width.

The coordinates in the camera coordinate system can be projected in the image
by using the 3x4 projection matrix in the calib folder, where for the left
color camera for which the images are provided, P2 must be used. The
difference between rotation_y and alpha is, that rotation_y is directly
given in camera coordinates, while alpha also considers the vector from the
camera center to the object center, to compute the relative orientation of
the object with respect to the camera. For example, a car which is facing
along the X-axis of the camera coordinate system corresponds to rotation_y=0,
no matter where it is located in the X/Z plane (bird's eye view), while
alpha is zero only, when this object is located along the Z-axis of the
camera. When moving the car away from the Z-axis, the observation angle
(\alpha) will change.

An overview of the coordinate systems, reference point and geometrical 
definitions is given in cs_overview.pdf.

To project a point from Velodyne coordinates into the left color image,
you can use this formula: x = P2 * R0_rect * Tr_velo_to_cam * y
For the right color image: x = P3 * R0_rect * Tr_velo_to_cam * y

Note: All matrices are stored row-major, i.e., the first values correspond
to the first row. R0_rect contains a 3x3 matrix which you need to extend to
a 4x4 matrix by adding a 1 as the bottom-right element and 0's elsewhere.
Tr_xxx is a 3x4 matrix (R|t), which you need to extend to a 4x4 matrix 
in the same way!
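
In numpy this projection can be sketched as follows (assuming P2, R0_rect and
Tr_velo_to_cam have already been parsed from the calib file as arrays of shape
3x4, 3x3 and 3x4 respectively; the division by the third coordinate converts
from homogeneous to pixel coordinates):

import numpy as np

def project_velo_to_left_image(pts_velo, P2, R0_rect, Tr_velo_to_cam):
    # extend R0_rect and Tr_velo_to_cam to 4x4 as described above
    R0 = np.eye(4)
    R0[:3, :3] = R0_rect
    Tr = np.eye(4)
    Tr[:3, :4] = Tr_velo_to_cam
    pts_h = np.hstack([pts_velo, np.ones((len(pts_velo), 1))])  # (N, 4) homogeneous points
    x = (P2 @ R0 @ Tr @ pts_h.T).T                              # (N, 3)
    return x[:, :2] / x[:, 2:3]                                  # (N, 2) pixel coordinates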

The sensors were not moved between the different days while taking footage.
However, the full camera calibration was performed for every day separately.
Therefore, only 'Tr_imu_velo' is constant for all sequences.

Note that while all this information is available for the training data,
only the data which is actually needed for the particular benchmark must
be provided to the evaluation server. However, all 17 values must be provided
at all times, with the unused ones set to their default values (=invalid).
Additionally an 18th value must be provided
with a floating value of the score for a particular tracked detection, where 
higher indicates higher confidence in the detection. The range of your scores 
will be automatically determined by our evaluation server, you don't have to
normalize it, but it should be roughly linear.

Tracking Benchmark
==================

The goal in the object tracking task is to estimate object tracklets for the 
classes 'Car', 'Pedestrian', and (optional) 'Cyclist'. The tracking
algorithm must provide as output the 2D 0-based bounding boxes in each image in
the sequence using the format specified above, as well as a score, indicating
the confidence in the particular frame for this track. All other values must be
set to their default values (=invalid), see above. One text file per sequence
must be provided in a zip archive, where each file can contain many detections,
depending on the number of objects per sequence. In our evaluation we only
evaluate detections/objects larger than 25 pixel (height) in the image and do
not count Vans as false positives for cars or Sitting Persons as wrong positives
for Pedestrians due to their similarity in appearance. (All ignored objects 
are considered as DontCare areas.) As evaluation criterion we follow the 
HOTA, CLEARMOT and Mostly-Tracked/Partly-Tracked/Mostly-Lost metrics.

Raw Data
========

Raw data is mapped to the tracking benchmark sequences and available for 
download.

The velodyne and positioning data for training and testing can be found in the 
corresponding folders. The sub-folders are structured as follows:

  - velodyne/%04d/ contains the raw velodyne point clouds (binary file)
  - oxts/ contains the raw position (oxts) data (plain text files)





================================================
FILE: TrackEval/docs/MOTChallenge-Official/Readme.md
================================================
![Test Image 4](https://motchallenge.net/img/header-bg/mot_bannerthin.png)
![MOT_PIC](https://motchallenge.net/sequenceVideos/MOT17-04-SDP-gt.jpg)
# MOTChallenge Official Evaluation Kit - Multi-Object Tracking - MOT15, MOT16, MOT17, MOT20

TrackEval is now the Official Evaluation Kit for MOTChallenge.

This repository contains the evaluation scripts for the MOT challenges available at www.MOTChallenge.net.

This codebase replaces the previous version that used to be accessible at https://github.com/dendorferpatrick/MOTChallengeEvalKit and is no longer maintained.

Challenge Name | Data url |
|----- | ----------- |
|2D MOT 15| https://motchallenge.net/data/MOT15/ |
|MOT 16| https://motchallenge.net/data/MOT16/       |
|MOT 17| https://motchallenge.net/data/MOT17/       |
|MOT 20| https://motchallenge.net/data/MOT20/       |

## Requirements 
* Python (3.5 or newer)
* Numpy and Scipy

## Directories and Data
The easiest way to get started is to simply download the TrackEval example data from here: [data.zip](https://omnomnom.vision.rwth-aachen.de/data/TrackEval/data.zip) (~150mb).

This contains all the ground-truth, example trackers and meta-data that you will need.

Extract this zip into the repository root folder such that the file paths look like: TrackEval/data/gt/...

## Evaluation
To run the evaluation for your method please run the script at ```TrackEval/scripts/run_mot_challenge.py```.

Some of the basic arguments are described below. For more arguments, please see the script itself.

```BENCHMARK```: Name of the benchmark, e.g. MOT15, MOT16, MOT17 or MOT20  (default : MOT17)

```SPLIT_TO_EVAL```: Data split on which to evaluate, e.g. train, test (default : train)

```TRACKERS_TO_EVAL```: List of tracker names for which you wish to run evaluation. e.g. MPNTrack (default: all trackers in tracker folder)

```METRICS```: List of metric families which you wish to compute. e.g. HOTA CLEAR Identity VACE (default: HOTA CLEAR Identity)

```USE_PARALLEL```: Whether to run evaluation in parallel on multiple cores. (default: False)

```NUM_PARALLEL_CORES```: Number of cores to use when running in parallel. (default: 8)

An example is below (this will work on the supplied example data above):
```
python scripts/run_mot_challenge.py --BENCHMARK MOT17 --SPLIT_TO_EVAL train --TRACKERS_TO_EVAL MPNTrack --METRICS HOTA CLEAR Identity VACE --USE_PARALLEL False --NUM_PARALLEL_CORES 1  
```


## Data Format
<p>
The tracker file format should be the same as the ground truth file, 
which is a CSV text-file containing one object instance per line.
Each line must contain 10 values:
</p>

</p>
<code>
&lt;frame&gt;,
&lt;id&gt;,
&lt;bb_left&gt;,
&lt;bb_top&gt;,
&lt;bb_width&gt;,
&lt;bb_height&gt;,
&lt;conf&gt;,
&lt;x&gt;,
&lt;y&gt;,
&lt;z&gt;
</code>
</p>

The world coordinates <code>x,y,z</code>
are ignored for the 2D challenge and can be filled with -1.
Similarly, the bounding boxes are ignored for the 3D challenge.
However, each line is still required to contain 10 values.

All frame numbers, target IDs and bounding boxes are 1-based. Here is an example:

<pre>
1, 3, 794.27, 247.59, 71.245, 174.88, -1, -1, -1, -1
1, 6, 1648.1, 119.61, 66.504, 163.24, -1, -1, -1, -1
1, 8, 875.49, 399.98, 95.303, 233.93, -1, -1, -1, -1
...
</pre>

 
## Evaluating on your own Data
The repository also allows you to include your own datasets and evaluate your method on your own challenge ```<YourChallenge>```.  To do so, follow these two steps:  
***1. Ground truth data preparation***  
Prepare your sequences in directory ```TrackEval/data/gt/mot_challenge/<YourChallenge>``` following this structure:

```
.
|—— <SeqName01>
	|—— gt
		|—— gt.txt
	|—— seqinfo.ini
|—— <SeqName02>
	|—— ……
|—— <SeqName03>
	|—— …...
```

***2. Sequence file***  
Create text files containing the sequence names, ```<YourChallenge>-all.txt```, ```<YourChallenge>-train.txt``` and ```<YourChallenge>-test.txt```, inside the ```seqmaps``` folder, e.g.:
```<YourChallenge>-all.txt```
```
name
<seqName1> 
<seqName2>
<seqName3>
```

```<YourChallenge>-train.txt```
```
name
<seqName1> 
<seqName2>
```

```<YourChallenge>-test.txt```
```
name
<seqName3>
```


To run the evaluation for your method adjust the file ```scripts/run_mot_challenge.py``` and set ```BENCHMARK = <YourChallenge>```


## Citation
If you work with the code and the benchmark, please cite:

***TrackEval***
```
@misc{luiten2020trackeval,
  author =       {Jonathon Luiten, Arne Hoffhues},
  title =        {TrackEval},
  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
  year =         {2020}
}
```
***MOTChallenge Journal***
```
@article{dendorfer2020motchallenge,
  title={MOTChallenge: A Benchmark for Single-camera Multiple Target Tracking},
  author={Dendorfer, Patrick and Osep, Aljosa and Milan, Anton and Schindler, Konrad and Cremers, Daniel and Reid, Ian and Roth, Stefan and Leal-Taix{\'e}, Laura},
  journal={International Journal of Computer Vision},
  pages={1--37},
  year={2020},
  publisher={Springer}
}
```
***MOT 15***
```
@article{MOTChallenge2015,
	title = {{MOTC}hallenge 2015: {T}owards a Benchmark for Multi-Target Tracking},
	shorttitle = {MOTChallenge 2015},
	url = {http://arxiv.org/abs/1504.01942},
	journal = {arXiv:1504.01942 [cs]},
	author = {Leal-Taix\'{e}, L. and Milan, A. and Reid, I. and Roth, S. and Schindler, K.},
	month = apr,
	year = {2015},
	note = {arXiv: 1504.01942},
	keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
```
***MOT 16, MOT 17***
```
@article{MOT16,
	title = {{MOT}16: {A} Benchmark for Multi-Object Tracking},
	shorttitle = {MOT16},
	url = {http://arxiv.org/abs/1603.00831},
	journal = {arXiv:1603.00831 [cs]},
	author = {Milan, A. and Leal-Taix\'{e}, L. and Reid, I. and Roth, S. and Schindler, K.},
	month = mar,
	year = {2016},
	note = {arXiv: 1603.00831},
	keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
```
***MOT 20***
```
@article{MOTChallenge20,
    title={MOT20: A benchmark for multi object tracking in crowded scenes},
    shorttitle = {MOT20},
	url = {http://arxiv.org/abs/1906.04567},
	journal = {arXiv:2003.09003[cs]},
	author = {Dendorfer, P. and Rezatofighi, H. and Milan, A. and Shi, J. and Cremers, D. and Reid, I. and Roth, S. and Schindler, K. and Leal-Taix\'{e}, L. },
	month = mar,
	year = {2020},
	note = {arXiv: 2003.09003},
	keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
```
***HOTA metrics***
```
@article{luiten2020IJCV,
  title={HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
  author={Luiten, Jonathon and Osep, Aljosa and Dendorfer, Patrick and Torr, Philip and Geiger, Andreas and Leal-Taix{\'e}, Laura and Leibe, Bastian},
  journal={International Journal of Computer Vision},
  pages={1--31},
  year={2020},
  publisher={Springer}
}
```

## Feedback and Contact
We are constantly working on improving our benchmark to provide the best performance to the community.
You can help us make the benchmark better by opening issues in the repo and reporting bugs.

For general questions, please contact one of the following:

```
Patrick Dendorfer - patrick.dendorfer@tum.de
Jonathon Luiten - luiten@vision.rwth-aachen.de
Aljosa Osep - aljosa.osep@tum.de
```



================================================
FILE: TrackEval/docs/MOTChallenge-format.txt
================================================
Taken from: https://motchallenge.net/instructions/

File Format

Please submit your results as a single .zip file. The results for each sequence must be stored in a separate .txt file in the archive's root folder. The file name must be exactly like the sequence name (case sensitive).

The file format should be the same as the ground truth file, which is a CSV text-file containing one object instance per line. Each line must contain 10 values:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>
The conf value contains the detection confidence in the det.txt files. For the ground truth, it acts as a flag whether the entry is to be considered. A value of 0 means that this particular instance is ignored in the evaluation, while any other value can be used to mark it as active. For submitted results, all lines in the .txt file are considered. The world coordinates x,y,z are ignored for the 2D challenge and can be filled with -1. Similarly, the bounding boxes are ignored for the 3D challenge. However, each line is still required to contain 10 values.

All frame numbers, target IDs and bounding boxes are 1-based. Here is an example:

Tracking with bounding boxes
(MOT15, MOT16, MOT17, MOT20)
  1, 3, 794.27, 247.59, 71.245, 174.88, -1, -1, -1, -1
  1, 6, 1648.1, 119.61, 66.504, 163.24, -1, -1, -1, -1
  1, 8, 875.49, 399.98, 95.303, 233.93, -1, -1, -1, -1
  ...
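
Such a file can be written with plain Python, one line per object instance per frame (a sketch; the sequence name is a placeholder and the values are taken from the example above):

frame, track_id = 1, 3
bb_left, bb_top, bb_width, bb_height = 794.27, 247.59, 71.245, 174.88
conf = 1.0   # detection confidence for submitted results
with open("MOT17-01-SDP.txt", "a") as f:
    f.write(f"{frame},{track_id},{bb_left},{bb_top},{bb_width},{bb_height},{conf},-1,-1,-1\n")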

Multi Object Tracking & Segmentation
(MOTS Challenge)
Each line of an annotation txt file is structured like this (where rle means run-length encoding from COCO):

time_frame id class_id img_height img_width rle
An example line from a txt file:

52 1005 1 375 1242 WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O102N5K00O1O1N2O110OO2O001O1NTga3
Meaning:
time frame 52
object id 1005 (meaning class id is 1, i.e. car and instance id is 5)
class id 1
image height 375
image width 1242
rle WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O...1O1N

image height, image width, and rle can be used together to decode a mask using cocotools(https://github.com/cocodataset/cocoapi) .

================================================
FILE: TrackEval/docs/MOTS-format.txt
================================================
Taken from: https://www.vision.rwth-aachen.de/page/mots


Annotation Format
We provide two alternative and equivalent formats, one encoded as png images, and one encoded as txt files. The txt files are smaller, and faster to be read in, but the cocotools are needed to decode the masks. For code to read the annotations also see mots_tools/blob/master/mots_common/io.py

Note that in both formats an id value of 10,000 denotes an ignore region and 0 is background. The class id can be obtained by floor divison of the object id by 1000 (class_id = obj_id // 1000) and the instance id can be obtained by the object id modulo 1000 (instance_id = obj_id % 1000). The object ids are consistent over time.

The class ids are the following

car 1
pedestrian 2
png format
The png format has a single color channel with 16 bits and can for example be read like this:

import numpy as np
import PIL.Image as Image
img = np.array(Image.open("000005.png"))
obj_ids = np.unique(img)
# to correctly interpret the id of a single object
obj_id = obj_ids[0]
class_id = obj_id // 1000
obj_instance_id = obj_id % 1000
When using a TensorFlow input pipeline for reading the annotations, you can use

ann_data = tf.read_file(ann_filename)
ann = tf.image.decode_image(ann_data, dtype=tf.uint16, channels=1)


txt format
Each line of an annotation txt file is structured like this (where rle means run-length encoding from COCO):

time_frame id class_id img_height img_width rle
An example line from a txt file:

52 1005 1 375 1242 WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O102N5K00O1O1N2O110OO2O001O1NTga3
Which means

time frame 52
object id 1005 (meaning class id is 1, i.e. car and instance id is 5)
class id 1
image height 375
image width 1242
rle WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O...1O1N

image height, image width, and rle can be used together to decode a mask using cocotools.
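
For example (a sketch using pycocotools; in Python 3 the counts string typically needs to be encoded to bytes before decoding):

from pycocotools import mask as cocomask

def decode_mots_mask(img_height, img_width, rle_string):
    # build the COCO-style RLE dict expected by pycocotools
    rle = {'size': [img_height, img_width], 'counts': rle_string.encode('utf-8')}
    return cocomask.decode(rle)   # numpy array of shape (img_height, img_width)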

================================================
FILE: TrackEval/docs/OpenWorldTracking-Official/Readme.md
================================================
![owt](https://user-images.githubusercontent.com/23000532/160293694-6fc0a3da-c177-4776-8472-49ff6ff375a3.jpg)
# Opening Up Open-World Tracking - Official Evaluation Code

TrackEval now contains the official evaluation code for evaluating the task of **Open World Tracking**.

This is the official code from the following paper:

<pre><b>Opening up Open-World Tracking</b>
Yang Liu*, Idil Esen Zulfikar*, Jonathon Luiten*, Achal Dave*, Deva Ramanan, Bastian Leibe, Aljoša Ošep, Laura Leal-Taixé
<t><t>*Equal contribution
CVPR 2022</pre>

[Paper](https://arxiv.org/abs/2104.11221)

[Website](https://openworldtracking.github.io)

## Running and understanding the code

The code can be run by running the following script (see script for arguments and how to run):
[TAO-OW run script](https://github.com/JonathonLuiten/TrackEval/blob/master/scripts/run_tao_ow.py)

To understand how the data is read and used, see the TAO-OW dataset class:
[TAO-OW dataset class](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/datasets/tao_ow.py)

The 'Open World Tracking Accuracy' (OWTA) metric proposed in the paper is called RHOTA (Recall-based HOTA) within this repository, and the implementation can be found here:
[OWTA/RHOTA metric](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/metrics/hota.py)

## Citation
If you work with the code and the benchmark, please cite:

***Opening Up Open-World Tracking***
```
@inproceedings{liu2022opening,
  title={Opening up Open-World Tracking},
  author={Liu, Yang and Zulfikar, Idil Esen and Luiten, Jonathon and Dave, Achal and Ramanan, Deva and Leibe, Bastian and O{\v{s}}ep, Aljo{\v{s}}a and Leal-Taix{\'e}, Laura},
  journal={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
```

***TrackEval***
```
@misc{luiten2020trackeval,
  author =       {Jonathon Luiten, Arne Hoffhues},
  title =        {TrackEval},
  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
  year =         {2020}
}
```


================================================
FILE: TrackEval/docs/RobMOTS-Official/Readme.md
================================================
[![image](https://user-images.githubusercontent.com/23000532/118353602-607d1080-b567-11eb-8744-3e346a438583.png)](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)

# RobMOTS Official Evaluation Code

### NEWS: [RobMOTS Challenge](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110) for the [RVSU CVPR'21 Workshop](https://eval.vision.rwth-aachen.de/rvsu-workshop21/) is now live!!!! Challenge deadline June 15.

### NEWS: [Call for short papers](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=74) (4 pages) on tracking and other video topics for [RVSU CVPR'21 Workshop](https://eval.vision.rwth-aachen.de/rvsu-workshop21/)!!!! Paper deadline June 4.

TrackEval is now the Official Evaluation Kit for the RobMOTS Challenge.

This repository contains the official evaluation code for the challenges available at the [RobMOTS Website](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110).

The RobMOTS Challenge tests trackers' ability to work robustly across 8 different benchmarks, while tracking the [80 categories of objects from COCO](https://cocodataset.org/#explore).

The following benchmarks are included:

Benchmark | Website |
|----- | ----------- |
|MOTS Challenge| https://motchallenge.net/results/MOTS/ |
|KITTI-MOTS| http://www.cvlibs.net/datasets/kitti/eval_mots.php       |
|DAVIS Challenge Unsupervised| https://davischallenge.org/challenge2020/unsupervised.html       |
|YouTube-VIS| https://youtube-vos.org/dataset/vis/       |
|BDD100k MOTS| https://bdd-data.berkeley.edu/ |
|TAO| https://taodataset.org/       |
|Waymo Open Dataset| https://waymo.com/open/       |
|OVIS| http://songbai.site/ovis/       |

## Installing, obtaining the data, and running

Simply follow the code snippet below to install the evaluation code, download the train groundtruth data and an example tracker, and run the evaluation code on the sample tracker.

Note the code requires python 3.5 or higher.

```
# Download the TrackEval repo
git clone https://github.com/JonathonLuiten/TrackEval.git

# Move to repo folder
cd TrackEval

# Create a virtual env in the repo for evaluation
python3 -m venv ./venv

# Activate the virtual env
source venv/bin/activate

# Update pip to have the latest version of packages
pip install --upgrade pip

# Install the required packages
pip install -r requirements.txt

# Download the train gt data
wget https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/train_gt.zip

# Unzip the train gt data you just downloaded.
unzip train_gt.zip

# Download the example tracker 
wget https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/example_tracker.zip

# Unzip the example tracker you just downloaded.
unzip example_tracker.zip

# Run the evaluation on the provided example tracker on the train split (using 4 cores in parallel)
python scripts/run_rob_mots.py --ROBMOTS_SPLIT train --TRACKERS_TO_EVAL STP --USE_PARALLEL True --NUM_PARALLEL_CORES 4

```

You may further download the raw sequence images and supplied detections (as well as train GT data and example tracker) by following the ```Data Download``` link here:

[RobMOTS Challenge Info](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)

## Accessing tracking evaluation results

You will find the results of the evaluation (for the supplied tracker STP) in the folder ```TrackEval/data/trackers/rob_mots/train/STP/```.
The overall summary of the results is in ```./final_results.csv```, and more detailed results per sequence and per class and results plots can be found under ```./results/*```.

The ```final_results.csv``` can be most easily read by opening it in Excel or similar. The ```c```, ```d``` and ```f``` prepending the metric names refer respectively to ```class averaged```, ```detection averaged (class agnostic)``` and ```final``` (the geometric mean of class and detection averaged).

## Supplied Detections

To make creating your own tracker particularly easy, we supply a set of strong detections.

These detections are from the Detectron 2 Mask R-CNN X152 (very bottom model on this [page](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md) which achieves a COCO detection mAP score of 50.2). 

We then obtain segmentation masks for these detections using the Box2Seg Network (also called Refinement Net), which results in far more accurate masks than the default Mask R-CNN masks. The code for this can be found [here](https://github.com/JonathonLuiten/PReMVOS/tree/master/code/refinement_net). 

We supply two different sets of detections. The first is the ```raw_supplied``` detections, which take all 1000 detections output by the Mask R-CNN and only remove those for which the maximum class score is less than 0.02 (no non-maximum suppression, NMS, is run here). These can be downloaded [here](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110).

The second is ```non_overlap_supplied``` detections. These are the same detections as above, but with further processing steps applied to them. First we perform Non-Maximum Suppression (NMS) with a threshold of 0.5 to remove any masks which have an IoU of 0.5 or more with any other mask that has a higher score. Second we run a Non-Overlap algorithm which forces all of the masks for a single image to be non-overlapping. It does this by putting all the masks 'on top of' each other, ordered by score, such that masks with a lower score will be partially removed if a mask with a higher score partially overlaps them. Note that these detections are still only thresholded at a score of 0.02, in general we recommend further thresholding with a higher value to get a good balance of precision and recall. 

Code for this NMS and Non-Overlap algorithm can be found here:
[Non-Overlap Code](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/non_overlap.py).

Note that for RobMOTS evaluation the final tracking results need to be 'non-overlapping' so we recommend using the ```non_overlap_supplied``` detections, however you may use the ```raw_supplied```, or your own or any other detections as you like.

Supplied detections (both raw and non-overlapping) are available for the train, val and test sets.

Example code for reading in these detections and using them can be found here:

[Tracker Example](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/stp.py).

## Creating your own tracker

We provide sample code ([Tracker Example](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/stp.py)) for our STP tracker (Simplest Tracker Possible) which walks though how to create tracking results in the required RobMOTS format.

This includes code for reading in the supplied detections and writing out the tracking results in the desired format, plus many other useful functions (IoU calculation etc).

## Evaluating your own tracker

To evaluate your tracker, put the results in the folder ```TrackEval/data/trackers/rob_mots/train/```, in a folder alongside the supplied tracker STP with the folder labelled as your tracker name, e.g. YOUR_TRACKER.

You can then run the evaluation code on your tracker like this:

```
python scripts/run_rob_mots.py --ROBMOTS_SPLIT train --TRACKERS_TO_EVAL YOUR_TRACKER --USE_PARALLEL True --NUM_PARALLEL_CORES 4
```

## Data format

For RobMOTS, trackers must submit their results in the following folder format:

```
|—— <Benchmark01>
  |—— <Benchmark01SeqName01>.txt
  |—— <Benchmark01SeqName02>.txt
  |—— <Benchmark01SeqName03>.txt
|—— <Benchmark02>
  |—— <Benchmark02SeqName01>.txt
  |—— <Benchmark02SeqName02>.txt
  |—— <Benchmark02SeqName03>.txt
```

See the supplied STP tracker results (in the Train Data linked above) for an example.

Thus there is one .txt file for each sequence. This file has one row per detection (object mask in one frame). Each row must have 7 values and has the following format:

</p>
<code>
&lt;Timestep&gt;(int),
&lt;Track ID&gt;(int),
&lt;Class Number&gt;(int),
&lt;Detection Confidence&gt;(float),
&lt;Image Height&gt;(int),
&lt;Image Width&gt;(int),
&lt;Compressed RLE Mask&gt;(string),
</code>
</p>

Timesteps are the same as the frame names for the supplied images. These start at 0.

Track IDs must be unique across all classes within a frame. They can be non-unique across different sequences.

The mapping of class numbers to class names can be found in [this file](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/datasets/rob_mots_classmap.py). Note that this is the same as used in Detectron 2, and is the default COCO class ordering with the unused numbers removed.

Detection Confidence score should be between 0 and 1. This is not used for HOTA evaluation, but is used for other eval metrics like Track mAP.

Image height and width are needed to decode the compressed RLE mask representation.

The Compressed RLE Mask is the same format used by coco, pycocotools and mots.

An example of a tracker result file looks like this:

```
0 1 3 0.9917707443237305 1200 1920 VaTi0b0lT17F8K3M3N1O1N2O0O2M3N2N101O1O1O01O1O0100O100O01O1O100O10O1000O1000000000000000O1000001O0000000000000000O101O00000000000001O0000010O0110O0O100O1O2N1O2N0O2O2M3M2N2O1O2N5J;DgePZ1
0 2 3 0.989478349685669 1200 1920 Ql^c05ZU12O2N001O0O10OTkNIaT17^kNKaT15^kNLbT14^kNMaT13^kNOaT11_kN0`T10_kN1`T11_kN0`T11_kN0`T1a0O00001O1O1O3M;E5K3M2N000000000O100000000000000000001O00001O2N1O1O1O000001O001O0O2O0O2M3M3M3N2O1O1O1N2O002N1O2N10O02N10000O1O101M3N2N2M7H^_g_1
1 2 3 0.964085042476654 1200 1920 o_Uc03\U12O1O1N102N002N001O1O000O2O1O00002N6J1O001O2N1O3L3N2N4L5K2N1O000000000000001O1O2N01O01O010O01N2O0O2O1M4L3N2N101N2O001O1O100O0100000O1O1O1O2N6I4Mdm^`1
```

Note that for the evaluation to be valid, the masks must not overlap within one frame.

The supplied detections have the same format (but with all the Track IDs being set to 0).

The groundtruth data for most benchmarks is in the exact same format as above (usually Detection Confidence is set to 1.0). The exception is the few benchmarks for which the ground-truth is not segmentation masks but bounding boxes (Waymo and TAO). For these the last three columns (height, width and mask), which encode a mask, are not present, and instead there are 4 columns encoding the bounding box co-ordinates in the format ```x0 y0 x1 y1```, where x0 and y0 are the coordinates of the top left of the box and x1 and y1 are the coordinates of the bottom right.

The groundtruth can also contain ignore regions. These are marked by having a class number of 100 or larger. Class number 100 encodes an ignore region for all classes, while class numbers higher than 100 encode ignore regions specific to each class. E.g. class number 105 marks ignore regions for class 5.

As well as the per sequence files described above, the groundtruth for each benchmark contains two more files ```clsmap.txt``` and ```seqmap.txt```. 

```clsmap.txt``` is a single row, space-separated, containing all of the valid classes that should be evaluated for each benchmark (not all benchmarks evaluate all of the coco classes). 

```seqmap.txt``` contains a list of the sequences to be evaluated for that benchmark. Each row has at least 4 values. These are:
```
<sequence name> <number of frames in sequence> <sequence image height> <sequence image width>
```
More than 4 values can be present, the remaining values are 'ignore classes for this sequence'. E.g. classes which are evaluated for the particular benchmark as a whole, but should be ignored for this sequence. 

## Visualizing GT and Tracker Masks

We provide code for converting our .txt format with compressed RLE masks into .png format where it is easy to visualize the GT and Predicted masks.

This code can be found here:

[Vizualize Tracking Results](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/vizualize.py).


## Evaluate on the validation and test server

The val and test GT will NOT be provided. However we provide a live evaluation server to upload your tracking results and evaluate it on the val and test set.

The val server will allow infinite uploads, while the test will limit trackers to 4 uploads total.

These evaluation servers can be found here: https://eval.vision.rwth-aachen.de/vision/

Ensure that your files to upload are in the correct format. Examples of the correct way to upload files can be found here: [STP val upload](https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/STP_val_upload.zip),  [STP test upload](https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/STP_test_upload.zip).

## Citation
If you work with the code and the benchmark, please cite:

***TrackEval***
```
@misc{luiten2020trackeval,
  author =       {Jonathon Luiten, Arne Hoffhues},
  title =        {TrackEval},
  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
  year =         {2020}
}
```
***HOTA metrics***
```
@article{luiten2020IJCV,
  title={HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
  author={Luiten, Jonathon and Osep, Aljosa and Dendorfer, Patrick and Torr, Philip and Geiger, Andreas and Leal-Taix{\'e}, Laura and Leibe, Bastian},
  journal={International Journal of Computer Vision},
  pages={1--31},
  year={2020},
  publisher={Springer}
}
```

## Feedback and Contact
We are constantly working on improving RobMOTS and wish to provide the most useful support to the community.
You can help us make the benchmark better by opening issues in the repo and reporting bugs.

For general questions, please contact the following:

```
Jonathon Luiten - luiten@vision.rwth-aachen.de
```


================================================
FILE: TrackEval/docs/TAO-format.txt
================================================
Taken from: https://github.com/TAO-Dataset/tao/blob/master/tao/toolkit/tao/tao.py

Annotation file format:
{
    "info" : info,
    "images" : [image],
    "videos": [video],
    "tracks": [track],
    "annotations" : [annotation],
    "categories": [category],
    "licenses" : [license],
}
info: As in MS COCO
image: {
    "id" : int,
    "video_id": int,
    "file_name" : str,
    "license" : int,
    # Redundant fields for COCO-compatibility
    "width": int,
    "height": int,
    "frame_index": int
}
video: {
    "id": int,
    "name": str,
    "width" : int,
    "height" : int,
    "neg_category_ids": [int],
    "not_exhaustive_category_ids": [int],
    "metadata": dict,  # Metadata about the video
}
track: {
    "id": int,
    "category_id": int,
    "video_id": int
}
category: {
    "id": int,
    "name": str,
    "synset": str,  # For non-LVIS objects, this is "unknown"
    ... [other fields copied from LVIS v0.5 and unused]
}
annotation: {
    "image_id": int,
    "track_id": int,
    "bbox": [x,y,width,height],
    "area": float,
    # Redundant field for compatibility with COCO scripts
    "category_id": int
}
license: {
    "id" : int,
    "name" : str,
    "url" : str,
}
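
For illustration, a minimal Python sketch for reading such an annotation file and grouping annotations by track (the file path and the code below are assumptions for illustration, not part of the TAO toolkit):

import json
from collections import defaultdict

with open('annotations/train.json') as f:  # path is an assumption
    tao = json.load(f)

# group annotations by track; field names follow the schema above
anns_by_track = defaultdict(list)
for ann in tao['annotations']:
    anns_by_track[ann['track_id']].append(ann)

# e.g. the [x, y, width, height] boxes of the first listed track
track_id = tao['tracks'][0]['id']
boxes = [a['bbox'] for a in anns_by_track[track_id]]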


================================================
FILE: TrackEval/docs/YouTube-VIS-format.txt
================================================
Taken from: https://competitions.codalab.org/competitions/20128#participate-get-data

The label file follows MSCOCO's style in json format. We adapt the entry names and label format for video. The definition of the json file is:


        {
            "info" : info,
            "videos" : [video],
            "annotations" : [annotation],
            "categories" : [category],
        }
        video{
            "id" : int,
            "width" : int,
            "height" : int,
            "length" : int,
            "file_names" : [file_name],
        }
        annotation{
            "id" : int, 
            "video_id" : int, 
            "category_id" : int, 
            "segmentations" : [RLE or [polygon] or None], 
            "areas" : [float or None], 
            "bboxes" : [[x,y,width,height] or None], 
            "iscrowd" : 0 or 1,
        }
        category{
            "id" : int, 
            "name" : str, 
            "supercategory" : str,
        }
    
The submission file is also in json format. The file should contain a list of predictions:


        prediction{
            "video_id" : int, 
            "category_id" : int, 
            "segmentations" : [RLE or [polygon] or None], 
            "score" : float, 
        }
    
The submission file should be named "results.json" and compressed without any subfolder. There is an example "valid_submission_sample.zip" in the download links above. The example was generated by our proposed MaskTrack R-CNN algorithm.
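
For illustration, a minimal sketch of writing and zipping such a submission (the prediction values below are placeholders and the zip file name is an assumption):


        import json
        import zipfile

        # a list of predictions following the prediction schema above;
        # the segmentations here are placeholders rather than real RLE data
        predictions = [
            {"video_id": 1, "category_id": 3,
             "segmentations": [None, None], "score": 0.9},
        ]

        with open("results.json", "w") as f:
            json.dump(predictions, f)

        # compress without any subfolder, as required for submission
        with zipfile.ZipFile("results.zip", "w") as z:
            z.write("results.json", arcname="results.json")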

================================================
FILE: TrackEval/minimum_requirements.txt
================================================
scipy==1.4.1
numpy==1.18.1


================================================
FILE: TrackEval/pyproject.toml
================================================
[build-system]
requires = [
    "setuptools>=42",
    "wheel"
]
build-backend = "setuptools.build_meta"


================================================
FILE: TrackEval/requirements.txt
================================================
numpy==1.18.1
scipy==1.4.1
pycocotools==2.0.2
matplotlib==3.2.1
opencv_python==4.4.0.46
scikit_image==0.16.2
pytest==6.0.1
Pillow==8.1.2


================================================
FILE: TrackEval/scripts/comparison_plots.py
================================================
import sys
import os

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

plots_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'data', 'plots'))
tracker_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'data', 'trackers'))

# dataset = os.path.join('kitti', 'kitti_2d_box_train')
# classes = ['cars', 'pedestrian']

dataset = os.path.join('mot_challenge', 'MOT17-train')
classes = ['pedestrian']

data_fol = os.path.join(tracker_folder, dataset)
trackers = os.listdir(data_fol)
out_loc = os.path.join(plots_folder, dataset)
for cls in classes:
    trackeval.plotting.plot_compare_trackers(data_fol, trackers, cls, out_loc)


================================================
FILE: TrackEval/scripts/run_bdd.py
================================================

""" run_bdd.py

Run example:
run_bdd.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL qdtrack

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
            'GT_FOLDER': os.path.join(code_path, 'data/gt/bdd100k/bdd100k_val'),  # Location of GT data
            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/bdd100k/bdd100k_val'),  # Trackers location
            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
            'CLASSES_TO_EVAL': ['pedestrian', 'rider', 'car', 'bus', 'truck', 'train', 'motorcycle', 'bicycle'],
            # Valid: ['pedestrian', 'rider', 'car', 'bus', 'truck', 'train', 'motorcycle', 'bicycle']
            'SPLIT_TO_EVAL': 'val',  # Valid: 'training', 'val',
            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
            'PRINT_CONFIG': True,  # Whether to print current config
            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
    Metric arguments:
        'METRICS': ['Hota','Clear', 'ID', 'Count']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['PRINT_ONLY_COMBINED'] = True
    default_dataset_config = trackeval.datasets.BDD100K.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.BDD100K(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)

================================================
FILE: TrackEval/scripts/run_davis.py
================================================
""" run_davis.py

Run example:
run_davis.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL ags

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/davis/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/davis/davis_val'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'SPLIT_TO_EVAL': 'val',  # Valid: 'val', 'train'
        'PRINT_CONFIG': True,  # Whether to print current config
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
        'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/ImageSets/2017)
        'SEQMAP_FILE': None,  # Directly specify seqmap file (if none use seqmap_folder/split-to-eval.txt)
        'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps
        'GT_LOC_FORMAT': '{gt_folder}/Annotations_unsupervised/480p/{seq}',
        # '{gt_folder}/Annotations_unsupervised/480p/{seq}'
        'MAX_DETECTIONS': 0  # Maximum number of allowed detections per sequence (0 for no threshold)
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'JAndF']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_dataset_config = trackeval.datasets.DAVIS.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity', 'JAndF']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.DAVIS(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.JAndF]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)

================================================
FILE: TrackEval/scripts/run_headtracking_challenge.py
================================================

""" run_mot_challenge.py

Run example:
run_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_T

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']
        'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
        'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'DO_PREPROC': True,  # Whether to perform preprocessing (never done for 2D_MOT_2015)
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'IDEucl']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.HeadTrackingChallenge.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity', 'IDEucl'], 'THRESHOLD': 0.4}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            elif setting == 'SEQ_INFO':
                x = dict(zip(args[setting], [None]*len(args[setting])))
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.HeadTrackingChallenge(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.IDEucl]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric(metrics_config))
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)


================================================
FILE: TrackEval/scripts/run_kitti.py
================================================

""" run_kitti.py

Run example:
run_kitti.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL CIWT

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/kitti/kitti_2d_box_train'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/kitti/kitti_2d_box_train/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['car', 'pedestrian'],  # Valid: ['car', 'pedestrian']
        'SPLIT_TO_EVAL': 'training',  # Valid: 'training', 'val', 'training_minus_val', 'test'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': ''  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
    Metric arguments:
        'METRICS': ['Hota','Clear', 'ID', 'Count']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.Kitti2DBox.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.Kitti2DBox(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)


================================================
FILE: TrackEval/scripts/run_kitti_mots.py
================================================

""" run_kitti_mots.py

Run example:
run_kitti_mots.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL trackrcnn

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/kitti/kitti_mots'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/kitti/kitti_mots_val'),   # Location of all
                                                                                            # trackers
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['car', 'pedestrian'],  # Valid: ['car', 'pedestrian']
        'SPLIT_TO_EVAL': 'val',  # Valid: 'training', 'val'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER)
        'SEQMAP_FILE': None,    # Directly specify seqmap file (if none use seqmap_folder/split_to_eval.seqmap)
        'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps
        'GT_LOC_FORMAT': '{gt_folder}/instances_txt/{seq}.txt',  # format of gt localization
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.KittiMOTS.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.KittiMOTS(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.JAndF]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)


================================================
FILE: TrackEval/scripts/run_mot_challenge.py
================================================

""" run_mot_challenge.py

Run example:
run_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_T

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']
        'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
        'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'DO_PREPROC': True,  # Whether to perform preprocessing (never done for 2D_MOT_2015)
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'VACE']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

# python TrackEval/scripts/run_mot_challenge.py --BENCHMARK MOT17 --SPLIT_TO_EVAL train --TRACKERS_TO_EVAL ByteTrack --METRICS HOTA CLEAR Identity VACE --TIME_PROGRESS False --USE_PARALLEL False --NUM_PARALLEL_CORES 1  --GT_FOLDER datasets/mot/ --TRACKERS_FOLDER YOLOX_outputs/yolox_s_mot17_half_repro1/track_results_ByteTrack/track_results --GT_LOC_FORMAT {gt_folder}/{seq}/gt/gt_val_half.txt
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity'], 'THRESHOLD': 0.5}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            elif setting == 'SEQ_INFO':
                x = dict(zip(args[setting], [None]*len(args[setting])))
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    if type(dataset_config['SEQMAP_FILE']) is list:         # TODO: [hgx 0409] for dancetrack dataset
        dataset_config['SEQMAP_FILE'] = dataset_config['SEQMAP_FILE'][0]

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.VACE]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric(metrics_config))
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)


================================================
FILE: TrackEval/scripts/run_mots_challenge.py
================================================
""" run_mots.py

Run example:
run_mots.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL TrackRCNN

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']
        'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
        'SEQMAP_FILE': None,  # Directly specify seqmap file (if none use seqmap_folder/MOTS-split_to_eval)
        'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps
        'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt',  # '{gt_folder}/{seq}/gt/gt.txt'
        'SKIP_SPLIT_FOL': False,    # If False, data is in GT_FOLDER/MOTS-SPLIT_TO_EVAL/ and in
                                    # TRACKERS_FOLDER/MOTS-SPLIT_TO_EVAL/tracker/
                                    # If True, then the middle 'MOTS-split' folder is skipped for both.
    Metric arguments:
        'METRICS': ['HOTA','CLEAR', 'Identity', 'VACE', 'JAndF']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.MOTSChallenge.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            elif setting == 'SEQ_INFO':
                x = dict(zip(args[setting], [None]*len(args[setting])))
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.MOTSChallenge(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.VACE,
                   trackeval.metrics.JAndF]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)


================================================
FILE: TrackEval/scripts/run_rob_mots.py
================================================
# python3 scripts/run_rob_mots.py --ROBMOTS_SPLIT train --TRACKERS_TO_EVAL STP --USE_PARALLEL True --NUM_PARALLEL_CORES 8

import sys
import os
import csv
import numpy as np
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402
from trackeval import utils

code_path = utils.get_code_path()

if __name__ == '__main__':
    freeze_support()

    script_config = {
        'ROBMOTS_SPLIT': 'train',  # valid: 'train', 'val', 'test', 'test_live', 'test_post', 'test_all'
        'BENCHMARKS': None,  # If None, use all for each split.
        'GT_FOLDER': os.path.join(code_path, 'data/gt/rob_mots'),
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/rob_mots'),
    }

    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['PRINT_ONLY_COMBINED'] = True
    default_eval_config['DISPLAY_LESS_PROGRESS'] = True
    default_dataset_config = trackeval.datasets.RobMOTS.get_default_dataset_config()
    config = {**default_eval_config, **default_dataset_config, **script_config}

    # Command line interface:
    config = utils.update_config(config)

    if not config['BENCHMARKS']:
        if config['ROBMOTS_SPLIT'] == 'val':
            config['BENCHMARKS'] = ['kitti_mots', 'bdd_mots', 'davis_unsupervised', 'youtube_vis', 'ovis',
                                    'tao', 'mots_challenge', 'waymo']
            config['SPLIT_TO_EVAL'] = 'val'
        elif config['ROBMOTS_SPLIT'] == 'test' or config['ROBMOTS_SPLIT'] == 'test_live':
            config['BENCHMARKS'] = ['kitti_mots', 'bdd_mots', 'davis_unsupervised', 'youtube_vis', 'tao']
            config['SPLIT_TO_EVAL'] = 'test'
        elif config['ROBMOTS_SPLIT'] == 'test_post':
            config['BENCHMARKS'] = ['mots_challenge', 'waymo', 'ovis']
            config['SPLIT_TO_EVAL'] = 'test'
        elif config['ROBMOTS_SPLIT'] == 'test_all':
            config['BENCHMARKS'] = ['kitti_mots', 'bdd_mots', 'davis_unsupervised', 'youtube_vis', 'ovis',
                                    'tao', 'mots_challenge', 'waymo']
            config['SPLIT_TO_EVAL'] = 'test'
        elif config['ROBMOTS_SPLIT'] == 'train':
            config['BENCHMARKS'] = ['kitti_mots', 'davis_unsupervised', 'youtube_vis', 'ovis', 'tao', 'bdd_mots']
            config['SPLIT_TO_EVAL'] = 'train'
    else:
        config['SPLIT_TO_EVAL'] = config['ROBMOTS_SPLIT']

    metrics_config = {'METRICS': ['HOTA']}
    eval_config = {k: v for k, v in config.items() if k in config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in config.keys()}

    # Run code
    try:
        dataset_list = []
        for bench in config['BENCHMARKS']:
            dataset_config['SUB_BENCHMARK'] = bench
            dataset_list.append(trackeval.datasets.RobMOTS(dataset_config))
        evaluator = trackeval.Evaluator(eval_config)
        metrics_list = []
        for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity,
                       trackeval.metrics.VACE, trackeval.metrics.JAndF]:
            if metric.get_name() in metrics_config['METRICS']:
                metrics_list.append(metric())
        if len(metrics_list) == 0:
            raise Exception('No metrics selected for evaluation')
        output_res, output_msg = evaluator.evaluate(dataset_list, metrics_list)
        output = list(list(output_msg.values())[0].values())[0]

    except Exception as err:
        if type(err) == trackeval.utils.TrackEvalException:
            output = str(err)
        else:
            output = 'Unknown error occurred.'

    success = output == 'Success'
    if not success:
        output = 'ERROR, evaluation failed. \n\nError message: ' + output
        print(output)

    if config['TRACKERS_TO_EVAL']:
        msg = "Thank you for participating in the RobMOTS benchmark.\n\n"
        msg += "The status of your evaluation is: \n" + output + '\n\n'
        msg += "If your tracking results evaluated successfully on the evaluation server you can see your results here: \n"
        msg += "https://eval.vision.rwth-aachen.de/vision/"
        status_file = os.path.join(config['TRACKERS_FOLDER'], config['ROBMOTS_SPLIT'], config['TRACKERS_TO_EVAL'][0],
                                   'status.txt')
        with open(status_file, 'w', newline='') as f:
            f.write(msg)

    if success:
        # For each benchmark, combine the 'all' score with the 'cls_averaged' using geometric mean.
        metrics_to_calc = ['HOTA', 'DetA', 'AssA', 'DetRe', 'DetPr', 'AssRe', 'AssPr', 'LocA']
        trackers = list(output_res['RobMOTS.' + config['BENCHMARKS'][0]].keys())
        for tracker in trackers:
            # final_results[benchmark][result_type][metric]
            final_results = {}
            res = {bench: output_res['RobMOTS.' + bench][tracker]['COMBINED_SEQ'] for bench in config['BENCHMARKS']}
            for bench in config['BENCHMARKS']:
                final_results[bench] = {'cls_av': {}, 'det_av': {}, 'final': {}}
                for metric in metrics_to_calc:
                    final_results[bench]['cls_av'][metric] = np.mean(res[bench]['cls_comb_cls_av']['HOTA'][metric])
                    final_results[bench]['det_av'][metric] = np.mean(res[bench]['all']['HOTA'][metric])
                    final_results[bench]['final'][metric] = \
                        np.sqrt(final_results[bench]['cls_av'][metric] * final_results[bench]['det_av'][metric])

            # Take the arithmetic mean over all the benchmarks
            final_results['overall'] = {'cls_av': {}, 'det_av': {}, 'final': {}}
            for metric in metrics_to_calc:
                final_results['overall']['cls_av'][metric] = \
                    np.mean([final_results[bench]['cls_av'][metric] for bench in config['BENCHMARKS']])
                final_results['overall']['det_av'][metric] = \
                    np.mean([final_results[bench]['det_av'][metric] for bench in config['BENCHMARKS']])
                final_results['overall']['final'][metric] = \
                    np.mean([final_results[bench]['final'][metric] for bench in config['BENCHMARKS']])

            # Save out result
            headers = [config['SPLIT_TO_EVAL']] + [x + '___' + metric for x in ['f', 'c', 'd'] for metric in
                                                   metrics_to_calc]


            def rowify(d):
                return [d[x][metric] for x in ['final', 'cls_av', 'det_av'] for metric in metrics_to_calc]


            out_file = os.path.join(config['TRACKERS_FOLDER'], config['ROBMOTS_SPLIT'], tracker,
                                    'final_results.csv')

            with open(out_file, 'w', newline='') as f:
                writer = csv.writer(f, delimiter=',')
                writer.writerow(headers)
                writer.writerow(['overall'] + rowify(final_results['overall']))
                for bench in config['BENCHMARKS']:
                    if bench == 'overall':
                        continue
                    writer.writerow([bench] + rowify(final_results[bench]))


================================================
FILE: TrackEval/scripts/run_tao.py
================================================
""" run_tao.py

Run example:
run_tao.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL Tracktor++

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/tao/tao_training'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/tao/tao_training'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': None,  # Classes to eval (if None, all classes)
        'SPLIT_TO_EVAL': 'training',  # Valid: 'training', 'val'
        'PRINT_CONFIG': True,  # Whether to print current config
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
        'MAX_DETECTIONS': 300,  # Number of maximal allowed detections per image (0 for unlimited)
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'TrackMAP']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    # print only combined since TrackMAP is undefined for per sequence breakdowns
    default_eval_config['PRINT_ONLY_COMBINED'] = True
    default_eval_config['DISPLAY_LESS_PROGRESS'] = True
    default_dataset_config = trackeval.datasets.TAO.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity', 'TrackMAP']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.TAO(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.TrackMAP, trackeval.metrics.CLEAR, trackeval.metrics.Identity,
                   trackeval.metrics.HOTA]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)

================================================
FILE: TrackEval/scripts/run_tao_ow.py
================================================
""" run_tao.py

Run example:
run_tao.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL Tracktor++

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/tao/tao_training'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/tao/tao_training'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': None,  # Classes to eval (if None, all classes)
        'SPLIT_TO_EVAL': 'training',  # Valid: 'training', 'val'
        'PRINT_CONFIG': True,  # Whether to print current config
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
        'MAX_DETECTIONS': 300,  # Number of maximal allowed detections per image (0 for unlimited)
        'SUBSET': 'unknown',  # Evaluate on the following subsets ['all', 'known', 'unknown', 'distractor']
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'TrackMAP']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    # print only combined since TrackMAP is undefined for per sequence breakdowns
    default_eval_config['PRINT_ONLY_COMBINED'] = True
    default_eval_config['DISPLAY_LESS_PROGRESS'] = True
    default_dataset_config = trackeval.datasets.TAO_OW.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity', 'TrackMAP']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.TAO_OW(dataset_config)]
    metrics_list = []
    # for metric in [trackeval.metrics.TrackMAP, trackeval.metrics.CLEAR, trackeval.metrics.Identity,
    #                trackeval.metrics.HOTA]:
    for metric in [trackeval.metrics.HOTA]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)

================================================
FILE: TrackEval/scripts/run_youtube_vis.py
================================================

""" run_youtube_vis.py
Run example:
run_youtube_vis.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL STEm_Seg
Command Line Arguments: Defaults, # Comments
    Eval arguments:
            'USE_PARALLEL': False,
            'NUM_PARALLEL_CORES': 8,
            'BREAK_ON_ERROR': True,  # Raises exception and exits with error
            'RETURN_ON_ERROR': False,  # if not BREAK_ON_ERROR, then returns from function on error
            'LOG_ON_ERROR': os.path.join(code_path, 'error_log.txt'),  # if not None, save any errors into a log file.
            'PRINT_RESULTS': True,
            'PRINT_ONLY_COMBINED': False,
            'PRINT_CONFIG': True,
            'TIME_PROGRESS': True,
            'DISPLAY_LESS_PROGRESS': True,
            'OUTPUT_SUMMARY': True,
            'OUTPUT_EMPTY_CLASSES': True,  # If False, summary files are not output for classes with no detections
            'OUTPUT_DETAILED': True,
            'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/youtube_vis/youtube_vis_training'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/youtube_vis/youtube_vis_training'),
        # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': None,  # Classes to eval (if None, all classes)
        'SPLIT_TO_EVAL': 'training',  # Valid: 'training', 'val'
        'PRINT_CONFIG': True,  # Whether to print current config
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
    Metric arguments:
        'METRICS': ['TrackMAP', 'HOTA', 'CLEAR', 'Identity']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    # print only combined since TrackMAP is undefined for per sequence breakdowns
    default_eval_config['PRINT_ONLY_COMBINED'] = True
    default_dataset_config = trackeval.datasets.YouTubeVIS.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['TrackMAP', 'HOTA', 'CLEAR', 'Identity']}
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    # Run code
    evaluator = trackeval.Evaluator(eval_config)
    dataset_list = [trackeval.datasets.YouTubeVIS(dataset_config)]
    metrics_list = []
    for metric in [trackeval.metrics.TrackMAP, trackeval.metrics.HOTA, trackeval.metrics.CLEAR,
                   trackeval.metrics.Identity]:
        if metric.get_name() in metrics_config['METRICS']:
            # specify TrackMAP config for YouTubeVIS
            if metric == trackeval.metrics.TrackMAP:
                default_track_map_config = metric.get_default_metric_config()
                default_track_map_config['USE_TIME_RANGES'] = False
                default_track_map_config['AREA_RANGES'] = [[0 ** 2, 128 ** 2],
                                                           [128 ** 2, 256 ** 2],
                                                           [256 ** 2, 1e5 ** 2]]
                metrics_list.append(metric(default_track_map_config))
            else:
                metrics_list.append(metric())
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)

================================================
FILE: TrackEval/setup.cfg
================================================
[metadata]
name = trackeval
version = 1.0.dev1
author = Jonathon Luiten, Arne Hoffhues
author_email = jonoluiten@gmail.com
description = Code for evaluating object tracking
long_description = file: Readme.md
long_description_content_type = text/markdown
url = https://github.com/JonathonLuiten/TrackEval
project_urls =
    Bug Tracker = https://github.com/JonathonLuiten/TrackEval/issues
classifiers =
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3 :: Only
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent
    Topic :: Scientific/Engineering
license_files = LICENSE

[options]
install_requires =
    numpy
    scipy
packages = find:

[options.packages.find]
include = trackeval*


================================================
FILE: TrackEval/setup.py
================================================
from setuptools import setup

setup()
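
# Packaging metadata lives in setup.cfg (declarative setuptools config); a local editable install is
# typically done with `pip install -e .` from the TrackEval directory.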


================================================
FILE: TrackEval/tests/test_all_quick.py
================================================
""" Test to ensure that the code is working correctly.
Should test ALL metrics across all datasets and splits currently supported.
Only tests one tracker per dataset/split to give a quick test result.
"""

import sys
import os
import numpy as np
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

# Fixes multiprocessing on windows, does nothing otherwise
if __name__ == '__main__':
    freeze_support()

eval_config = {'USE_PARALLEL': False,
               'NUM_PARALLEL_CORES': 8,
               }
evaluator = trackeval.Evaluator(eval_config)
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity()]

tests = [
    {'DATASET': 'Kitti2DBox', 'SPLIT_TO_EVAL': 'training', 'TRACKERS_TO_EVAL': ['CIWT']},
    {'DATASET': 'MotChallenge2DBox', 'BENCHMARK': 'MOT15', 'SPLIT_TO_EVAL': 'train', 'TRACKERS_TO_EVAL': ['MPNTrack']},
    {'DATASET': 'MotChallenge2DBox', 'BENCHMARK': 'MOT16', 'SPLIT_TO_EVAL': 'train', 'TRACKERS_TO_EVAL': ['MPNTrack']},
    {'DATASET': 'MotChallenge2DBox', 'BENCHMARK': 'MOT17', 'SPLIT_TO_EVAL': 'train', 'TRACKERS_TO_EVAL': ['MPNTrack']},
    {'DATASET': 'MotChallenge2DBox', 'BENCHMARK': 'MOT20', 'SPLIT_TO_EVAL': 'train', 'TRACKERS_TO_EVAL': ['MPNTrack']},
]

for dataset_config in tests:

    dataset_name = dataset_config.pop('DATASET')
    if dataset_name == 'MotChallenge2DBox':
        dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
        file_loc = os.path.join('mot_challenge', dataset_config['BENCHMARK'] + '-' + dataset_config['SPLIT_TO_EVAL'])
    elif dataset_name == 'Kitti2DBox':
        dataset_list = [trackeval.datasets.Kitti2DBox(dataset_config)]
        file_loc = os.path.join('kitti', 'kitti_2d_box_train')
    else:
        raise Exception('Dataset %s does not exist.' % dataset_name)

    raw_results, messages = evaluator.evaluate(dataset_list, metrics_list)

    classes = dataset_list[0].config['CLASSES_TO_EVAL']
    tracker = dataset_config['TRACKERS_TO_EVAL'][0]
    test_data_loc = os.path.join(os.path.dirname(__file__), '..', 'data', 'tests', file_loc)

    for cls in classes:
        results = {seq: raw_results[dataset_name][tracker][seq][cls] for seq in raw_results[dataset_name][tracker].keys()}
        current_metrics_list = metrics_list + [trackeval.metrics.Count()]
        metric_names = trackeval.utils.validate_metrics_list(current_metrics_list)

        # Load expected results:
        test_data = trackeval.utils.load_detail(os.path.join(test_data_loc, tracker, cls + '_detailed.csv'))

        # Do checks
        for seq in test_data.keys():
            assert len(test_data[seq].keys()) > 250, len(test_data[seq].keys())

            details = []
            for metric, metric_name in zip(current_metrics_list, metric_names):
                table_res = {seq_key: seq_value[metric_name] for seq_key, seq_value in results.items()}
                details.append(metric.detailed_results(table_res))
            res_fields = sum([list(s['COMBINED_SEQ'].keys()) for s in details], [])
            res_values = sum([list(s[seq].values()) for s in details], [])
            res_dict = dict(zip(res_fields, res_values))

            for field in test_data[seq].keys():
                assert np.isclose(res_dict[field], test_data[seq][field]), seq + ': ' + cls + ': ' + field

    print('Tracker %s tests passed' % tracker)
print('All tests passed')



================================================
FILE: TrackEval/tests/test_davis.py
================================================
import sys
import os
import numpy as np
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

# Fixes multiprocessing on windows, does nothing otherwise
if __name__ == '__main__':
    freeze_support()


eval_config = {'USE_PARALLEL': False,
               'NUM_PARALLEL_CORES': 8,
               'PRINT_RESULTS': False,
               'PRINT_CONFIG': True,
               'TIME_PROGRESS': True,
               'DISPLAY_LESS_PROGRESS': True,
               'OUTPUT_SUMMARY': False,
               'OUTPUT_EMPTY_CLASSES': False,
               'OUTPUT_DETAILED': False,
               'PLOT_CURVES': False,
               }
evaluator = trackeval.Evaluator(eval_config)
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity(),
                trackeval.metrics.JAndF()]

tests = [
    {'SPLIT_TO_EVAL': 'val', 'TRACKERS_TO_EVAL': ['ags']},
]

for dataset_config in tests:

    dataset_list = [trackeval.datasets.DAVIS(dataset_config)]
    file_loc = os.path.join('davis', 'davis_unsupervised_' + dataset_config['SPLIT_TO_EVAL'])

    raw_results, messages = evaluator.evaluate(dataset_list, metrics_list)

    classes = dataset_list[0].config['CLASSES_TO_EVAL']
    tracker = dataset_config['TRACKERS_TO_EVAL'][0]
    test_data_loc = os.path.join(os.path.dirname(__file__), '..', 'data', 'tests', file_loc)

    for cls in classes:
        results = {seq: raw_results['DAVIS'][tracker][seq][cls] for seq in raw_results['DAVIS'][tracker].keys()}
        current_metrics_list = metrics_list + [trackeval.metrics.Count()]
        metric_names = trackeval.utils.validate_metrics_list(current_metrics_list)

        # Load expected results:
        test_data = trackeval.utils.load_detail(os.path.join(test_data_loc, tracker, cls + '_detailed.csv'))

        # Do checks
        for seq in test_data.keys():
            assert len(test_data[seq].keys()) > 250, len(test_data[seq].keys())

            details = []
            for metric, metric_name in zip(current_metrics_list, metric_names):
                table_res = {seq_key: seq_value[metric_name] for seq_key, seq_value in results.items()}
                details.append(metric.detailed_results(table_res))
            res_fields = sum([list(s['COMBINED_SEQ'].keys()) for s in details], [])
            res_values = sum([list(s[seq].values()) for s in details], [])
            res_dict = dict(zip(res_fields, res_values))

            for field in test_data[seq].keys():
                assert np.isclose(res_dict[field], test_data[seq][field]), seq + ': ' + cls + ': ' + field

    print('Tracker %s tests passed' % tracker)
print('All tests passed')

================================================
FILE: TrackEval/tests/test_metrics.py
================================================
import numpy as np
import pytest

import trackeval


def no_confusion():
    num_timesteps = 5
    num_gt_ids = 2
    num_tracker_ids = 2

    # No overlap between pairs (0, 0) and (1, 1).
    similarity = np.zeros([num_timesteps, num_gt_ids, num_tracker_ids])
    similarity[:, 0, 1] = [0, 0, 0, 1, 1]
    similarity[:, 1, 0] = [1, 1, 0, 0, 0]
    gt_present = np.zeros([num_timesteps, num_gt_ids])
    gt_present[:, 0] = [1, 1, 1, 1, 1]
    gt_present[:, 1] = [1, 1, 1, 0, 0]
    tracker_present = np.zeros([num_timesteps, num_tracker_ids])
    tracker_present[:, 0] = [1, 1, 1, 1, 0]
    tracker_present[:, 1] = [1, 1, 1, 1, 1]

    expected = {
            'clear': {
                    'CLR_TP': 4,
                    'CLR_FN': 4,
                    'CLR_FP': 5,
                    'IDSW': 0,
                    'MOTA': 1 - 9 / 8,
            },
            'identity': {
                    'IDTP': 4,
                    'IDFN': 4,
                    'IDFP': 5,
                    'IDR': 4 / 8,
                    'IDP': 4 / 9,
                    'IDF1': 2 * 4 / 17,
            },
            'vace': {
                    'STDA': 2 / 5 + 2 / 4,
                    'ATA': (2 / 5 + 2 / 4) / 2,
            },
    }

    data = _from_dense(
            num_timesteps=num_timesteps,
            num_gt_ids=num_gt_ids,
            num_tracker_ids=num_tracker_ids,
            gt_present=gt_present,
            tracker_present=tracker_present,
            similarity=similarity,
    )
    return data, expected


def with_confusion():
    num_timesteps = 5
    num_gt_ids = 2
    num_tracker_ids = 2

    similarity = np.zeros([num_timesteps, num_gt_ids, num_tracker_ids])
    similarity[:, 0, 1] = [0, 0, 0, 1, 1]
    similarity[:, 1, 0] = [1, 1, 0, 0, 0]
    # Add some overlap between (0, 0) and (1, 1).
    similarity[:, 0, 0] = [0, 0, 1, 0, 0]
    similarity[:, 1, 1] = [0, 1, 0, 0, 0]
    gt_present = np.zeros([num_timesteps, num_gt_ids])
    gt_present[:, 0] = [1, 1, 1, 1, 1]
    gt_present[:, 1] = [1, 1, 1, 0, 0]
    tracker_present = np.zeros([num_timesteps, num_tracker_ids])
    tracker_present[:, 0] = [1, 1, 1, 1, 0]
    tracker_present[:, 1] = [1, 1, 1, 1, 1]

    expected = {
            'clear': {
                    'CLR_TP': 5,
                    'CLR_FN': 3,  # 8 - 5
                    'CLR_FP': 4,  # 9 - 5
                    'IDSW': 1,
                    'MOTA': 1 - 8 / 8,
            },
            'identity': {
                    'IDTP': 4,
                    'IDFN': 4,
                    'IDFP': 5,
                    'IDR': 4 / 8,
                    'IDP': 4 / 9,
                    'IDF1': 2 * 4 / 17,
            },
            'vace': {
                    'STDA': 2 / 5 + 2 / 4,
                    'ATA': (2 / 5 + 2 / 4) / 2,
            },
    }

    data = _from_dense(
            num_timesteps=num_timesteps,
            num_gt_ids=num_gt_ids,
            num_tracker_ids=num_tracker_ids,
            gt_present=gt_present,
            tracker_present=tracker_present,
            similarity=similarity,
    )
    return data, expected


def split_tracks():
    num_timesteps = 5
    num_gt_ids = 2
    num_tracker_ids = 5

    similarity = np.zeros([num_timesteps, num_gt_ids, num_tracker_ids])
    # Split ground-truth 0 between tracks 0, 3.
    similarity[:, 0, 0] = [1, 1, 0, 0, 0]
    similarity[:, 0, 3] = [0, 0, 0, 1, 1]
    # Split ground-truth 1 between tracks 1, 2, 4.
    similarity[:, 1, 1] = [0, 0, 1, 1, 0]
    similarity[:, 1, 2] = [0, 0, 0, 0, 1]
    similarity[:, 1, 4] = [1, 1, 0, 0, 0]
    gt_present = np.zeros([num_timesteps, num_gt_ids])
    gt_present[:, 0] = [1, 1, 0, 1, 1]
    gt_present[:, 1] = [1, 1, 1, 1, 1]
    tracker_present = np.zeros([num_timesteps, num_tracker_ids])
    tracker_present[:, 0] = [1, 1, 0, 0, 0]
    tracker_present[:, 1] = [0, 0, 1, 1, 1]
    tracker_present[:, 2] = [0, 0, 0, 0, 1]
    tracker_present[:, 3] = [0, 0, 1, 1, 1]
    tracker_present[:, 4] = [1, 1, 0, 0, 0]

    expected = {
            'clear': {
                    'CLR_TP': 9,
                    'CLR_FN': 0,  # 9 - 9
                    'CLR_FP': 2,  # 11 - 9
                    'IDSW': 3,
                    'MOTA': 1 - 5 / 9,
            },
            'identity': {
                    'IDTP': 4,
                    'IDFN': 5,  # 9 - 4
                    'IDFP': 7,  # 11 - 4
                    'IDR': 4 / 9,
                    'IDP': 4 / 11,
                    'IDF1': 2 * 4 / 20,
            },
            'vace': {
                    'STDA': 2 / 4 + 2 / 5,
                    'ATA': (2 / 4 + 2 / 5) / (0.5 * (2 + 5)),
            },
    }

    data = _from_dense(
            num_timesteps=num_timesteps,
            num_gt_ids=num_gt_ids,
            num_tracker_ids=num_tracker_ids,
            gt_present=gt_present,
            tracker_present=tracker_present,
            similarity=similarity,
    )
    return data, expected


def _from_dense(num_timesteps, num_gt_ids, num_tracker_ids, gt_present, tracker_present, similarity):
    gt_subset = [np.flatnonzero(gt_present[t, :]) for t in range(num_timesteps)]
    tracker_subset = [np.flatnonzero(tracker_present[t, :]) for t in range(num_timesteps)]
    similarity_subset = [
            similarity[t][gt_subset[t], :][:, tracker_subset[t]]
            for t in range(num_timesteps)
    ]
    data = {
            'num_timesteps': num_timesteps,
            'num_gt_ids': num_gt_ids,
            'num_tracker_ids': num_tracker_ids,
            'num_gt_dets': np.sum(gt_present),
            'num_tracker_dets': np.sum(tracker_present),
            'gt_ids': gt_subset,
            'tracker_ids': tracker_subset,
            'similarity_scores': similarity_subset,
    }
    return data


METRICS_BY_NAME = {
        'clear': trackeval.metrics.CLEAR(),
        'identity': trackeval.metrics.Identity(),
        'vace': trackeval.metrics.VACE(),
}

SEQUENCE_BY_NAME = {
        'no_confusion': no_confusion(),
        'with_confusion': with_confusion(),
        'split_tracks': split_tracks(),
}


@pytest.mark.parametrize('sequence_name,metric_name', [
        ('no_confusion', 'clear'),
        ('no_confusion', 'identity'),
        ('no_confusion', 'vace'),
        ('with_confusion', 'clear'),
        ('with_confusion', 'identity'),
        ('with_confusion', 'vace'),
        ('split_tracks', 'clear'),
        ('split_tracks', 'identity'),
        ('split_tracks', 'vace'),
])
def test_metric(sequence_name, metric_name):
    data, expected = SEQUENCE_BY_NAME[sequence_name]
    metric = METRICS_BY_NAME[metric_name]
    result = metric.eval_sequence(data)
    for key, value in expected[metric_name].items():
        assert result[key] == pytest.approx(value), key
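
# The cases above are ordinary pytest parametrizations, so they can be run with, for example:
#   python -m pytest tests/test_metrics.py -q      (from the TrackEval directory)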


================================================
FILE: TrackEval/tests/test_mot17.py
================================================
""" Test to ensure that the code is working correctly.
Runs all metrics on 14 trackers for the MOT Challenge MOT17 benchmark.
"""


import sys
import os
import numpy as np
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

# Fixes multiprocessing on windows, does nothing otherwise
if __name__ == '__main__':
    freeze_support()

eval_config = {'USE_PARALLEL': False,
               'NUM_PARALLEL_CORES': 8,
               }
evaluator = trackeval.Evaluator(eval_config)
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity()]
test_data_loc = os.path.join(os.path.dirname(__file__), '..', 'data', 'tests', 'mot_challenge', 'MOT17-train')
trackers = [
    'DPMOT',
    'GNNMatch',
    'IA',
    'ISE_MOT17R',
    'Lif_T',
    'Lif_TsimInt',
    'LPC_MOT',
    'MAT',
    'MIFTv2',
    'MPNTrack',
    'SSAT',
    'TracktorCorr',
    'Tracktorv2',
    'UnsupTrack',
]

for tracker in trackers:
    # Run code on tracker
    dataset_config = {'TRACKERS_TO_EVAL': [tracker],
                      'BENCHMARK': 'MOT17'}
    dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
    raw_results, messages = evaluator.evaluate(dataset_list, metrics_list)

    results = {seq: raw_results['MotChallenge2DBox'][tracker][seq]['pedestrian'] for seq in
               raw_results['MotChallenge2DBox'][tracker].keys()}
    current_metrics_list = metrics_list + [trackeval.metrics.Count()]
    metric_names = trackeval.utils.validate_metrics_list(current_metrics_list)

    # Load expected results:
    test_data = trackeval.utils.load_detail(os.path.join(test_data_loc, tracker, 'pedestrian_detailed.csv'))
    assert len(test_data.keys()) == 22, len(test_data.keys())

    # Do checks
    for seq in test_data.keys():
        assert len(test_data[seq].keys()) > 250, len(test_data[seq].keys())

        details = []
        for metric, metric_name in zip(current_metrics_list, metric_names):
            table_res = {seq_key: seq_value[metric_name] for seq_key, seq_value in results.items()}
            details.append(metric.detailed_results(table_res))
        res_fields = sum([list(s['COMBINED_SEQ'].keys()) for s in details], [])
        res_values = sum([list(s[seq].values()) for s in details], [])
        res_dict = dict(zip(res_fields, res_values))

        for field in test_data[seq].keys():
            if not np.isclose(res_dict[field], test_data[seq][field]):
                print(tracker, seq, res_dict[field], test_data[seq][field], field)
                raise AssertionError

    print('Tracker %s tests passed' % tracker)
print('All tests passed')



================================================
FILE: TrackEval/tests/test_mots.py
================================================
import sys
import os
import numpy as np
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

# Fixes multiprocessing on windows, does nothing otherwise
if __name__ == '__main__':
    freeze_support()

eval_config = {'USE_PARALLEL': False,
               'NUM_PARALLEL_CORES': 8,
               }
evaluator = trackeval.Evaluator(eval_config)
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity()]

tests = [
    {'DATASET': 'KittiMOTS', 'SPLIT_TO_EVAL': 'val', 'TRACKERS_TO_EVAL': ['trackrcnn']},
    {'DATASET': 'MOTSChallenge', 'SPLIT_TO_EVAL': 'train', 'TRACKERS_TO_EVAL': ['TrackRCNN']}
]

for dataset_config in tests:

    dataset_name = dataset_config.pop('DATASET')
    if dataset_name == 'MOTSChallenge':
        dataset_list = [trackeval.datasets.MOTSChallenge(dataset_config)]
        file_loc = os.path.join('mot_challenge', 'MOTS-' + dataset_config['SPLIT_TO_EVAL'])
    elif dataset_name == 'KittiMOTS':
        dataset_list = [trackeval.datasets.KittiMOTS(dataset_config)]
        file_loc = os.path.join('kitti', 'kitti_mots_val')
    else:
        raise Exception('Dataset %s does not exist.' % dataset_name)

    raw_results, messages = evaluator.evaluate(dataset_list, metrics_list)

    classes = dataset_list[0].config['CLASSES_TO_EVAL']
    tracker = dataset_config['TRACKERS_TO_EVAL'][0]
    test_data_loc = os.path.join(os.path.dirname(__file__), '..', 'data', 'tests', file_loc)

    for cls in classes:
        results = {seq: raw_results[dataset_name][tracker][seq][cls] for seq in raw_results[dataset_name][tracker].keys()}
        current_metrics_list = metrics_list + [trackeval.metrics.Count()]
        metric_names = trackeval.utils.validate_metrics_list(current_metrics_list)

        # Load expected results:
        test_data = trackeval.utils.load_detail(os.path.join(test_data_loc, tracker, cls + '_detailed.csv'))

        # Do checks
        for seq in test_data.keys():
            assert len(test_data[seq].keys()) > 250, len(test_data[seq].keys())

            details = []
            for metric, metric_name in zip(current_metrics_list, metric_names):
                table_res = {seq_key: seq_value[metric_name] for seq_key, seq_value in results.items()}
                details.append(metric.detailed_results(table_res))
            res_fields = sum([list(s['COMBINED_SEQ'].keys()) for s in details], [])
            res_values = sum([list(s[seq].values()) for s in details], [])
            res_dict = dict(zip(res_fields, res_values))

            for field in test_data[seq].keys():
                assert np.isclose(res_dict[field], test_data[seq][field]), seq + ': ' + cls + ': ' + field

    print('Tracker %s tests passed' % tracker)
print('All tests passed')

================================================
FILE: TrackEval/trackeval/__init__.py
================================================
from .eval import Evaluator
from . import datasets
from . import metrics
from . import plotting
from . import utils


================================================
FILE: TrackEval/trackeval/_timing.py
================================================
from functools import wraps
from time import perf_counter
import inspect

DO_TIMING = False
DISPLAY_LESS_PROGRESS = False
timer_dict = {}
counter = 0


def time(f):
    @wraps(f)
    def wrap(*args, **kw):
        if DO_TIMING:
            # Run function with timing
            ts = perf_counter()
            result = f(*args, **kw)
            te = perf_counter()
            tt = te-ts

            # Get function name
            arg_names = inspect.getfullargspec(f)[0]
            if arg_names[0] == 'self' and DISPLAY_LESS_PROGRESS:
                return result
            elif arg_names[0] == 'self':
                method_name = type(args[0]).__name__ + '.' + f.__name__
            else:
                method_name = f.__name__

            # Record accumulative time in each function for analysis
            if method_name in timer_dict.keys():
                timer_dict[method_name] += tt
            else:
                timer_dict[method_name] = tt

            # If code is finished, display timing summary
            if method_name == "Evaluator.evaluate":
                print("")
                print("Timing analysis:")
                for key, value in timer_dict.items():
                    print('%-70s %2.4f sec' % (key, value))
            else:
                # Get function argument values for printing special arguments of interest
                arg_titles = ['tracker', 'seq', 'cls']
                arg_vals = []
                for i, a in enumerate(arg_names):
                    if a in arg_titles:
                        arg_vals.append(args[i])
                arg_text = '(' + ', '.join(arg_vals) + ')'

                # Display methods and functions with different indentation.
                if arg_names[0] == 'self':
                    print('%-74s %2.4f sec' % (' '*4 + method_name + arg_text, tt))
                elif arg_names[0] == 'test':
                    pass
                else:
                    global counter
                    counter += 1
                    print('%i %-70s %2.4f sec' % (counter, method_name + arg_text, tt))

            return result
        else:
            # If config["TIME_PROGRESS"] is false, or config["USE_PARALLEL"] is true, run functions normally without timing.
            return f(*args, **kw)
    return wrap
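

# Minimal usage sketch (names are illustrative, not part of this module): enable timing globally and
# decorate the function whose cumulative runtime should be reported in the summary.
#   import trackeval._timing as _timing
#   _timing.DO_TIMING = True
#
#   @_timing.time
#   def load_sequence(seq):
#       ...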


================================================
FILE: TrackEval/trackeval/baselines/__init__.py
================================================
from . import baseline_utils
from . import stp
from . import non_overlap
from . import pascal_colormap
from . import thresholder
from . import vizualize

================================================
FILE: TrackEval/trackeval/baselines/baseline_utils.py
================================================

import os
import csv
import numpy as np
from copy import deepcopy
from PIL import Image
from pycocotools import mask as mask_utils
from scipy.optimize import linear_sum_assignment
from trackeval.baselines.pascal_colormap import pascal_colormap


def load_seq(file_to_load):
    """ Load input data from file in RobMOTS format (e.g. provided detections).
    Returns: Data object with the following structure (see the STP baseline for example usage):
        data['cls'][t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles'}
    """
    fp = open(file_to_load)
    dialect = csv.Sniffer().sniff(fp.readline(), delimiters=' ')
    dialect.skipinitialspace = True
    fp.seek(0)
    reader = csv.reader(fp, dialect)
    read_data = {}
    num_timesteps = 0
    for i, row in enumerate(reader):
        if row[-1] in '':
            row = row[:-1]
        t = int(row[0])
        cid = row[1]
        c = int(row[2])
        s = row[3]
        h = row[4]
        w = row[5]
        rle = row[6]

        if t >= num_timesteps:
            num_timesteps = t + 1

        if c in read_data.keys():
            if t in read_data[c].keys():
                read_data[c][t]['ids'].append(cid)
                read_data[c][t]['scores'].append(s)
                read_data[c][t]['im_hs'].append(h)
                read_data[c][t]['im_ws'].append(w)
                read_data[c][t]['mask_rles'].append(rle)
            else:
                read_data[c][t] = {}
                read_data[c][t]['ids'] = [cid]
                read_data[c][t]['scores'] = [s]
                read_data[c][t]['im_hs'] = [h]
                read_data[c][t]['im_ws'] = [w]
                read_data[c][t]['mask_rles'] = [rle]
        else:
            read_data[c] = {t: {}}
            read_data[c][t]['ids'] = [cid]
            read_data[c][t]['scores'] = [s]
            read_data[c][t]['im_hs'] = [h]
            read_data[c][t]['im_ws'] = [w]
            read_data[c][t]['mask_rles'] = [rle]
    fp.close()

    data = {}
    for c in read_data.keys():
        data[c] = [{} for _ in range(num_timesteps)]
        for t in range(num_timesteps):
            if t in read_data[c].keys():
                data[c][t]['ids'] = np.atleast_1d(read_data[c][t]['ids']).astype(int)
                data[c][t]['scores'] = np.atleast_1d(read_data[c][t]['scores']).astype(float)
                data[c][t]['im_hs'] = np.atleast_1d(read_data[c][t]['im_hs']).astype(int)
                data[c][t]['im_ws'] = np.atleast_1d(read_data[c][t]['im_ws']).astype(int)
                data[c][t]['mask_rles'] = np.atleast_1d(read_data[c][t]['mask_rles']).astype(str)
            else:
                data[c][t]['ids'] = np.empty(0).astype(int)
                data[c][t]['scores'] = np.empty(0).astype(float)
                data[c][t]['im_hs'] = np.empty(0).astype(int)
                data[c][t]['im_ws'] = np.empty(0).astype(int)
                data[c][t]['mask_rles'] = np.empty(0).astype(str)
    return data
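
# Each input row is expected to be space-separated as:
#   timestep object_id class_id score im_h im_w mask_rle
# which mirrors the output format written out by the baselines below.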


def threshold(tdata, thresh):
    """ Removes detections below a certian threshold ('thresh') score. """
    new_data = {}
    to_keep = tdata['scores'] > thresh
    for field in ['ids', 'scores', 'im_hs', 'im_ws', 'mask_rles']:
        new_data[field] = tdata[field][to_keep]
    return new_data


def create_coco_mask(mask_rles, im_hs, im_ws):
    """ Converts mask as rle text (+ height and width) to encoded version used by pycocotools. """
    coco_masks = [{'size': [h, w], 'counts': m.encode(encoding='UTF-8')}
                  for h, w, m in zip(im_hs, im_ws, mask_rles)]
    return coco_masks


def mask_iou(mask_rles1, mask_rles2, im_hs, im_ws, do_ioa=0):
    """ Calculate mask IoU between two masks.
    Further allows 'intersection over area' instead of IoU (over the area of mask_rle1).
    Allows either to pass in 1 boolean for do_ioa for all mask_rles2 or also one for each mask_rles2.
    It is recommended that mask_rles1 is a detection and mask_rles2 is a groundtruth.
    """
    coco_masks1 = create_coco_mask(mask_rles1, im_hs, im_ws)
    coco_masks2 = create_coco_mask(mask_rles2, im_hs, im_ws)

    if not hasattr(do_ioa, "__len__"):
        do_ioa = [do_ioa]*len(coco_masks2)
    assert(len(coco_masks2) == len(do_ioa))
    if len(coco_masks1) == 0 or len(coco_masks2) == 0:
        iou = np.zeros((len(coco_masks1), len(coco_masks2)))
    else:
        iou = mask_utils.iou(coco_masks1, coco_masks2, do_ioa)
    return iou


def sort_by_score(t_data):
    """ Sorts data by score """
    sort_index = np.argsort(t_data['scores'])[::-1]
    for k in t_data.keys():
        t_data[k] = t_data[k][sort_index]
    return t_data


def mask_NMS(t_data, nms_threshold=0.5, already_sorted=False):
    """ Remove redundant masks by performing non-maximum suppression (NMS) """

    # Sort by score
    if not already_sorted:
        t_data = sort_by_score(t_data)

    #  Calculate the mask IoU between all detections in the timestep.
    mask_ious_all = mask_iou(t_data['mask_rles'], t_data['mask_rles'], t_data['im_hs'], t_data['im_ws'])

    # Determine which masks NMS should remove
    # (those overlapping greater than nms_threshold with another mask that has a higher score)
    num_dets = len(t_data['mask_rles'])
    to_remove = [False for _ in range(num_dets)]
    for i in range(num_dets):
        if not to_remove[i]:
            for j in range(i + 1, num_dets):
                if mask_ious_all[i, j] > nms_threshold:
                    to_remove[j] = True

    # Remove detections which should be removed
    to_keep = np.logical_not(to_remove)
    for k in t_data.keys():
        t_data[k] = t_data[k][to_keep]

    return t_data


def non_overlap(t_data, already_sorted=False):
    """ Enforces masks to be non-overlapping in an image, does this by putting masks 'on top of one another',
    such that higher score masks 'occlude' and thus remove parts of lower scoring masks.

    Help wanted: if anyone knows a way to do this WITHOUT converting the RLE to the np.array let me know, because that
    would be MUCH more efficient. (I have tried, but haven't yet had success).
    """

    # Sort by score
    if not already_sorted:
        t_data = sort_by_score(t_data)

    # Get coco masks
    coco_masks = create_coco_mask(t_data['mask_rles'], t_data['im_hs'], t_data['im_ws'])

    # Create a single np.array to hold all of the non-overlapping masks
    masks_array = np.zeros((t_data['im_hs'][0], t_data['im_ws'][0]), 'uint8')

    # Decode each mask into a np.array, and place it into the overall array for the whole frame.
    # Since masks with the lowest score are placed first, they are 'partially overridden' by masks with a higher score
    # if they overlap.
    for i, mask in enumerate(coco_masks[::-1]):
        masks_array[mask_utils.decode(mask).astype('bool')] = i + 1

    # Encode the resulting np.array back into a set of coco_masks which are now non-overlapping.
    num_dets = len(coco_masks)
    for i, j in enumerate(range(1, num_dets + 1)[::-1]):
        coco_masks[i] = mask_utils.encode(np.asfortranarray(masks_array == j, dtype=np.uint8))

    # Convert from coco_mask back into our mask_rle format.
    t_data['mask_rles'] = [m['counts'].decode("utf-8") for m in coco_masks]

    return t_data


def masks2boxes(mask_rles, im_hs, im_ws):
    """ Extracts bounding boxes which surround a set of masks. """
    coco_masks = create_coco_mask(mask_rles, im_hs, im_ws)
    boxes = np.array([mask_utils.toBbox(x) for x in coco_masks])
    if len(boxes) == 0:
        boxes = np.empty((0, 4))
    return boxes


def box_iou(bboxes1, bboxes2, box_format='xywh', do_ioa=False, do_giou=False):
    """ Calculates the IOU (intersection over union) between two arrays of boxes.
    Allows variable box formats ('xywh' and 'x0y0x1y1').
    If do_ioa (intersection over area), then calculates the intersection over the area of boxes1 - this is commonly
    used to determine if detections are within crowd ignore region.
    If do_giou (generalized intersection over union), then calculates GIoU.
    """
    if len(bboxes1) == 0 or len(bboxes2) == 0:
        ious = np.zeros((len(bboxes1), len(bboxes2)))
        return ious
    if box_format in 'xywh':
        # layout: (x0, y0, w, h)
        bboxes1 = deepcopy(bboxes1)
        bboxes2 = deepcopy(bboxes2)

        bboxes1[:, 2] = bboxes1[:, 0] + bboxes1[:, 2]
        bboxes1[:, 3] = bboxes1[:, 1] + bboxes1[:, 3]
        bboxes2[:, 2] = bboxes2[:, 0] + bboxes2[:, 2]
        bboxes2[:, 3] = bboxes2[:, 1] + bboxes2[:, 3]
    elif box_format not in 'x0y0x1y1':
        raise (Exception('box_format %s is not implemented' % box_format))

    # layout: (x0, y0, x1, y1)
    min_ = np.minimum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
    max_ = np.maximum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
    intersection = np.maximum(min_[..., 2] - max_[..., 0], 0) * np.maximum(min_[..., 3] - max_[..., 1], 0)
    area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (bboxes1[..., 3] - bboxes1[..., 1])

    if do_ioa:
        ioas = np.zeros_like(intersection)
        valid_mask = area1 > 0 + np.finfo('float').eps
        ioas[valid_mask, :] = intersection[valid_mask, :] / area1[valid_mask][:, np.newaxis]

        return ioas
    else:
        area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (bboxes2[..., 3] - bboxes2[..., 1])
        union = area1[:, np.newaxis] + area2[np.newaxis, :] - intersection
        intersection[area1 <= 0 + np.finfo('float').eps, :] = 0
        intersection[:, area2 <= 0 + np.finfo('float').eps] = 0
        intersection[union <= 0 + np.finfo('float').eps] = 0
        union[union <= 0 + np.finfo('float').eps] = 1
        ious = intersection / union

    if do_giou:
        enclosing_area = np.maximum(max_[..., 2] - min_[..., 0], 0) * np.maximum(max_[..., 3] - min_[..., 1], 0)
        eps = 1e-7
        # giou
        ious = ious - ((enclosing_area - union) / (enclosing_area + eps))

    return ious
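
# Worked example (hypothetical boxes): two 2x2 boxes in 'xywh' format offset by (1, 1) overlap in a
# 1x1 region, so IoU = 1 / (4 + 4 - 1) ~= 0.143:
#   box_iou(np.array([[0., 0., 2., 2.]]), np.array([[1., 1., 2., 2.]]))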


def match(match_scores):
    match_rows, match_cols = linear_sum_assignment(-match_scores)
    return match_rows, match_cols


def write_seq(output_data, out_file):
    out_loc = os.path.dirname(out_file)
    if not os.path.exists(out_loc):
        os.makedirs(out_loc, exist_ok=True)
    fp = open(out_file, 'w', newline='')
    writer = csv.writer(fp, delimiter=' ')
    for row in output_data:
        writer.writerow(row)
    fp.close()


def combine_classes(data):
    """ Converts data from a class-separated to a class-combined format.
    Input format: data['cls'][t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles'}
    Output format: data[t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles', 'cls'}
    """
    output_data = [{} for _ in list(data.values())[0]]
    for cls, cls_data in data.items():
        for timestep, t_data in enumerate(cls_data):
            for k in t_data.keys():
                if k in output_data[timestep].keys():
                    output_data[timestep][k] += list(t_data[k])
                else:
                    output_data[timestep][k] = list(t_data[k])
            if 'cls' in output_data[timestep].keys():
                output_data[timestep]['cls'] += [cls]*len(t_data['ids'])
            else:
                output_data[timestep]['cls'] = [cls]*len(t_data['ids'])

    for timestep, t_data in enumerate(output_data):
        for k in t_data.keys():
            output_data[timestep][k] = np.array(output_data[timestep][k])

    return output_data
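
# Sketch of the conversion (hypothetical single-timestep input with two classes; the other per-det
# fields are omitted here for brevity):
#   {1: [{'ids': [0]}], 2: [{'ids': [5]}]}  ->  [{'ids': array([0, 5]), 'cls': array([1, 2])}]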


def save_as_png(t_data, out_file, im_h, im_w):
    """ Save a set of segmentation masks into a PNG format, the same as used for the DAVIS dataset."""

    if len(t_data['mask_rles']) > 0:
        coco_masks = create_coco_mask(t_data['mask_rles'], t_data['im_hs'], t_data['im_ws'])

        list_of_np_masks = [mask_utils.decode(mask) for mask in coco_masks]

        png = np.zeros((t_data['im_hs'][0], t_data['im_ws'][0]))
        for mask, c_id in zip(list_of_np_masks, t_data['ids']):
            png[mask.astype("bool")] = c_id + 1
    else:
        png = np.zeros((im_h, im_w))

    if not os.path.exists(os.path.dirname(out_file)):
        os.makedirs(os.path.dirname(out_file))

    colmap = (np.array(pascal_colormap) * 255).round().astype("uint8")
    palimage = Image.new('P', (16, 16))
    palimage.putpalette(colmap)
    im = Image.fromarray(np.squeeze(png.astype("uint8")))
    im2 = im.quantize(palette=palimage)
    im2.save(out_file)


def get_frame_size(data):
    """ Gets frame height and width from data. """
    for cls, cls_data in data.items():
        for timestep, t_data in enumerate(cls_data):
            if len(t_data['im_hs']) > 0:
                im_h = t_data['im_hs'][0]
                im_w = t_data['im_ws'][0]
                return im_h, im_w
    return None


================================================
FILE: TrackEval/trackeval/baselines/non_overlap.py
================================================
"""
Non-Overlap: Code to take in a set of raw detections and produce a set of non-overlapping detections from it.

Author: Jonathon Luiten
"""

import os
import sys
from multiprocessing.pool import Pool
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')))
from trackeval.baselines import baseline_utils as butils
from trackeval.utils import get_code_path

code_path = get_code_path()
config = {
    'INPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/raw_supplied/data/'),
    'OUTPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/non_overlap_supplied/data/'),
    'SPLIT': 'train',  # valid: 'train', 'val', 'test'.
    'Benchmarks': None,  # If None, all benchmarks in SPLIT.

    'Num_Parallel_Cores': None,  # If None, run without parallel.

    'THRESHOLD_NMS_MASK_IOU': 0.5,
}


def do_sequence(seq_file):

    # Load input data from file (e.g. provided detections)
    # data format: data['cls'][t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles'}
    data = butils.load_seq(seq_file)

    # Converts data from a class-separated to a class-combined format.
    # data[t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles', 'cls'}
    data = butils.combine_classes(data)

    # Where to accumulate output data for writing out
    output_data = []

    # Run for each timestep.
    for timestep, t_data in enumerate(data):

        # Remove redundant masks by performing non-maximum suppression (NMS)
        t_data = butils.mask_NMS(t_data, nms_threshold=config['THRESHOLD_NMS_MASK_IOU'])

        # Perform non-overlap, to get non_overlapping masks.
        t_data = butils.non_overlap(t_data, already_sorted=True)

        # Save result in output format to write to file later.
        # Output Format = [timestep ID class score im_h im_w mask_RLE]
        for i in range(len(t_data['ids'])):
            row = [timestep, int(t_data['ids'][i]), t_data['cls'][i], t_data['scores'][i], t_data['im_hs'][i],
                   t_data['im_ws'][i], t_data['mask_rles'][i]]
            output_data.append(row)

    # Write results to file
    out_file = seq_file.replace(config['INPUT_FOL'].format(split=config['SPLIT']),
                                config['OUTPUT_FOL'].format(split=config['SPLIT']))
    butils.write_seq(output_data, out_file)

    print('DONE:', seq_file)


if __name__ == '__main__':

    # Required to fix bug in multiprocessing on windows.
    freeze_support()

    # Obtain list of sequences to run tracker for.
    if config['Benchmarks']:
        benchmarks = config['Benchmarks']
    else:
        benchmarks = ['davis_unsupervised', 'kitti_mots', 'youtube_vis', 'ovis', 'bdd_mots', 'tao']
        if config['SPLIT'] != 'train':
            benchmarks += ['waymo', 'mots_challenge']
    seqs_todo = []
    for bench in benchmarks:
        bench_fol = os.path.join(config['INPUT_FOL'].format(split=config['SPLIT']), bench)
        seqs_todo += [os.path.join(bench_fol, seq) for seq in os.listdir(bench_fol)]

    # Run in parallel
    if config['Num_Parallel_Cores']:
        with Pool(config['Num_Parallel_Cores']) as pool:
            results = pool.map(do_sequence, seqs_todo)

    # Run in series
    else:
        for seq_todo in seqs_todo:
            do_sequence(seq_todo)



================================================
FILE: TrackEval/trackeval/baselines/pascal_colormap.py
================================================
pascal_colormap = [
    0     ,         0,         0,
    0.5020,         0,         0,
         0,    0.5020,         0,
    0.5020,    0.5020,         0,
         0,         0,    0.5020,
    0.5020,         0,    0.5020,
         0,    0.5020,    0.5020,
    0.5020,    0.5020,    0.5020,
    0.2510,         0,         0,
    0.7529,         0,         0,
    0.2510,    0.5020,         0,
    0.7529,    0.5020,         0,
    0.2510,         0,    0.5020,
    0.7529,         0,    0.5020,
    0.2510,    0.5020,    0.5020,
    0.7529,    0.5020,    0.5020,
         0,    0.2510,         0,
    0.5020,    0.2510,         0,
         0,    0.7529,         0,
    0.5020,    0.7529,         0,
         0,    0.2510,    0.5020,
    0.5020,    0.2510,    0.5020,
         0,    0.7529,    0.5020,
    0.5020,    0.7529,    0.5020,
    0.2510,    0.2510,         0,
    0.7529,    0.2510,         0,
    0.2510,    0.7529,         0,
    0.7529,    0.7529,         0,
    0.2510,    0.2510,    0.5020,
    0.7529,    0.2510,    0.5020,
    0.2510,    0.7529,    0.5020,
    0.7529,    0.7529,    0.5020,
         0,         0,    0.2510,
    0.5020,         0,    0.2510,
         0,    0.5020,    0.2510,
    0.5020,    0.5020,    0.2510,
         0,         0,    0.7529,
    0.5020,         0,    0.7529,
         0,    0.5020,    0.7529,
    0.5020,    0.5020,    0.7529,
    0.2510,         0,    0.2510,
    0.7529,         0,    0.2510,
    0.2510,    0.5020,    0.2510,
    0.7529,    0.5020,    0.2510,
    0.2510,         0,    0.7529,
    0.7529,         0,    0.7529,
    0.2510,    0.5020,    0.7529,
    0.7529,    0.5020,    0.7529,
         0,    0.2510,    0.2510,
    0.5020,    0.2510,    0.2510,
         0,    0.7529,    0.2510,
    0.5020,    0.7529,    0.2510,
         0,    0.2510,    0.7529,
    0.5020,    0.2510,    0.7529,
         0,    0.7529,    0.7529,
    0.5020,    0.7529,    0.7529,
    0.2510,    0.2510,    0.2510,
    0.7529,    0.2510,    0.2510,
    0.2510,    0.7529,    0.2510,
    0.7529,    0.7529,    0.2510,
    0.2510,    0.2510,    0.7529,
    0.7529,    0.2510,    0.7529,
    0.2510,    0.7529,    0.7529,
    0.7529,    0.7529,    0.7529,
    0.1255,         0,         0,
    0.6275,         0,         0,
    0.1255,    0.5020,         0,
    0.6275,    0.5020,         0,
    0.1255,         0,    0.5020,
    0.6275,         0,    0.5020,
    0.1255,    0.5020,    0.5020,
    0.6275,    0.5020,    0.5020,
    0.3765,         0,         0,
    0.8784,         0,         0,
    0.3765,    0.5020,         0,
    0.8784,    0.5020,         0,
    0.3765,         0,    0.5020,
    0.8784,         0,    0.5020,
    0.3765,    0.5020,    0.5020,
    0.8784,    0.5020,    0.5020,
    0.1255,    0.2510,         0,
    0.6275,    0.2510,         0,
    0.1255,    0.7529,         0,
    0.6275,    0.7529,         0,
    0.1255,    0.2510,    0.5020,
    0.6275,    0.2510,    0.5020,
    0.1255,    0.7529,    0.5020,
    0.6275,    0.7529,    0.5020,
    0.3765,    0.2510,         0,
    0.8784,    0.2510,         0,
    0.3765,    0.7529,         0,
    0.8784,    0.7529,         0,
    0.3765,    0.2510,    0.5020,
    0.8784,    0.2510,    0.5020,
    0.3765,    0.7529,    0.5020,
    0.8784,    0.7529,    0.5020,
    0.1255,         0,    0.2510,
    0.6275,         0,    0.2510,
    0.1255,    0.5020,    0.2510,
    0.6275,    0.5020,    0.2510,
    0.1255,         0,    0.7529,
    0.6275,         0,    0.7529,
    0.1255,    0.5020,    0.7529,
    0.6275,    0.5020,    0.7529,
    0.3765,         0,    0.2510,
    0.8784,         0,    0.2510,
    0.3765,    0.5020,    0.2510,
    0.8784,    0.5020,    0.2510,
    0.3765,         0,    0.7529,
    0.8784,         0,    0.7529,
    0.3765,    0.5020,    0.7529,
    0.8784,    0.5020,    0.7529,
    0.1255,    0.2510,    0.2510,
    0.6275,    0.2510,    0.2510,
    0.1255,    0.7529,    0.2510,
    0.6275,    0.7529,    0.2510,
    0.1255,    0.2510,    0.7529,
    0.6275,    0.2510,    0.7529,
    0.1255,    0.7529,    0.7529,
    0.6275,    0.7529,    0.7529,
    0.3765,    0.2510,    0.2510,
    0.8784,    0.2510,    0.2510,
    0.3765,    0.7529,    0.2510,
    0.8784,    0.7529,    0.2510,
    0.3765,    0.2510,    0.7529,
    0.8784,    0.2510,    0.7529,
    0.3765,    0.7529,    0.7529,
    0.8784,    0.7529,    0.7529,
         0,    0.1255,         0,
    0.5020,    0.1255,         0,
         0,    0.6275,         0,
    0.5020,    0.6275,         0,
         0,    0.1255,    0.5020,
    0.5020,    0.1255,    0.5020,
         0,    0.6275,    0.5020,
    0.5020,    0.6275,    0.5020,
    0.2510,    0.1255,         0,
    0.7529,    0.1255,         0,
    0.2510,    0.6275,         0,
    0.7529,    0.6275,         0,
    0.2510,    0.1255,    0.5020,
    0.7529,    0.1255,    0.5020,
    0.2510,    0.6275,    0.5020,
    0.7529,    0.6275,    0.5020,
         0,    0.3765,         0,
    0.5020,    0.3765,         0,
         0,    0.8784,         0,
    0.5020,    0.8784,         0,
         0,    0.3765,    0.5020,
    0.5020,    0.3765,    0.5020,
         0,    0.8784,    0.5020,
    0.5020,    0.8784,    0.5020,
    0.2510,    0.3765,         0,
    0.7529,    0.3765,         0,
    0.2510,    0.8784,         0,
    0.7529,    0.8784,         0,
    0.2510,    0.3765,    0.5020,
    0.7529,    0.3765,    0.5020,
    0.2510,    0.8784,    0.5020,
    0.7529,    0.8784,    0.5020,
         0,    0.1255,    0.2510,
    0.5020,    0.1255,    0.2510,
         0,    0.6275,    0.2510,
    0.5020,    0.6275,    0.2510,
         0,    0.1255,    0.7529,
    0.5020,    0.1255,    0.7529,
         0,    0.6275,    0.7529,
    0.5020,    0.6275,    0.7529,
    0.2510,    0.1255,    0.2510,
    0.7529,    0.1255,    0.2510,
    0.2510,    0.6275,    0.2510,
    0.7529,    0.6275,    0.2510,
    0.2510,    0.1255,    0.7529,
    0.7529,    0.1255,    0.7529,
    0.2510,    0.6275,    0.7529,
    0.7529,    0.6275,    0.7529,
         0,    0.3765,    0.2510,
    0.5020,    0.3765,    0.2510,
         0,    0.8784,    0.2510,
    0.5020,    0.8784,    0.2510,
         0,    0.3765,    0.7529,
    0.5020,    0.3765,    0.7529,
         0,    0.8784,    0.7529,
    0.5020,    0.8784,    0.7529,
    0.2510,    0.3765,    0.2510,
    0.7529,    0.3765,    0.2510,
    0.2510,    0.8784,    0.2510,
    0.7529,    0.8784,    0.2510,
    0.2510,    0.3765,    0.7529,
    0.7529,    0.3765,    0.7529,
    0.2510,    0.8784,    0.7529,
    0.7529,    0.8784,    0.7529,
    0.1255,    0.1255,         0,
    0.6275,    0.1255,         0,
    0.1255,    0.6275,         0,
    0.6275,    0.6275,         0,
    0.1255,    0.1255,    0.5020,
    0.6275,    0.1255,    0.5020,
    0.1255,    0.6275,    0.5020,
    0.6275,    0.6275,    0.5020,
    0.3765,    0.1255,         0,
    0.8784,    0.1255,         0,
    0.3765,    0.6275,         0,
    0.8784,    0.6275,         0,
    0.3765,    0.1255,    0.5020,
    0.8784,    0.1255,    0.5020,
    0.3765,    0.6275,    0.5020,
    0.8784,    0.6275,    0.5020,
    0.1255,    0.3765,         0,
    0.6275,    0.3765,         0,
    0.1255,    0.8784,         0,
    0.6275,    0.8784,         0,
    0.1255,    0.3765,    0.5020,
    0.6275,    0.3765,    0.5020,
    0.1255,    0.8784,    0.5020,
    0.6275,    0.8784,    0.5020,
    0.3765,    0.3765,         0,
    0.8784,    0.3765,         0,
    0.3765,    0.8784,         0,
    0.8784,    0.8784,         0,
    0.3765,    0.3765,    0.5020,
    0.8784,    0.3765,    0.5020,
    0.3765,    0.8784,    0.5020,
    0.8784,    0.8784,    0.5020,
    0.1255,    0.1255,    0.2510,
    0.6275,    0.1255,    0.2510,
    0.1255,    0.6275,    0.2510,
    0.6275,    0.6275,    0.2510,
    0.1255,    0.1255,    0.7529,
    0.6275,    0.1255,    0.7529,
    0.1255,    0.6275,    0.7529,
    0.6275,    0.6275,    0.7529,
    0.3765,    0.1255,    0.2510,
    0.8784,    0.1255,    0.2510,
    0.3765,    0.6275,    0.2510,
    0.8784,    0.6275,    0.2510,
    0.3765,    0.1255,    0.7529,
    0.8784,    0.1255,    0.7529,
    0.3765,    0.6275,    0.7529,
    0.8784,    0.6275,    0.7529,
    0.1255,    0.3765,    0.2510,
    0.6275,    0.3765,    0.2510,
    0.1255,    0.8784,    0.2510,
    0.6275,    0.8784,    0.2510,
    0.1255,    0.3765,    0.7529,
    0.6275,    0.3765,    0.7529,
    0.1255,    0.8784,    0.7529,
    0.6275,    0.8784,    0.7529,
    0.3765,    0.3765,    0.2510,
    0.8784,    0.3765,    0.2510,
    0.3765,    0.8784,    0.2510,
    0.8784,    0.8784,    0.2510,
    0.3765,    0.3765,    0.7529,
    0.8784,    0.3765,    0.7529,
    0.3765,    0.8784,    0.7529,
    0.8784,    0.8784,    0.7529]

================================================
FILE: TrackEval/trackeval/baselines/stp.py
================================================
"""
STP: Simplest Tracker Possible

Author: Jonathon Luiten

This simple tracker assigns track IDs which maximise the 'bounding box IoU' between previous tracks and current
detections. It is also able to match detections to tracks from more than one timestep previously.
"""

import os
import sys
import numpy as np
from multiprocessing.pool import Pool
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')))
from trackeval.baselines import baseline_utils as butils
from trackeval.utils import get_code_path

code_path = get_code_path()
config = {
    'INPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/non_overlap_supplied/data/'),
    'OUTPUT_FOL': os.path.join(code_path, 'data/trackers/rob_mots/{split}/STP/data/'),
    'SPLIT': 'train',  # valid: 'train', 'val', 'test'.
    'Benchmarks': None,  # If None, all benchmarks in SPLIT.

    'Num_Parallel_Cores': None,  # If None, run without parallel.

    'DETECTION_THRESHOLD': 0.5,
    'ASSOCIATION_THRESHOLD': 1e-10,
    'MAX_FRAMES_SKIP': 7
}
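
# Example run (a sketch; assumes non-overlapping detections have already been produced under
# INPUT_FOL for the configured SPLIT):
#   python trackeval/baselines/stp.py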


def track_sequence(seq_file):

    # Load input data from file (e.g. provided detections)
    # data format: data['cls'][t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles'}
    data = butils.load_seq(seq_file)

    # Where to accumulate output data for writing out
    output_data = []

    # To ensure IDs are unique per object across all classes.
    curr_max_id = 0

    # Run tracker for each class.
    for cls, cls_data in data.items():

        # Initialize container for holding previously tracked objects.
        prev = {'boxes': np.empty((0, 4)),
                'ids': np.array([], int),
                'timesteps': np.array([])}

        # Run tracker for each timestep.
        for timestep, t_data in enumerate(cls_data):

            # Threshold detections.
            t_data = butils.threshold(t_data, config['DETECTION_THRESHOLD'])

            # Convert mask dets to bounding boxes.
            boxes = butils.masks2boxes(t_data['mask_rles'], t_data['im_hs'], t_data['im_ws'])

            # Calculate IoU between previous and current frame dets.
            ious = butils.box_iou(prev['boxes'], boxes)

            # Score which decreases quickly for previous dets depending on how many timesteps ago they were last seen.
            prev_timestep_scores = np.power(10, -1 * prev['timesteps'])
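            # The weight is 10**(-age): a track matched in the immediately preceding frame (age 0)
            # keeps weight 1, one last matched two frames ago (age 1) is down-weighted to 0.1, etc.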

            # Matching score is such that it first tries to match 'most recent timesteps',
            # and within each timestep maximises IoU.
            match_scores = prev_timestep_scores[:, np.newaxis] * ious

            # Find best matching between current dets and previous tracks.
            match_rows, match_cols = butils.match(match_scores)

            # Remove matches that have an IoU below a certain threshold.
            actually_matched_mask = ious[match_rows, match_cols] > config['ASSOCIATION_THRESHOLD']
            match_rows = match_rows[actually_matched_mask]
            match_cols = match_cols[actually_matched_mask]

            # Assign the prev track ID to the current dets if they were matched.
            ids = np.nan * np.ones((len(boxes),), int)
            ids[match_cols] = prev['ids'][match_rows]

            # Create new track IDs for dets that were not matched to previous tracks.
            num_not_matched = len(ids) - len(match_cols)
            new_ids = np.arange(curr_max_id + 1, curr_max_id + num_not_matched + 1)
            ids[np.isnan(ids)] = new_ids

            # Update maximum ID to ensure future added tracks have a unique ID value.
            curr_max_id += num_not_matched

            # Drop tracks from 'previous tracks' if they have not been matched in the last MAX_FRAMES_SKIP frames.
            unmatched_rows = [i for i in range(len(prev['ids'])) if
                              i not in match_rows and (prev['timesteps'][i] + 1 <= config['MAX_FRAMES_SKIP'])]

            # Update the set of previous tracking results to include the newly tracked detections.
            prev['ids'] = np.concatenate((ids, prev['ids'][unmatched_rows]), axis=0)
            prev['boxes'] = np.concatenate((np.atleast_2d(boxes), np.atleast_2d(prev['boxes'][unmatched_rows])), axis=0)
            prev['timesteps'] = np.concatenate((np.zeros((len(ids),)), prev['timesteps'][unmatched_rows] + 1), axis=0)

            # Save result in output format to write to file later.
            # Output Format = [timestep ID class score im_h im_w mask_RLE]
            for i in range(len(t_data['ids'])):
                row = [timestep, int(ids[i]), cls, t_data['scores'][i], t_data['im_hs'][i], t_data['im_ws'][i],
                       t_data['mask_rles'][i]]
                output_data.append(row)

    # Write results to file
    out_file = seq_file.replace(config['INPUT_FOL'].format(split=config['SPLIT']),
                                config['OUTPUT_FOL'].format(split=config['SPLIT']))
    butils.write_seq(output_data, out_file)

    print('DONE:', seq_file)


if __name__ == '__main__':

    # Required to fix bug in multiprocessing on windows.
    freeze_support()

    # Obtain list of sequences to run tracker for.
    if config['Benchmarks']:
        benchmarks = config['Benchmarks']
    else:
        benchmarks = ['davis_unsupervised', 'kitti_mots', 'youtube_vis', 'ovis', 'bdd_mots', 'tao']
        if config['SPLIT'] != 'train':
            benchmarks += ['waymo', 'mots_challenge']
    seqs_todo = []
    for bench in benchmarks:
        bench_fol = os.path.join(config['INPUT_FOL'].format(split=config['SPLIT']), bench)
        seqs_todo += [os.path.join(bench_fol, seq) for seq in os.listdir(bench_fol)]

    # Run in parallel
    if config['Num_Parallel_Cores']:
        with Pool(config['Num_Parallel_Cores']) as pool:
            results = pool.map(track_sequence, seqs_todo)

    # Run in series
    else:
        for seq_todo in seqs_todo:
            track_sequence(seq_todo)



================================================
FILE: TrackEval/trackeval/baselines/thresholder.py
================================================
"""
Thresholder

Author: Jonathon Luiten

Simply reads in a set of detections, thresholds them at a certain score threshold, and writes them out again.
"""

import os
import sys
from multiprocessing.pool import Pool
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')))
from trackeval.baselines import baseline_utils as butils
from trackeval.utils import get_code_path

THRESHOLD = 0.2

code_path = get_code_path()
config = {
    'INPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/non_overlap_supplied/data/'),
    'OUTPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/threshold_' + str(100*THRESHOLD) + '/data/'),
    'SPLIT': 'train',  # valid: 'train', 'val', 'test'.
    'Benchmarks': None,  # If None, all benchmarks in SPLIT.

    'Num_Parallel_Cores': None,  # If None, run without parallel.

    'DETECTION_THRESHOLD': THRESHOLD,
}


def do_sequence(seq_file):

    # Load input data from file (e.g. provided detections)
    # data format: data['cls'][t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles'}
    data = butils.load_seq(seq_file)

    # Where to accumulate output data for writing out
    output_data = []

    # Run for each class.
    for cls, cls_data in data.items():

        # Run for each timestep.
        for timestep, t_data in enumerate(cls_data):

            # Threshold detections.
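            # butils.threshold (see baseline_utils.py) keeps only detections whose score passes
            # DETECTION_THRESHOLD; the other per-detection fields are filtered to match.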
            t_data = butils.threshold(t_data, config['DETECTION_THRESHOLD'])

            # Save result in output format to write to file later.
            # Output Format = [timestep ID class score im_h im_w mask_RLE]
            for i in range(len(t_data['ids'])):
                row = [timestep, int(t_data['ids'][i]), cls, t_data['scores'][i], t_data['im_hs'][i],
                       t_data['im_ws'][i], t_data['mask_rles'][i]]
                output_data.append(row)

    # Write results to file
    out_file = seq_file.replace(config['INPUT_FOL'].format(split=config['SPLIT']),
                                config['OUTPUT_FOL'].format(split=config['SPLIT']))
    butils.write_seq(output_data, out_file)

    print('DONE:', seq_file)


if __name__ == '__main__':

    # Required to fix bug in multiprocessing on windows.
    freeze_support()

    # Obtain list of sequences to run tracker for.
    if config['Benchmarks']:
        benchmarks = config['Benchmarks']
    else:
        benchmarks = ['davis_unsupervised', 'kitti_mots', 'youtube_vis', 'ovis', 'bdd_mots', 'tao']
        if config['SPLIT'] != 'train':
            benchmarks += ['waymo', 'mots_challenge']
    seqs_todo = []
    for bench in benchmarks:
        bench_fol = os.path.join(config['INPUT_FOL'].format(split=config['SPLIT']), bench)
        seqs_todo += [os.path.join(bench_fol, seq) for seq in os.listdir(bench_fol)]

    # Run in parallel
    if config['Num_Parallel_Cores']:
        with Pool(config['Num_Parallel_Cores']) as pool:
            results = pool.map(do_sequence, seqs_todo)

    # Run in series
    else:
        for seq_todo in seqs_todo:
            do_sequence(seq_todo)



================================================
FILE: TrackEval/trackeval/baselines/vizualize.py
================================================
"""
Vizualize: Code which converts .txt rle tracking results into a visual .png format.

Author: Jonathon Luiten
"""

import os
import sys
from multiprocessing.pool import Pool
from multiprocessing import freeze_support

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')))
from trackeval.baselines import baseline_utils as butils
from trackeval.utils import get_code_path
from trackeval.datasets.rob_mots_classmap import cls_id_to_name

code_path = get_code_path()
config = {
    # Tracker format:
    'INPUT_FOL': os.path.join(code_path, 'data/trackers/rob_mots/{split}/STP/data/{bench}'),
    'OUTPUT_FOL': os.path.join(code_path, 'data/viz/rob_mots/{split}/STP/data/{bench}'),
    # GT format:
    # 'INPUT_FOL': os.path.join(code_path, 'data/gt/rob_mots/{split}/{bench}/data/'),
    # 'OUTPUT_FOL': os.path.join(code_path, 'data/gt_viz/rob_mots/{split}/{bench}/'),
    'SPLIT': 'train',  # valid: 'train', 'val', 'test'.
    'Benchmarks': None,  # If None, all benchmarks in SPLIT.
    'Num_Parallel_Cores': None,  # If None, run without parallel.
}


def do_sequence(seq_file):
    # Folder to save resulting visualization in
    out_fol = seq_file.replace(config['INPUT_FOL'].format(split=config['SPLIT'], bench=bench),
                               
SYMBOL INDEX (4424 symbols across 462 files)

FILE: TrackEval/scripts/run_rob_mots.py
  function rowify (line 128) | def rowify(d):

FILE: TrackEval/tests/test_metrics.py
  function no_confusion (line 7) | def no_confusion():
  function with_confusion (line 56) | def with_confusion():
  function split_tracks (line 107) | def split_tracks():
  function _from_dense (line 163) | def _from_dense(num_timesteps, num_gt_ids, num_tracker_ids, gt_present, ...
  function test_metric (line 207) | def test_metric(sequence_name, metric_name):

FILE: TrackEval/trackeval/_timing.py
  function time (line 11) | def time(f):

FILE: TrackEval/trackeval/baselines/baseline_utils.py
  function load_seq (line 12) | def load_seq(file_to_load):
  function threshold (line 80) | def threshold(tdata, thresh):
  function create_coco_mask (line 89) | def create_coco_mask(mask_rles, im_hs, im_ws):
  function mask_iou (line 96) | def mask_iou(mask_rles1, mask_rles2, im_hs, im_ws, do_ioa=0):
  function sort_by_score (line 115) | def sort_by_score(t_data):
  function mask_NMS (line 123) | def mask_NMS(t_data, nms_threshold=0.5, already_sorted=False):
  function non_overlap (line 151) | def non_overlap(t_data, already_sorted=False):
  function masks2boxes (line 186) | def masks2boxes(mask_rles, im_hs, im_ws):
  function box_iou (line 195) | def box_iou(bboxes1, bboxes2, box_format='xywh', do_ioa=False, do_giou=F...
  function match (line 247) | def match(match_scores):
  function write_seq (line 252) | def write_seq(output_data, out_file):
  function combine_classes (line 263) | def combine_classes(data):
  function save_as_png (line 288) | def save_as_png(t_data, out_file, im_h, im_w):
  function get_frame_size (line 313) | def get_frame_size(data):

FILE: TrackEval/trackeval/baselines/non_overlap.py
  function do_sequence (line 29) | def do_sequence(seq_file):

FILE: TrackEval/trackeval/baselines/stp.py
  function track_sequence (line 35) | def track_sequence(seq_file):

FILE: TrackEval/trackeval/baselines/thresholder.py
  function do_sequence (line 33) | def do_sequence(seq_file):

FILE: TrackEval/trackeval/baselines/vizualize.py
  function do_sequence (line 31) | def do_sequence(seq_file):

FILE: TrackEval/trackeval/datasets/_base_dataset.py
  class _BaseDataset (line 13) | class _BaseDataset(ABC):
    method __init__ (line 15) | def __init__(self):
    method get_default_dataset_config (line 28) | def get_default_dataset_config():
    method _load_raw_file (line 32) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 37) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 41) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
    method get_class_name (line 47) | def get_class_name(cls):
    method get_name (line 50) | def get_name(self):
    method get_output_fol (line 53) | def get_output_fol(self, tracker):
    method get_display_name (line 56) | def get_display_name(self, tracker):
    method get_eval_info (line 62) | def get_eval_info(self):
    method get_raw_seq_data (line 67) | def get_raw_seq_data(self, tracker, seq):
    method _load_simple_text_file (line 105) | def _load_simple_text_file(file, time_col=0, id_col=None, remove_negat...
    method _calculate_mask_ious (line 215) | def _calculate_mask_ious(masks1, masks2, is_encoded=False, do_ioa=False):
    method _calculate_box_ious (line 248) | def _calculate_box_ious(bboxes1, bboxes2, box_format='xywh', do_ioa=Fa...
    method _calculate_euclidean_similarity (line 289) | def _calculate_euclidean_similarity(dets1, dets2, zero_distance=2.0):
    method _check_unique_ids (line 300) | def _check_unique_ids(data, after_preproc=False):

FILE: TrackEval/trackeval/datasets/bdd100k.py
  class BDD100K (line 12) | class BDD100K(_BaseDataset):
    method get_default_dataset_config (line 16) | def get_default_dataset_config():
    method __init__ (line 35) | def __init__(self, config=None):
    method get_display_name (line 95) | def get_display_name(self, tracker):
    method _load_raw_file (line 98) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 185) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 300) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):

FILE: TrackEval/trackeval/datasets/davis.py
  class DAVIS (line 10) | class DAVIS(_BaseDataset):
    method get_default_dataset_config (line 14) | def get_default_dataset_config():
    method __init__ (line 35) | def __init__(self, config=None):
    method _load_raw_file (line 109) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 177) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 274) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):

FILE: TrackEval/trackeval/datasets/head_tracking_challenge.py
  class HeadTrackingChallenge (line 12) | class HeadTrackingChallenge(_BaseDataset):
    method get_default_dataset_config (line 16) | def get_default_dataset_config():
    method __init__ (line 43) | def __init__(self, config=None):
    method get_display_name (line 126) | def get_display_name(self, tracker):
    method _get_seq_info (line 129) | def _get_seq_info(self):
    method _load_raw_file (line 172) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 294) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 457) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):

FILE: TrackEval/trackeval/datasets/kitti_2d_box.py
  class Kitti2DBox (line 12) | class Kitti2DBox(_BaseDataset):
    method get_default_dataset_config (line 16) | def get_default_dataset_config():
    method __init__ (line 34) | def __init__(self, config=None):
    method get_display_name (line 117) | def get_display_name(self, tracker):
    method _load_raw_file (line 120) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 238) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 387) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):

FILE: TrackEval/trackeval/datasets/kitti_mots.py
  class KittiMOTS (line 11) | class KittiMOTS(_BaseDataset):
    method get_default_dataset_config (line 15) | def get_default_dataset_config():
    method __init__ (line 37) | def __init__(self, config=None):
    method get_display_name (line 111) | def get_display_name(self, tracker):
    method _get_seq_info (line 114) | def _get_seq_info(self):
    method _load_raw_file (line 146) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 267) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 389) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
    method _raise_index_error (line 394) | def _raise_index_error(is_gt, tracker, seq):
    method _raise_value_error (line 412) | def _raise_value_error(is_gt, tracker, seq):

FILE: TrackEval/trackeval/datasets/mot_challenge_2d_box.py
  class MotChallenge2DBox (line 12) | class MotChallenge2DBox(_BaseDataset):
    method get_default_dataset_config (line 16) | def get_default_dataset_config():
    method __init__ (line 43) | def __init__(self, config=None):
    method get_display_name (line 128) | def get_display_name(self, tracker):
    method _get_seq_info (line 131) | def _get_seq_info(self):
    method _load_raw_file (line 174) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 290) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 435) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):

FILE: TrackEval/trackeval/datasets/mots_challenge.py
  class MOTSChallenge (line 12) | class MOTSChallenge(_BaseDataset):
    method get_default_dataset_config (line 16) | def get_default_dataset_config():
    method __init__ (line 41) | def __init__(self, config=None):
    method get_display_name (line 121) | def get_display_name(self, tracker):
    method _get_seq_info (line 124) | def _get_seq_info(self):
    method _load_raw_file (line 167) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 288) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 409) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
    method _raise_index_error (line 414) | def _raise_index_error(is_gt, tracker, seq):
    method _raise_value_error (line 432) | def _raise_value_error(is_gt, tracker, seq):

FILE: TrackEval/trackeval/datasets/rob_mots.py
  class RobMOTS (line 13) | class RobMOTS(_BaseDataset):
    method get_default_dataset_config (line 16) | def get_default_dataset_config():
    method __init__ (line 40) | def __init__(self, config=None):
    method get_name (line 129) | def get_name(self):
    method _get_seq_info (line 132) | def _get_seq_info(self):
    method get_display_name (line 162) | def get_display_name(self, tracker):
    method _load_raw_file (line 165) | def _load_raw_file(self, tracker, seq, is_gt):
    method _raise_index_error (line 265) | def _raise_index_error(is_gt, sub_benchmark, seq):
    method _raise_value_error (line 283) | def _raise_value_error(is_gt, sub_benchmark, seq):
    method get_preprocessed_seq_data (line 300) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 494) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):

FILE: TrackEval/trackeval/datasets/run_rob_mots.py
  function rowify (line 100) | def rowify(d):

FILE: TrackEval/trackeval/datasets/tao.py
  class TAO (line 13) | class TAO(_BaseDataset):
    method get_default_dataset_config (line 17) | def get_default_dataset_config():
    method __init__ (line 35) | def __init__(self, config=None):
    method get_display_name (line 139) | def get_display_name(self, tracker):
    method _load_raw_file (line 142) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 257) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 398) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
    method _merge_categories (line 402) | def _merge_categories(self, annotations):
    method _compute_vid_mappings (line 417) | def _compute_vid_mappings(self, annotations):
    method _compute_image_to_timestep_mappings (line 486) | def _compute_image_to_timestep_mappings(self):
    method _limit_dets_per_image (line 503) | def _limit_dets_per_image(self, annotations):
    method _fill_video_ids_inplace (line 523) | def _fill_video_ids_inplace(self, annotations):
    method _make_track_ids_unique (line 538) | def _make_track_ids_unique(annotations):

FILE: TrackEval/trackeval/datasets/tao_ow.py
  class TAO_OW (line 13) | class TAO_OW(_BaseDataset):
    method get_default_dataset_config (line 17) | def get_default_dataset_config():
    method __init__ (line 36) | def __init__(self, config=None):
    method get_display_name (line 149) | def get_display_name(self, tracker):
    method _load_raw_file (line 152) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 272) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 413) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
    method _merge_categories (line 417) | def _merge_categories(self, annotations):
    method _compute_vid_mappings (line 432) | def _compute_vid_mappings(self, annotations):
    method _compute_image_to_timestep_mappings (line 501) | def _compute_image_to_timestep_mappings(self):
    method _limit_dets_per_image (line 518) | def _limit_dets_per_image(self, annotations):
    method _fill_video_ids_inplace (line 538) | def _fill_video_ids_inplace(self, annotations):
    method _make_track_ids_unique (line 553) | def _make_track_ids_unique(annotations):
    method _split_known_unknown_distractor (line 583) | def _split_known_unknown_distractor(self):
    method _filter_gt_data (line 597) | def _filter_gt_data(self, raw_gt_data):

FILE: TrackEval/trackeval/datasets/youtube_vis.py
  class YouTubeVIS (line 10) | class YouTubeVIS(_BaseDataset):
    method get_default_dataset_config (line 14) | def get_default_dataset_config():
    method __init__ (line 32) | def __init__(self, config=None):
    method get_display_name (line 109) | def get_display_name(self, tracker):
    method _load_raw_file (line 112) | def _load_raw_file(self, tracker, seq, is_gt):
    method get_preprocessed_seq_data (line 199) | def get_preprocessed_seq_data(self, raw_data, cls):
    method _calculate_similarities (line 314) | def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
    method _prepare_gt_annotations (line 318) | def _prepare_gt_annotations(self):
    method _get_tracker_seq_tracks (line 338) | def _get_tracker_seq_tracks(self, tracker, seq_id):

FILE: TrackEval/trackeval/eval.py
  class Evaluator (line 12) | class Evaluator:
    method get_default_eval_config (line 16) | def get_default_eval_config():
    method __init__ (line 39) | def __init__(self, config=None):
    method evaluate (line 49) | def evaluate(self, dataset_list, metrics_list):
  function eval_sequence (line 188) | def eval_sequence(seq, dataset, tracker, class_list, metrics_list, metri...

FILE: TrackEval/trackeval/metrics/_base_metric.py
  class _BaseMetric (line 8) | class _BaseMetric(ABC):
    method __init__ (line 10) | def __init__(self):
    method eval_sequence (line 26) | def eval_sequence(self, data):
    method combine_sequences (line 30) | def combine_sequences(self, all_res):
    method combine_classes_class_averaged (line 34) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 38) | def combine_classes_det_averaged(self, all_res):
    method plot_single_tracker_results (line 41) | def plot_single_tracker_results(self, all_res, tracker, output_folder,...
    method get_name (line 52) | def get_name(cls):
    method _combine_sum (line 56) | def _combine_sum(all_res, field):
    method _combine_weighted_av (line 61) | def _combine_weighted_av(all_res, field, comb_res, weight_field):
    method print_table (line 66) | def print_table(self, table_res, tracker, cls, output_fol=None):
    method _summary_row (line 87) | def _summary_row(self, results_):
    method _row_print (line 101) | def _row_print(out_file, *argv):
    method summary_results (line 113) | def summary_results(self, table_res):
    method detailed_results (line 117) | def detailed_results(self, table_res):
    method _detailed_row (line 136) | def _detailed_row(self, res):

FILE: TrackEval/trackeval/metrics/clear.py
  class CLEAR (line 8) | class CLEAR(_BaseMetric):
    method get_default_config (line 12) | def get_default_config():
    method __init__ (line 20) | def __init__(self, config=None):
    method eval_sequence (line 38) | def eval_sequence(self, data):
    method combine_sequences (line 131) | def combine_sequences(self, all_res):
    method combine_classes_det_averaged (line 139) | def combine_classes_det_averaged(self, all_res):
    method combine_classes_class_averaged (line 147) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method _compute_final_fields (line 167) | def _compute_final_fields(res):

FILE: TrackEval/trackeval/metrics/count.py
  class Count (line 6) | class Count(_BaseMetric):
    method __init__ (line 8) | def __init__(self, config=None):
    method eval_sequence (line 15) | def eval_sequence(self, data):
    method combine_sequences (line 25) | def combine_sequences(self, all_res):
    method combine_classes_class_averaged (line 32) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 39) | def combine_classes_det_averaged(self, all_res):

FILE: TrackEval/trackeval/metrics/hota.py
  class HOTA (line 9) | class HOTA(_BaseMetric):
    method __init__ (line 14) | def __init__(self, config=None):
    method eval_sequence (line 25) | def eval_sequence(self, data):
    method combine_sequences (line 119) | def combine_sequences(self, all_res):
    method combine_classes_class_averaged (line 131) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 153) | def combine_classes_det_averaged(self, all_res):
    method _compute_final_fields (line 166) | def _compute_final_fields(res):
    method plot_single_tracker_results (line 181) | def plot_single_tracker_results(self, table_res, tracker, cls, output_...

FILE: TrackEval/trackeval/metrics/identity.py
  class Identity (line 8) | class Identity(_BaseMetric):
    method get_default_config (line 12) | def get_default_config():
    method __init__ (line 20) | def __init__(self, config=None):
    method eval_sequence (line 32) | def eval_sequence(self, data):
    method combine_classes_class_averaged (line 91) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 111) | def combine_classes_det_averaged(self, all_res):
    method combine_sequences (line 119) | def combine_sequences(self, all_res):
    method _compute_final_fields (line 128) | def _compute_final_fields(res):

FILE: TrackEval/trackeval/metrics/ideucl.py
  class IDEucl (line 9) | class IDEucl(_BaseMetric):
    method get_default_config (line 13) | def get_default_config():
    method __init__ (line 21) | def __init__(self, config=None):
    method eval_sequence (line 33) | def eval_sequence(self, data):
    method combine_classes_class_averaged (line 88) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 102) | def combine_classes_det_averaged(self, all_res):
    method combine_sequences (line 110) | def combine_sequences(self, all_res):
    method _compute_centroid (line 120) | def _compute_centroid(box):
    method _compute_final_fields (line 130) | def _compute_final_fields(res, res_len):

FILE: TrackEval/trackeval/metrics/j_and_f.py
  class JAndF (line 10) | class JAndF(_BaseMetric):
    method __init__ (line 12) | def __init__(self, config=None):
    method eval_sequence (line 21) | def eval_sequence(self, data):
    method combine_sequences (line 124) | def combine_sequences(self, all_res):
    method combine_classes_class_averaged (line 131) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 140) | def combine_classes_det_averaged(self, all_res):
    method _seg2bmap (line 148) | def _seg2bmap(seg, width=None, height=None):
    method _compute_f (line 207) | def _compute_f(gt_data, tracker_data, tracker_data_id, gt_id, bound_th):
    method _compute_j (line 275) | def _compute_j(gt_data, tracker_data, num_gt_ids, num_tracker_ids, num...

FILE: TrackEval/trackeval/metrics/track_map.py
  class TrackMAP (line 9) | class TrackMAP(_BaseMetric):
    method get_default_metric_config (line 13) | def get_default_metric_config():
    method __init__ (line 33) | def __init__(self, config=None):
    method eval_sequence (line 62) | def eval_sequence(self, data):
    method combine_sequences (line 170) | def combine_sequences(self, all_res):
    method combine_classes_class_averaged (line 277) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 295) | def combine_classes_det_averaged(self, all_res):
    method _compute_track_ig_masks (line 312) | def _compute_track_ig_masks(self, num_ids, track_lengths=None, track_a...
    method _compute_bb_track_iou (line 350) | def _compute_bb_track_iou(dt_track, gt_track, boxformat='xywh'):
    method _compute_mask_track_iou (line 401) | def _compute_mask_track_iou(dt_track, gt_track):
    method _compute_track_ious (line 434) | def _compute_track_ious(dt, gt, iou_function='bbox', boxformat='xywh'):
    method _row_print (line 455) | def _row_print(*argv):

FILE: TrackEval/trackeval/metrics/vace.py
  class VACE (line 7) | class VACE(_BaseMetric):
    method __init__ (line 18) | def __init__(self, config=None):
    method eval_sequence (line 31) | def eval_sequence(self, data):
    method combine_classes_class_averaged (line 95) | def combine_classes_class_averaged(self, all_res, ignore_empty_classes...
    method combine_classes_det_averaged (line 108) | def combine_classes_det_averaged(self, all_res):
    method combine_sequences (line 116) | def combine_sequences(self, all_res):
    method _compute_final_fields (line 125) | def _compute_final_fields(additive):

FILE: TrackEval/trackeval/plotting.py
  function plot_compare_trackers (line 7) | def plot_compare_trackers(tracker_folder, tracker_list, cls, output_fold...
  function get_default_plots_list (line 22) | def get_default_plots_list():
  function load_multiple_tracker_summaries (line 38) | def load_multiple_tracker_summaries(tracker_folder, tracker_list, cls):
  function create_comparison_plot (line 53) | def create_comparison_plot(data, out_loc, y_label, x_label, sort_label, ...
  function _get_boundaries (line 140) | def _get_boundaries(x_values, y_values, round_val):
  function geometric_mean (line 157) | def geometric_mean(x, y):
  function jaccard (line 161) | def jaccard(x, y):
  function multiplication (line 167) | def multiplication(x, y):
  function _plot_bg_contour (line 178) | def _plot_bg_contour(bg_function, plot_boundaries, gap_val):
  function _plot_pareto_optimal_lines (line 204) | def _plot_pareto_optimal_lines(x_values, y_values):

FILE: TrackEval/trackeval/utils.py
  function init_config (line 8) | def init_config(config, default_config, name=None):
  function update_config (line 23) | def update_config(config):
  function get_code_path (line 55) | def get_code_path():
  function validate_metrics_list (line 60) | def validate_metrics_list(metrics_list):
  function write_summary_results (line 77) | def write_summary_results(summaries, cls, output_folder):
  function write_detailed_results (line 108) | def write_detailed_results(details, cls, output_folder):
  function load_detail (line 124) | def load_detail(file):
  class TrackEvalException (line 144) | class TrackEvalException(Exception):

FILE: deploy/ONNXRuntime/onnx_inference.py
  function make_parser (line 17) | def make_parser():
  class Predictor (line 75) | class Predictor(object):
    method __init__ (line 76) | def __init__(self, args):
    method inference (line 83) | def inference(self, ori_img, timer):
  function imageflow_demo (line 110) | def imageflow_demo(predictor, args):
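
The Predictor in onnx_inference.py wraps an onnxruntime session around the exported detector. A minimal, self-contained sketch of that pattern; the model path and input shape below are placeholders, and the real script adds letterbox preprocessing and NMS postprocessing.

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("bytetrack_s.onnx")   # placeholder path
input_name = session.get_inputs()[0].name

# Dummy NCHW float32 input standing in for a preprocessed frame.
img = np.zeros((1, 3, 608, 1088), dtype=np.float32)

outputs = session.run(None, {input_name: img})
print([o.shape for o in outputs])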

FILE: deploy/TensorRT/cpp/include/BYTETracker.h
  type Object (line 5) | struct Object
  class BYTETracker (line 12) | class BYTETracker

FILE: deploy/TensorRT/cpp/include/STrack.h
  type TrackState (line 9) | enum TrackState { New = 0, Tracked, Lost, Removed }
  class STrack (line 11) | class STrack

FILE: deploy/TensorRT/cpp/include/dataType.h
  type DETECTBOX (line 8) | typedef Eigen::Matrix<float, 1, 4, Eigen::RowMajor> DETECTBOX;
  type DETECTBOXSS (line 9) | typedef Eigen::Matrix<float, -1, 4, Eigen::RowMajor> DETECTBOXSS;
  type FEATURE (line 10) | typedef Eigen::Matrix<float, 1, 128, Eigen::RowMajor> FEATURE;
  type Eigen (line 11) | typedef Eigen::Matrix<float, Eigen::Dynamic, 128, Eigen::RowMajor> FEATU...
  type KAL_MEAN (line 16) | typedef Eigen::Matrix<float, 1, 8, Eigen::RowMajor> KAL_MEAN;
  type KAL_COVA (line 17) | typedef Eigen::Matrix<float, 8, 8, Eigen::RowMajor> KAL_COVA;
  type KAL_HMEAN (line 18) | typedef Eigen::Matrix<float, 1, 4, Eigen::RowMajor> KAL_HMEAN;
  type KAL_HCOVA (line 19) | typedef Eigen::Matrix<float, 4, 4, Eigen::RowMajor> KAL_HCOVA;
  type TRACHER_MATCHD (line 29) | typedef struct t {
  type DYNAMICM (line 36) | typedef Eigen::Matrix<float, -1, -1, Eigen::RowMajor> DYNAMICM;

FILE: deploy/TensorRT/cpp/include/kalmanFilter.h
  namespace byte_kalman (line 5) | namespace byte_kalman

FILE: deploy/TensorRT/cpp/include/lapjv.h
  type int_t (line 53) | typedef signed int int_t;
  type uint_t (line 54) | typedef unsigned int uint_t;
  type cost_t (line 55) | typedef double cost_t;
  type boolean (line 56) | typedef char boolean;
  type fp_t (line 57) | typedef enum fp_t { FP_1 = 1, FP_2 = 2, FP_DYNAMIC = 3 } fp_t;

FILE: deploy/TensorRT/cpp/include/logging.h
  class LogStreamConsumerBuffer (line 31) | class LogStreamConsumerBuffer : public std::stringbuf
  class LogStreamConsumerBase (line 106) | class LogStreamConsumerBase
  function severityPrefix (line 160) | static std::string severityPrefix(Severity severity)
  type TestResult (line 213) | enum class TestResult
  function LOG_VERBOSE (line 447) | inline LogStreamConsumer LOG_VERBOSE(const Logger& logger)
  function LOG_INFO (line 459) | inline LogStreamConsumer LOG_INFO(const Logger& logger)
  function LOG_WARN (line 471) | inline LogStreamConsumer LOG_WARN(const Logger& logger)
  function LOG_ERROR (line 483) | inline LogStreamConsumer LOG_ERROR(const Logger& logger)
  function LOG_FATAL (line 496) | inline LogStreamConsumer LOG_FATAL(const Logger& logger)

FILE: deploy/TensorRT/cpp/src/bytetrack.cpp
  function static_resize (line 38) | Mat static_resize(Mat& img) {
  type GridAndStride (line 50) | struct GridAndStride
  function generate_grids_and_stride (line 57) | static void generate_grids_and_stride(const int target_w, const int targ...
  function intersection_area (line 73) | static inline float intersection_area(const Object& a, const Object& b)
  function qsort_descent_inplace (line 79) | static void qsort_descent_inplace(vector<Object>& faceobjects, int left,...
  function qsort_descent_inplace (line 116) | static void qsort_descent_inplace(vector<Object>& objects)
  function nms_sorted_bboxes (line 124) | static void nms_sorted_bboxes(const vector<Object>& faceobjects, vector<...
  function generate_yolox_proposals (line 159) | static void generate_yolox_proposals(vector<GridAndStride> grid_strides,...
  function decode_outputs (line 228) | static void decode_outputs(float* prob, vector<Object>& objects, float s...
  function doInference (line 354) | void doInference(IExecutionContext& context, float* input, float* output...
  function main (line 391) | int main(int argc, char** argv) {

FILE: deploy/TensorRT/cpp/src/kalmanFilter.cpp
  namespace byte_kalman (line 4) | namespace byte_kalman
    function KalmanFilter::initiate (line 33) | KAL_DATA KalmanFilter::initiate(const DETECTBOX &measurement)
    function KalmanFilter::project (line 86) | KAL_HDATA KalmanFilter::project(const KAL_MEAN &mean, const KAL_COVA &...
    function KAL_DATA (line 100) | KAL_DATA

FILE: deploy/TensorRT/cpp/src/lapjv.cpp
  function _ccrrt_dense (line 9) | int_t _ccrrt_dense(const uint_t n, cost_t *cost[],
  function _carr_dense (line 76) | int_t _carr_dense(
  function _find_dense (line 153) | uint_t _find_dense(const uint_t n, uint_t lo, cost_t *d, int_t *cols, in...
  function _scan_dense (line 174) | int_t _scan_dense(const uint_t n, cost_t *cost[],
  function find_path_dense (line 218) | int_t find_path_dense(
  function _ca_dense (line 283) | int_t _ca_dense(
  function lapjv_internal (line 321) | int lapjv_internal(

FILE: deploy/TensorRT/cpp/src/utils.cpp
  function BYTETracker::get_color (line 425) | Scalar BYTETracker::get_color(int idx)

FILE: deploy/ncnn/cpp/include/BYTETracker.h
  type Object (line 5) | struct Object
  class BYTETracker (line 12) | class BYTETracker

FILE: deploy/ncnn/cpp/include/STrack.h
  type TrackState (line 9) | enum TrackState { New = 0, Tracked, Lost, Removed }
  class STrack (line 11) | class STrack

FILE: deploy/ncnn/cpp/include/dataType.h
  type DETECTBOX (line 8) | typedef Eigen::Matrix<float, 1, 4, Eigen::RowMajor> DETECTBOX;
  type DETECTBOXSS (line 9) | typedef Eigen::Matrix<float, -1, 4, Eigen::RowMajor> DETECTBOXSS;
  type FEATURE (line 10) | typedef Eigen::Matrix<float, 1, 128, Eigen::RowMajor> FEATURE;
  type Eigen (line 11) | typedef Eigen::Matrix<float, Eigen::Dynamic, 128, Eigen::RowMajor> FEATU...
  type KAL_MEAN (line 16) | typedef Eigen::Matrix<float, 1, 8, Eigen::RowMajor> KAL_MEAN;
  type KAL_COVA (line 17) | typedef Eigen::Matrix<float, 8, 8, Eigen::RowMajor> KAL_COVA;
  type KAL_HMEAN (line 18) | typedef Eigen::Matrix<float, 1, 4, Eigen::RowMajor> KAL_HMEAN;
  type KAL_HCOVA (line 19) | typedef Eigen::Matrix<float, 4, 4, Eigen::RowMajor> KAL_HCOVA;
  type TRACHER_MATCHD (line 29) | typedef struct t {
  type DYNAMICM (line 36) | typedef Eigen::Matrix<float, -1, -1, Eigen::RowMajor> DYNAMICM;

FILE: deploy/ncnn/cpp/include/kalmanFilter.h
  namespace byte_kalman (line 5) | namespace byte_kalman

FILE: deploy/ncnn/cpp/include/lapjv.h
  type int_t (line 53) | typedef signed int int_t;
  type uint_t (line 54) | typedef unsigned int uint_t;
  type cost_t (line 55) | typedef double cost_t;
  type boolean (line 56) | typedef char boolean;
  type fp_t (line 57) | typedef enum fp_t { FP_1 = 1, FP_2 = 2, FP_DYNAMIC = 3 } fp_t;

FILE: deploy/ncnn/cpp/src/bytetrack.cpp
  function static_resize (line 24) | Mat static_resize(Mat& img) {
  class YoloV5Focus (line 37) | class YoloV5Focus : public ncnn::Layer
    method YoloV5Focus (line 40) | YoloV5Focus()
    method forward (line 45) | virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob,...
  type GridAndStride (line 85) | struct GridAndStride
  function intersection_area (line 92) | static inline float intersection_area(const Object& a, const Object& b)
  function qsort_descent_inplace (line 98) | static void qsort_descent_inplace(std::vector<Object>& faceobjects, int ...
  function qsort_descent_inplace (line 135) | static void qsort_descent_inplace(std::vector<Object>& objects)
  function nms_sorted_bboxes (line 143) | static void nms_sorted_bboxes(const std::vector<Object>& faceobjects, st...
  function generate_grids_and_stride (line 177) | static void generate_grids_and_stride(const int target_w, const int targ...
  function generate_yolox_proposals (line 198) | static void generate_yolox_proposals(std::vector<GridAndStride> grid_str...
  function detect_yolox (line 245) | static int detect_yolox(ncnn::Mat& in_pad, std::vector<Object>& objects,...
  function main (line 297) | int main(int argc, char** argv)

FILE: deploy/ncnn/cpp/src/kalmanFilter.cpp
  namespace byte_kalman (line 4) | namespace byte_kalman
    function KalmanFilter::initiate (line 33) | KAL_DATA KalmanFilter::initiate(const DETECTBOX &measurement)
    function KalmanFilter::project (line 86) | KAL_HDATA KalmanFilter::project(const KAL_MEAN &mean, const KAL_COVA &...
    function KAL_DATA (line 100) | KAL_DATA

FILE: deploy/ncnn/cpp/src/lapjv.cpp
  function _ccrrt_dense (line 9) | int_t _ccrrt_dense(const uint_t n, cost_t *cost[],
  function _carr_dense (line 76) | int_t _carr_dense(
  function _find_dense (line 153) | uint_t _find_dense(const uint_t n, uint_t lo, cost_t *d, int_t *cols, in...
  function _scan_dense (line 174) | int_t _scan_dense(const uint_t n, cost_t *cost[],
  function find_path_dense (line 218) | int_t find_path_dense(
  function _ca_dense (line 283) | int_t _ca_dense(
  function lapjv_internal (line 321) | int lapjv_internal(

FILE: deploy/ncnn/cpp/src/utils.cpp
  function BYTETracker::get_color (line 425) | Scalar BYTETracker::get_color(int idx)

FILE: deploy/scripts/export_onnx.py
  function make_parser (line 14) | def make_parser():
  function main (line 50) | def main():
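
export_onnx.py builds the detector and exports it with torch.onnx.export. A minimal sketch of that call using a stand-in module; the real script loads a YOLOX Exp model and checkpoint and derives the input size from the experiment config.

import torch

model = torch.nn.Conv2d(3, 8, 3)      # stand-in for the YOLOX model
model.eval()

dummy = torch.randn(1, 3, 608, 1088)  # assumed input resolution
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["images"], output_names=["output"],
    opset_version=11,
)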

FILE: deploy/scripts/trt.py
  function make_parser (line 14) | def make_parser():
  function main (line 31) | def main():

FILE: exps/default/nano.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_model (line 22) | def get_model(self, sublinear=False):

FILE: exps/default/yolov3.py
  class Exp (line 12) | class Exp(MyExp):
    method __init__ (line 13) | def __init__(self):
    method get_model (line 19) | def get_model(self, sublinear=False):
    method get_data_loader (line 35) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):

FILE: exps/default/yolox_l.py
  class Exp (line 10) | class Exp(MyExp):
    method __init__ (line 11) | def __init__(self):

FILE: exps/default/yolox_m.py
  class Exp (line 10) | class Exp(MyExp):
    method __init__ (line 11) | def __init__(self):

FILE: exps/default/yolox_s.py
  class Exp (line 10) | class Exp(MyExp):
    method __init__ (line 11) | def __init__(self):

FILE: exps/default/yolox_tiny.py
  class Exp (line 10) | class Exp(MyExp):
    method __init__ (line 11) | def __init__(self):

FILE: exps/default/yolox_x.py
  class Exp (line 10) | class Exp(MyExp):
    method __init__ (line 11) | def __init__(self):

FILE: exps/example/mot/yolox_dancetrack_test.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 34) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 96) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 142) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_dancetrack_test_hybrid_sort.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 47) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 109) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 155) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_dancetrack_test_hybrid_sort_reid.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 55) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 117) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 163) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_dancetrack_val.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 34) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 96) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 142) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_dancetrack_val_hybrid_sort.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 48) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 110) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 156) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_dancetrack_val_hybrid_sort_reid.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 56) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 118) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 164) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_l_mix_det.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_m_mix_det.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_nano_mix_det.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_model (line 33) | def get_model(self, sublinear=False):
    method get_data_loader (line 52) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 114) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 146) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_s_mix_det.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_tiny_mix_det.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 33) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 95) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 127) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_ablation.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 127) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_ablation_hybrid_sort.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 45) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 107) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 140) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_ablation_hybrid_sort_reid.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 53) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 115) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 148) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_ch.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_det.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 127) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_det_hybrid_sort.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 45) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 107) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 140) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_det_hybrid_sort_reid.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 53) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 115) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 148) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_det_train.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 127) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_mot20_ch.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 33) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 95) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 128) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_mot20_ch_hybrid_sort.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 47) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 109) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 142) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_mot20_ch_hybrid_sort_reid.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 55) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 117) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 150) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_mot20_ch_train.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 33) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 95) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 128) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mix_mot20_ch_valhalf.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 33) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 95) | def get_eval_loader(self, batch_size, is_distributed, testdev=False, r...
    method get_evaluator (line 128) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mot17_ablation_half_train.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mot17_half.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: exps/example/mot/yolox_x_mot17_train.py
  class Exp (line 11) | class Exp(MyExp):
    method __init__ (line 12) | def __init__(self):
    method get_data_loader (line 32) | def get_data_loader(self, batch_size, is_distributed, no_aug=False):
    method get_eval_loader (line 94) | def get_eval_loader(self, batch_size, is_distributed, testdev=False):
    method get_evaluator (line 126) | def get_evaluator(self, batch_size, is_distributed, testdev=False):

FILE: fast_reid/datasets/generate_cuhksysu_dance_patches.py
  function make_parser (line 10) | def make_parser():
  function generate_trajectories (line 19) | def generate_trajectories(file_path, GroundTrues):
  function main_dancetrack (line 44) | def main_dancetrack(args):
  function tlwh2xyxy (line 113) | def tlwh2xyxy(det, H, W):
  function save_patch (line 125) | def save_patch(img, det, id, save_path, seq=1, frame=1,):
  function main_cuhksysu (line 143) | def main_cuhksysu(args, id_offset, seq_offset=1000):
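
tlwh2xyxy converts a top-left/width/height box into corner coordinates clipped to the image before a ReID patch is cropped. A minimal sketch of what a helper with this signature typically does; the repository's version may round or clip slightly differently.

def tlwh2xyxy(det, H, W):
    # det is [x, y, w, h]; return integer corners clipped to an H x W image.
    x1 = max(0, int(det[0]))
    y1 = max(0, int(det[1]))
    x2 = min(W - 1, int(det[0] + det[2]))
    y2 = min(H - 1, int(det[1] + det[3]))
    return x1, y1, x2, y2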

FILE: fast_reid/datasets/generate_mot_patches.py
  function generate_trajectories (line 9) | def generate_trajectories(file_path, GroundTrues):
  function make_parser (line 34) | def make_parser():
  function main (line 44) | def main(args):

FILE: fast_reid/demo/demo.py
  function setup_cfg (line 34) | def setup_cfg(args):
  function get_parser (line 44) | def get_parser():
  function postprocess (line 76) | def postprocess(features):

FILE: fast_reid/demo/predictor.py
  class FeatureExtractionDemo (line 23) | class FeatureExtractionDemo(object):
    method __init__ (line 24) | def __init__(self, cfg, parallel=False):
    method run_on_image (line 40) | def run_on_image(self, original_image):
    method run_on_loader (line 60) | def run_on_loader(self, data_loader):
  class AsyncPredictor (line 85) | class AsyncPredictor:
    class _StopToken (line 91) | class _StopToken:
    class _PredictWorker (line 94) | class _PredictWorker(mp.Process):
      method __init__ (line 95) | def __init__(self, cfg, task_queue, result_queue):
      method run (line 101) | def run(self):
    method __init__ (line 112) | def __init__(self, cfg, num_gpus: int = 1):
    method put (line 141) | def put(self, image):
    method get (line 145) | def get(self):
    method __len__ (line 161) | def __len__(self):
    method __call__ (line 164) | def __call__(self, image):
    method shutdown (line 168) | def shutdown(self):
    method default_buffer_size (line 173) | def default_buffer_size(self):

FILE: fast_reid/demo/visualize_result.py
  function setup_cfg (line 37) | def setup_cfg(args):
  function get_parser (line 47) | def get_parser():

FILE: fast_reid/docs/conf.py
  class GithubURLDomain (line 30) | class GithubURLDomain(Domain):
    method resolve_any_xref (line 39) | def resolve_any_xref(self, env, fromdocname, builder, target, node, co...
  function autodoc_skip_member (line 260) | def autodoc_skip_member(app, what, name, obj, skip, options):
  function paper_ref_role (line 314) | def paper_ref_role(
  function setup (line 345) | def setup(app):

FILE: fast_reid/fast_reid_interfece.py
  function setup_cfg (line 16) | def setup_cfg(config_file, opts):
  function postprocess (line 28) | def postprocess(features):
  function preprocess (line 35) | def preprocess(image, input_size):
  class FastReIDInterface (line 52) | class FastReIDInterface:
    method __init__ (line 53) | def __init__(self, config_file, weights_path, device, batch_size=16):
    method inference (line 76) | def inference(self, image, detections):
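
FastReIDInterface.inference crops each detection, runs the ReID model in batches, and postprocess typically L2-normalises the resulting embeddings so that cosine similarity reduces to a dot product. A minimal NumPy sketch of that normalisation step; the real code works on torch tensors on the GPU.

import numpy as np

def l2_normalize(features, eps=1e-12):
    # Row-wise L2 normalisation of an (N, D) embedding matrix.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, eps)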

FILE: fast_reid/fastreid/config/config.py
  class CfgNode (line 21) | class CfgNode(_CfgNode):
    method load_yaml_with_base (line 40) | def load_yaml_with_base(filename: str, allow_unsafe: bool = False):
    method merge_from_file (line 100) | def merge_from_file(self, cfg_filename: str, allow_unsafe: bool = False):
    method merge_from_other_cfg (line 115) | def merge_from_other_cfg(self, cfg_other):
    method merge_from_list (line 125) | def merge_from_list(self, cfg_list: list):
    method __setattr__ (line 136) | def __setattr__(self, name: str, val: Any):
  function get_cfg (line 156) | def get_cfg() -> CfgNode:
  function set_global_cfg (line 167) | def set_global_cfg(cfg: CfgNode) -> None:
  function configurable (line 184) | def configurable(init_func=None, *, from_config=None):
  function _get_args_from_config (line 274) | def _get_args_from_config(from_config_func, *args, **kwargs):
  function _called_with_cfg (line 307) | def _called_with_cfg(*args, **kwargs):

FILE: fast_reid/fastreid/data/build.py
  function _train_loader_from_config (line 32) | def _train_loader_from_config(cfg, *, train_set=None, transforms=None, s...
  function build_reid_train_loader (line 77) | def build_reid_train_loader(
  function _test_loader_from_config (line 104) | def _test_loader_from_config(cfg, *, dataset_name=None, test_set=None, n...
  function build_reid_test_loader (line 127) | def build_reid_test_loader(test_set, test_batch_size, num_query, num_wor...
  function trivial_batch_collator (line 164) | def trivial_batch_collator(batch):
  function fast_batch_collator (line 171) | def fast_batch_collator(batched_inputs):
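
fast_batch_collator assembles a batch without going through the default PyTorch collate logic. A simplified sketch, assuming each per-sample dict contains tensors of identical shape plus arbitrary metadata; the fastreid version handles more field types.

import torch

def fast_batch_collator(batched_inputs):
    # Stack tensors into one batch tensor, recurse into dicts,
    # and keep any other field type as a plain list.
    elem = batched_inputs[0]
    if isinstance(elem, torch.Tensor):
        out = torch.zeros((len(batched_inputs), *elem.size()), dtype=elem.dtype)
        for i, tensor in enumerate(batched_inputs):
            out[i] += tensor
        return out
    if isinstance(elem, dict):
        return {k: fast_batch_collator([d[k] for d in batched_inputs]) for k in elem}
    return batched_inputs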

FILE: fast_reid/fastreid/data/common.py
  class CommDataset (line 12) | class CommDataset(Dataset):
    method __init__ (line 15) | def __init__(self, img_items, transform=None, relabel=True):
    method __len__ (line 32) | def __len__(self):
    method __getitem__ (line 35) | def __getitem__(self, index):
    method num_classes (line 53) | def num_classes(self):
    method num_cameras (line 57) | def num_cameras(self):

FILE: fast_reid/fastreid/data/data_utils.py
  function read_image (line 17) | def read_image(file_name, format=None):
  class BackgroundGenerator (line 82) | class BackgroundGenerator(threading.Thread):
    method __init__ (line 91) | def __init__(self, generator, local_rank, max_prefetch=10):
    method run (line 126) | def run(self):
    method next (line 134) | def next(self):
    method __next__ (line 141) | def __next__(self):
    method __iter__ (line 144) | def __iter__(self):
  class DataLoaderX (line 148) | class DataLoaderX(DataLoader):
    method __init__ (line 149) | def __init__(self, local_rank, **kwargs):
    method __iter__ (line 156) | def __iter__(self):
    method _shutdown_background_thread (line 162) | def _shutdown_background_thread(self):
    method preload (line 178) | def preload(self):
    method __next__ (line 189) | def __next__(self):
    method shutdown (line 200) | def shutdown(self):

FILE: fast_reid/fastreid/data/datasets/AirportALERT.py
  class AirportALERT (line 16) | class AirportALERT(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 35) | def process_train(self, dir_path, train_file):

FILE: fast_reid/fastreid/data/datasets/bases.py
  class Dataset (line 17) | class Dataset(object):
    method __init__ (line 33) | def __init__(self, train, query, gallery, transform=None, mode='train',
    method train (line 57) | def train(self):
    method query (line 63) | def query(self):
    method gallery (line 69) | def gallery(self):
    method __getitem__ (line 74) | def __getitem__(self, index):
    method __len__ (line 77) | def __len__(self):
    method __radd__ (line 80) | def __radd__(self, other):
    method parse_data (line 87) | def parse_data(self, data):
    method get_num_pids (line 100) | def get_num_pids(self, data):
    method get_num_cams (line 104) | def get_num_cams(self, data):
    method show_summary (line 108) | def show_summary(self):
    method combine_all (line 112) | def combine_all(self):
    method check_before_run (line 129) | def check_before_run(self, required_files):
  class ImageDataset (line 142) | class ImageDataset(Dataset):
    method show_train (line 151) | def show_train(self):
    method show_test (line 166) | def show_test(self):

FILE: fast_reid/fastreid/data/datasets/caviara.py
  class CAVIARa (line 17) | class CAVIARa(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/cuhk03.py
  class CUHK03 (line 16) | class CUHK03(ImageDataset):
    method __init__ (line 34) | def __init__(self, root='datasets', split_id=0, cuhk03_labeled=True, c...
    method preprocess_split (line 88) | def preprocess_split(self):

FILE: fast_reid/fastreid/data/datasets/cuhksysu.py
  class CUHKSYSU (line 18) | class CUHKSYSU(ImageDataset):
    method __init__ (line 30) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 67) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/cuhksysu_dancetrack.py
  class CUHKSYSU_DanceTrack (line 18) | class CUHKSYSU_DanceTrack(ImageDataset):
    method __init__ (line 30) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 67) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/cuhksysu_mot17.py
  class CUHKSYSU_MOT17 (line 18) | class CUHKSYSU_MOT17(ImageDataset):
    method __init__ (line 30) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 67) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/cuhksysu_mot20.py
  class CUHKSYSU_MOT20 (line 18) | class CUHKSYSU_MOT20(ImageDataset):
    method __init__ (line 30) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 67) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/dancetrack.py
  class DanceTrack (line 18) | class DanceTrack(ImageDataset):
    method __init__ (line 30) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 67) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/dukemtmcreid.py
  class DukeMTMC (line 16) | class DukeMTMC(ImageDataset):
    method __init__ (line 34) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 56) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/grid.py
  class GRID (line 17) | class GRID(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/iLIDS.py
  class iLIDS (line 17) | class iLIDS(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/lpw.py
  class LPW (line 17) | class LPW(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/market1501.py
  class Market1501 (line 17) | class Market1501(ImageDataset):
    method __init__ (line 34) | def __init__(self, root='datasets', market1501_500k=False, **kwargs):
    method process_dir (line 72) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/mot17.py
  class MOT17 (line 18) | class MOT17(ImageDataset):
    method __init__ (line 35) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 72) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/mot20.py
  class MOT20 (line 17) | class MOT20(ImageDataset):
    method __init__ (line 34) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 71) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/mot20_.py
  class Market1501 (line 17) | class Market1501(ImageDataset):
    method __init__ (line 34) | def __init__(self, root='datasets', market1501_500k=False, **kwargs):
    method process_dir (line 72) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/msmt17.py
  class MSMT17 (line 33) | class MSMT17(ImageDataset):
    method __init__ (line 48) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 98) | def process_dir(self, dir_path, list_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/pes3d.py
  class PeS3D (line 17) | class PeS3D(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/pku.py
  class PKU (line 17) | class PKU(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/prai.py
  class PRAI (line 17) | class PRAI(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/prid.py
  class PRID (line 16) | class PRID(ImageDataset):
    method __init__ (line 22) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 33) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/saivt.py
  class SAIVT (line 17) | class SAIVT(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/sensereid.py
  class SenseReID (line 17) | class SenseReID(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/shinpuhkan.py
  class Shinpuhkan (line 16) | class Shinpuhkan(ImageDataset):
    method __init__ (line 22) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 33) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/sysu_mm.py
  class SYSU_mm (line 17) | class SYSU_mm(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/thermalworld.py
  class Thermalworld (line 17) | class Thermalworld(ImageDataset):
    method __init__ (line 23) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 34) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/vehicleid.py
  class VehicleID (line 15) | class VehicleID(ImageDataset):
    method __init__ (line 30) | def __init__(self, root='datasets', test_list='', **kwargs):
    method process_dir (line 53) | def process_dir(self, list_file, is_train=True):
  class SmallVehicleID (line 85) | class SmallVehicleID(VehicleID):
    method __init__ (line 92) | def __init__(self, root='datasets', **kwargs):
  class MediumVehicleID (line 100) | class MediumVehicleID(VehicleID):
    method __init__ (line 107) | def __init__(self, root='datasets', **kwargs):
  class LargeVehicleID (line 115) | class LargeVehicleID(VehicleID):
    method __init__ (line 122) | def __init__(self, root='datasets', **kwargs):

FILE: fast_reid/fastreid/data/datasets/veri.py
  class VeRi (line 16) | class VeRi(ImageDataset):
    method __init__ (line 32) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 53) | def process_dir(self, dir_path, is_train=True):

FILE: fast_reid/fastreid/data/datasets/veriwild.py
  class VeRiWild (line 14) | class VeRiWild(ImageDataset):
    method __init__ (line 29) | def __init__(self, root='datasets', query_list='', gallery_list='', **...
    method process_dir (line 59) | def process_dir(self, img_list, is_train=True):
    method process_vehicle (line 76) | def process_vehicle(self, vehicle_info):
  class SmallVeRiWild (line 96) | class SmallVeRiWild(VeRiWild):
    method __init__ (line 103) | def __init__(self, root='datasets', **kwargs):
  class MediumVeRiWild (line 112) | class MediumVeRiWild(VeRiWild):
    method __init__ (line 119) | def __init__(self, root='datasets', **kwargs):
  class LargeVeRiWild (line 128) | class LargeVeRiWild(VeRiWild):
    method __init__ (line 135) | def __init__(self, root='datasets', **kwargs):

FILE: fast_reid/fastreid/data/datasets/viper.py
  class VIPeR (line 17) | class VIPeR(ImageDataset):
    method __init__ (line 21) | def __init__(self, root='datasets', **kwargs):
    method process_train (line 32) | def process_train(self, train_path):

FILE: fast_reid/fastreid/data/datasets/wildtracker.py
  class WildTrackCrop (line 15) | class WildTrackCrop(ImageDataset):
    method __init__ (line 33) | def __init__(self, root='datasets', **kwargs):
    method process_dir (line 45) | def process_dir(self, dir_path):

FILE: fast_reid/fastreid/data/samplers/data_sampler.py
  class TrainingSampler (line 15) | class TrainingSampler(Sampler):
    method __init__ (line 26) | def __init__(self, size: int, shuffle: bool = True, seed: Optional[int...
    method __iter__ (line 45) | def __iter__(self):
    method _infinite_indices (line 49) | def _infinite_indices(self):
  class InferenceSampler (line 58) | class InferenceSampler(Sampler):
    method __init__ (line 66) | def __init__(self, size: int):
    method __iter__ (line 81) | def __iter__(self):
    method __len__ (line 84) | def __len__(self):
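
TrainingSampler produces an infinite, optionally shuffled stream of dataset indices so that an "epoch" becomes a purely logical notion. A minimal single-process sketch of that idea; the real sampler also shards the stream across distributed workers using the process rank.

import torch
from torch.utils.data.sampler import Sampler

class InfiniteShuffleSampler(Sampler):
    # Yields indices 0..size-1 forever, reshuffling each pass with a fixed seed.
    def __init__(self, size, shuffle=True, seed=0):
        self._size = size
        self._shuffle = shuffle
        self._seed = seed

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self._seed)
        while True:
            if self._shuffle:
                yield from torch.randperm(self._size, generator=g).tolist()
            else:
                yield from range(self._size)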

FILE: fast_reid/fastreid/data/samplers/imbalance_sampler.py
  class ImbalancedDatasetSampler (line 21) | class ImbalancedDatasetSampler(Sampler):
    method __init__ (line 28) | def __init__(self, data_source: List, size: int = None, seed: Optional...
    method _get_label (line 53) | def _get_label(self, dataset, idx):
    method __iter__ (line 59) | def __iter__(self):
    method _infinite_indices (line 63) | def _infinite_indices(self):

FILE: fast_reid/fastreid/data/samplers/triplet_sampler.py
  function no_index (line 18) | def no_index(a, b):
  function reorder_index (line 23) | def reorder_index(batch_indices, world_size):
  class BalancedIdentitySampler (line 41) | class BalancedIdentitySampler(Sampler):
    method __init__ (line 42) | def __init__(self, data_source: List, mini_batch_size: int, num_instan...
    method __iter__ (line 72) | def __iter__(self):
    method _infinite_indices (line 76) | def _infinite_indices(self):
  class SetReWeightSampler (line 122) | class SetReWeightSampler(Sampler):
    method __init__ (line 123) | def __init__(self, data_source: str, mini_batch_size: int, num_instanc...
    method __iter__ (line 173) | def __iter__(self):
    method _infinite_indices (line 177) | def _infinite_indices(self):
  class NaiveIdentitySampler (line 198) | class NaiveIdentitySampler(Sampler):
    method __init__ (line 208) | def __init__(self, data_source: str, mini_batch_size: int, num_instanc...
    method __iter__ (line 230) | def __iter__(self):
    method _infinite_indices (line 234) | def _infinite_indices(self):
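
The triplet samplers above build identity-balanced batches: P distinct identities with K instances each, so that mini_batch_size = P * K and a triplet loss always finds positives in the batch. A minimal sketch of drawing one such batch; the real samplers yield infinite index streams, reorder indices for distributed training, and handle identities with fewer than K images.

import random
from collections import defaultdict

def pk_batch(data_source, mini_batch_size, num_instances):
    # data_source: list of (img_path, pid, camid) tuples.
    assert mini_batch_size % num_instances == 0
    pid_to_indices = defaultdict(list)
    for idx, (_, pid, _) in enumerate(data_source):
        pid_to_indices[pid].append(idx)
    num_pids = mini_batch_size // num_instances
    batch = []
    for pid in random.sample(list(pid_to_indices), num_pids):
        pool = pid_to_indices[pid]
        if len(pool) >= num_instances:
            batch.extend(random.sample(pool, num_instances))
        else:
            batch.extend(random.choices(pool, k=num_instances))
    return batch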

FILE: fast_reid/fastreid/data/transforms/autoaugment.py
  function _interpolation (line 45) | def _interpolation(kwargs):
  function _check_args_tf (line 53) | def _check_args_tf(kwargs):
  function shear_x (line 59) | def shear_x(img, factor, **kwargs):
  function shear_y (line 64) | def shear_y(img, factor, **kwargs):
  function translate_x_rel (line 69) | def translate_x_rel(img, pct, **kwargs):
  function translate_y_rel (line 75) | def translate_y_rel(img, pct, **kwargs):
  function translate_x_abs (line 81) | def translate_x_abs(img, pixels, **kwargs):
  function translate_y_abs (line 86) | def translate_y_abs(img, pixels, **kwargs):
  function rotate (line 91) | def rotate(img, degrees, **kwargs):
  function auto_contrast (line 123) | def auto_contrast(img, **__):
  function invert (line 127) | def invert(img, **__):
  function equalize (line 131) | def equalize(img, **__):
  function solarize (line 135) | def solarize(img, thresh, **__):
  function solarize_add (line 139) | def solarize_add(img, add, thresh=128, **__):
  function posterize (line 154) | def posterize(img, bits_to_keep, **__):
  function contrast (line 160) | def contrast(img, factor, **__):
  function color (line 164) | def color(img, factor, **__):
  function brightness (line 168) | def brightness(img, factor, **__):
  function sharpness (line 172) | def sharpness(img, factor, **__):
  function _randomly_negate (line 176) | def _randomly_negate(v):
  function _rotate_level_to_arg (line 181) | def _rotate_level_to_arg(level, _hparams):
  function _enhance_level_to_arg (line 188) | def _enhance_level_to_arg(level, _hparams):
  function _enhance_increasing_level_to_arg (line 193) | def _enhance_increasing_level_to_arg(level, _hparams):
  function _shear_level_to_arg (line 201) | def _shear_level_to_arg(level, _hparams):
  function _translate_abs_level_to_arg (line 208) | def _translate_abs_level_to_arg(level, hparams):
  function _translate_rel_level_to_arg (line 215) | def _translate_rel_level_to_arg(level, hparams):
  function _posterize_level_to_arg (line 223) | def _posterize_level_to_arg(level, _hparams):
  function _posterize_increasing_level_to_arg (line 230) | def _posterize_increasing_level_to_arg(level, hparams):
  function _posterize_original_level_to_arg (line 237) | def _posterize_original_level_to_arg(level, _hparams):
  function _solarize_level_to_arg (line 244) | def _solarize_level_to_arg(level, _hparams):
  function _solarize_increasing_level_to_arg (line 250) | def _solarize_increasing_level_to_arg(level, _hparams):
  function _solarize_add_level_to_arg (line 256) | def _solarize_add_level_to_arg(level, _hparams):
  class AugmentOp (line 317) | class AugmentOp:
    method __init__ (line 319) | def __init__(self, name, prob=0.5, magnitude=10, hparams=None):
    method __call__ (line 337) | def __call__(self, img):
  function auto_augment_policy_v0 (line 348) | def auto_augment_policy_v0(hparams):
  function auto_augment_policy_v0r (line 381) | def auto_augment_policy_v0r(hparams):
  function auto_augment_policy_original (line 415) | def auto_augment_policy_original(hparams):
  function auto_augment_policy_originalr (line 448) | def auto_augment_policy_originalr(hparams):
  function auto_augment_policy (line 481) | def auto_augment_policy(name="original"):
  class AutoAugment (line 495) | class AutoAugment:
    method __init__ (line 497) | def __init__(self):
    method __call__ (line 500) | def __call__(self, img):
  function auto_augment_transform (line 507) | def auto_augment_transform(config_str, hparams):
  function _select_rand_weights (line 594) | def _select_rand_weights(weight_idx=0, transforms=None):
  function rand_augment_ops (line 603) | def rand_augment_ops(magnitude=10, hparams=None, transforms=None):
  class RandAugment (line 610) | class RandAugment:
    method __init__ (line 611) | def __init__(self, ops, num_layers=2, choice_weights=None):
    method __call__ (line 616) | def __call__(self, img):
  function rand_augment_transform (line 625) | def rand_augment_transform(config_str, hparams):
  function augmix_ops (line 689) | def augmix_ops(magnitude=10, hparams=None, transforms=None):
  class AugMixAugment (line 696) | class AugMixAugment:
    method __init__ (line 703) | def __init__(self, ops, alpha=1., width=3, depth=-1, blended=False):
    method _calc_blended_weights (line 710) | def _calc_blended_weights(self, ws, m):
    method _apply_blended (line 720) | def _apply_blended(self, img, mixing_weights, m):
    method _apply_basic (line 736) | def _apply_basic(self, img, mixing_weights, m):
    method __call__ (line 753) | def __call__(self, img):
  function augment_and_mix_transform (line 763) | def augment_and_mix_transform(config_str, hparams):

FILE: fast_reid/fastreid/data/transforms/build.py
  function build_transforms (line 13) | def build_transforms(cfg, is_train=True):

FILE: fast_reid/fastreid/data/transforms/functional.py
  function to_tensor (line 12) | def to_tensor(pic):
  function int_parameter (line 64) | def int_parameter(level, maxval):
  function float_parameter (line 76) | def float_parameter(level, maxval):
  function sample_level (line 88) | def sample_level(n):
  function autocontrast (line 92) | def autocontrast(pil_img, *args):
  function equalize (line 96) | def equalize(pil_img, *args):
  function posterize (line 100) | def posterize(pil_img, level, *args):
  function rotate (line 105) | def rotate(pil_img, level, *args):
  function solarize (line 112) | def solarize(pil_img, level, *args):
  function shear_x (line 117) | def shear_x(pil_img, level):
  function shear_y (line 126) | def shear_y(pil_img, level):
  function translate_x (line 135) | def translate_x(pil_img, level):
  function translate_y (line 144) | def translate_y(pil_img, level):
  function color (line 154) | def color(pil_img, level, *args):
  function contrast (line 160) | def contrast(pil_img, level, *args):
  function brightness (line 166) | def brightness(pil_img, level, *args):
  function sharpness (line 172) | def sharpness(pil_img, level, *args):

FILE: fast_reid/fastreid/data/transforms/transforms.py
  class ToTensor (line 19) | class ToTensor(object):
    method __call__ (line 30) | def __call__(self, pic):
    method __repr__ (line 40) | def __repr__(self):
  class RandomPatch (line 44) | class RandomPatch(object):
    method __init__ (line 57) | def __init__(self, prob_happen=0.5, pool_capacity=50000, min_sample_si...
    method generate_wh (line 71) | def generate_wh(self, W, H):
    method transform_patch (line 82) | def transform_patch(self, patch):
    method __call__ (line 87) | def __call__(self, img):
  class AugMix (line 115) | class AugMix(object):
    method __init__ (line 119) | def __init__(self, prob=0.5, aug_prob_coeff=0.1, mixture_width=3, mixt...
    method __call__ (line 137) | def __call__(self, image):

FILE: fast_reid/fastreid/engine/defaults.py
  function default_argument_parser (line 38) | def default_argument_parser():
  function default_setup (line 72) | def default_setup(cfg, args):
  class DefaultPredictor (line 119) | class DefaultPredictor:
    method __init__ (line 135) | def __init__(self, cfg):
    method __call__ (line 144) | def __call__(self, image):
  class DefaultTrainer (line 157) | class DefaultTrainer(TrainerBase):
    method __init__ (line 190) | def __init__(self, cfg):
    method resume_or_load (line 243) | def resume_or_load(self, resume=True):
    method build_hooks (line 264) | def build_hooks(self):
    method build_writers (line 319) | def build_writers(self):
    method train (line 344) | def train(self):
    method run_step (line 357) | def run_step(self):
    method build_model (line 362) | def build_model(cls, cfg):
    method build_optimizer (line 375) | def build_optimizer(cls, cfg, model):
    method build_lr_scheduler (line 385) | def build_lr_scheduler(cls, cfg, optimizer, iters_per_epoch):
    method build_train_loader (line 393) | def build_train_loader(cls, cfg):
    method build_test_loader (line 405) | def build_test_loader(cls, cfg, dataset_name):
    method build_evaluator (line 415) | def build_evaluator(cls, cfg, dataset_name, output_dir=None):
    method test (line 420) | def test(cls, cfg, model):
    method auto_scale_hyperparams (line 460) | def auto_scale_hyperparams(cfg, num_classes):

FILE: fast_reid/fastreid/engine/hooks.py
  class CallbackHook (line 43) | class CallbackHook(HookBase):
    method __init__ (line 48) | def __init__(self, *, before_train=None, after_train=None, before_epoc...
    method before_train (line 60) | def before_train(self):
    method after_train (line 64) | def after_train(self):
    method before_epoch (line 72) | def before_epoch(self):
    method after_epoch (line 76) | def after_epoch(self):
    method before_step (line 80) | def before_step(self):
    method after_step (line 84) | def after_step(self):
  class IterationTimer (line 89) | class IterationTimer(HookBase):
    method __init__ (line 100) | def __init__(self, warmup_iter=3):
    method before_train (line 109) | def before_train(self):
    method after_train (line 114) | def after_train(self):
    method before_step (line 140) | def before_step(self):
    method after_step (line 144) | def after_step(self):
  class PeriodicWriter (line 157) | class PeriodicWriter(HookBase):
    method __init__ (line 163) | def __init__(self, writers, period=20):
    method after_step (line 174) | def after_step(self):
    method after_epoch (line 181) | def after_epoch(self):
    method after_train (line 185) | def after_train(self):
  class PeriodicCheckpointer (line 190) | class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase):
    method before_train (line 199) | def before_train(self):
    method after_epoch (line 206) | def after_epoch(self):
  class LRScheduler (line 215) | class LRScheduler(HookBase):
    method __init__ (line 221) | def __init__(self, optimizer, scheduler):
    method before_step (line 250) | def before_step(self):
    method after_step (line 254) | def after_step(self):
    method after_epoch (line 263) | def after_epoch(self):
  class AutogradProfiler (line 270) | class AutogradProfiler(HookBase):
    method __init__ (line 290) | def __init__(self, enable_predicate, output_dir, *, use_cuda=True):
    method before_step (line 303) | def before_step(self):
    method after_step (line 310) | def after_step(self):
  class EvalHook (line 330) | class EvalHook(HookBase):
    method __init__ (line 336) | def __init__(self, eval_period, eval_function):
    method _do_eval (line 350) | def _do_eval(self):
    method after_epoch (line 374) | def after_epoch(self):
    method after_train (line 379) | def after_train(self):
  class PreciseBN (line 389) | class PreciseBN(HookBase):
    method __init__ (line 398) | def __init__(self, model, data_loader, num_iter):
    method after_epoch (line 424) | def after_epoch(self):
    method update_stats (line 430) | def update_stats(self):
  class LayerFreeze (line 457) | class LayerFreeze(HookBase):
    method __init__ (line 458) | def __init__(self, model, freeze_layers, freeze_iters):
    method before_step (line 469) | def before_step(self):
    method freeze_specific_layer (line 478) | def freeze_specific_layer(self):
    method open_all_layer (line 492) | def open_all_layer(self):
  class SWA (line 503) | class SWA(HookBase):
    method __init__ (line 504) | def __init__(self, swa_start: int, swa_freq: int, swa_lr_factor: float...
    method before_step (line 511) | def before_step(self):
    method after_step (line 525) | def after_step(self):

FILE: fast_reid/fastreid/engine/launch.py
  function _find_free_port (line 22) | def _find_free_port():
  function launch (line 34) | def launch(main_func, num_gpus_per_machine, num_machines=1, machine_rank...
  function _distributed_worker (line 74) | def _distributed_worker(

FILE: fast_reid/fastreid/engine/train_loop.py
  class HookBase (line 25) | class HookBase:
    method before_train (line 56) | def before_train(self):
    method after_train (line 62) | def after_train(self):
    method before_epoch (line 68) | def before_epoch(self):
    method after_epoch (line 74) | def after_epoch(self):
    method before_step (line 80) | def before_step(self):
    method after_step (line 86) | def after_step(self):
  class TrainerBase (line 93) | class TrainerBase:
    method __init__ (line 108) | def __init__(self):
    method register_hooks (line 111) | def register_hooks(self, hooks):
    method train (line 128) | def train(self, start_epoch: int, max_epoch: int, iters_per_epoch: int):
    method before_train (line 158) | def before_train(self):
    method after_train (line 162) | def after_train(self):
    method before_epoch (line 167) | def before_epoch(self):
    method before_step (line 173) | def before_step(self):
    method after_step (line 179) | def after_step(self):
    method after_epoch (line 183) | def after_epoch(self):
    method run_step (line 187) | def run_step(self):
  class SimpleTrainer (line 191) | class SimpleTrainer(TrainerBase):
    method __init__ (line 204) | def __init__(self, model, data_loader, optimizer, param_wrapper):
    method run_step (line 228) | def run_step(self):
    method _write_metrics (line 265) | def _write_metrics(self, loss_dict: Dict[str, torch.Tensor], data_time...
  class AMPTrainer (line 307) | class AMPTrainer(SimpleTrainer):
    method __init__ (line 313) | def __init__(self, model, data_loader, optimizer, param_wrapper, grad_...
    method run_step (line 333) | def run_step(self):

FILE: fast_reid/fastreid/evaluation/clas_evaluator.py
  function accuracy (line 20) | def accuracy(output, target, topk=(1,)):
  class ClasEvaluator (line 37) | class ClasEvaluator(DatasetEvaluator):
    method __init__ (line 38) | def __init__(self, cfg, output_dir=None):
    method reset (line 45) | def reset(self):
    method process (line 48) | def process(self, inputs, outputs):
    method evaluate (line 58) | def evaluate(self):
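
accuracy(output, target, topk) computes top-k classification accuracy for the evaluator above. A sketch of the standard idiom a function with this signature usually implements, returning a percentage for each requested k.

import torch

def accuracy(output, target, topk=(1,)):
    # output: (N, C) logits; target: (N,) class indices.
    maxk = max(topk)
    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    pred = pred.t()                                   # (maxk, N)
    correct = pred.eq(target.view(1, -1).expand_as(pred))
    results = []
    for k in topk:
        correct_k = correct[:k].reshape(-1).float().sum()
        results.append(correct_k * 100.0 / target.size(0))
    return results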

FILE: fast_reid/fastreid/evaluation/evaluator.py
  class DatasetEvaluator (line 13) | class DatasetEvaluator:
    method reset (line 22) | def reset(self):
    method preprocess_inputs (line 29) | def preprocess_inputs(self, inputs):
    method process (line 32) | def process(self, inputs, outputs):
    method evaluate (line 41) | def evaluate(self):
  function inference_on_dataset (line 82) | def inference_on_dataset(model, data_loader, evaluator, flip_test=False):
  function inference_context (line 166) | def inference_context(model):

FILE: fast_reid/fastreid/evaluation/query_expansion.py
  function aqe (line 15) | def aqe(query_feat: torch.tensor, gallery_feat: torch.tensor,
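
aqe applies query expansion during retrieval evaluation: each query embedding is mixed with its most similar gallery embeddings to form a more robust query. A minimal sketch of average query expansion under the same signature idea; the fastreid implementation differs in its weighting and iteration details.

import torch
import torch.nn.functional as F

def aqe(query_feat, gallery_feat, qe_times=1, qe_k=5):
    # Assumes both feature matrices are already L2-normalised,
    # so the matrix product below is cosine similarity.
    for _ in range(qe_times):
        sims = query_feat @ gallery_feat.t()
        _, topk_idx = sims.topk(qe_k, dim=1)
        expanded = gallery_feat[topk_idx].mean(dim=1)      # (num_query, dim)
        query_feat = F.normalize((query_feat + expanded) / 2, dim=1)
    return query_feat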

FILE: fast_reid/fastreid/evaluation/rank.py
  function eval_cuhk03 (line 20) | def eval_cuhk03(distmat, q_pids, g_pids, q_camids, g_camids, max_rank):
  function eval_market1501 (line 99) | def eval_market1501(distmat, q_pids, g_pids, q_camids, g_camids, max_rank):
  function evaluate_py (line 162) | def evaluate_py(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, u...
  function evaluate_rank (line 169) | def evaluate_rank(

FILE: fast_reid/fastreid/evaluation/rank_cylib/__init__.py
  function compile_helper (line 8) | def compile_helper():

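compile_helper above presumably builds the rank_cy / roc_cy extensions on first use. A hedged guess at that build-on-demand pattern; the actual command the repo runs (a Makefile versus setup.py, for instance) is an assumption:

    import os
    import subprocess
    import sys

    def compile_helper_sketch():
        # Build the Cython extensions in place; warn and fall back on failure.
        here = os.path.dirname(os.path.abspath(__file__))
        ret = subprocess.call(
            [sys.executable, "setup.py", "build_ext", "--inplace"], cwd=here
        )
        if ret != 0:
            print("Cython build failed; the pure-Python ranking in rank.py will be used.")
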
FILE: fast_reid/fastreid/evaluation/rank_cylib/rank_cy.c
  function CYTHON_INLINE (line 404) | static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l,...
  type PyObject (line 489) | typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *co...
  type PyObject (line 490) | typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, ...
  type Py_tss_t (line 531) | typedef int Py_tss_t;
  function CYTHON_INLINE (line 532) | static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
  function CYTHON_INLINE (line 536) | static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
  function CYTHON_INLINE (line 541) | static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
  function CYTHON_INLINE (line 544) | static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
  function CYTHON_INLINE (line 547) | static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
  function CYTHON_INLINE (line 551) | static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
  function CYTHON_INLINE (line 554) | static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
  type Py_hash_t (line 699) | typedef long Py_hash_t;
  type __Pyx_PyAsyncMethodsStruct (line 722) | typedef struct {
  function CYTHON_INLINE (line 738) | static CYTHON_INLINE float __PYX_NAN() {
  type __Pyx_StringTabEntry (line 787) | typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const c...
  function CYTHON_INLINE (line 808) | static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t l...
  function CYTHON_INLINE (line 857) | static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
  function __Pyx_init_sys_getdefaultencoding_params (line 890) | static int __Pyx_init_sys_getdefaultencoding_params(void) {
  function __Pyx_init_sys_getdefaultencoding_params (line 940) | static int __Pyx_init_sys_getdefaultencoding_params(void) {
  function CYTHON_INLINE (line 972) | static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void...
  type __pyx_memoryview_obj (line 1016) | struct __pyx_memoryview_obj
  type __Pyx_memviewslice (line 1017) | typedef struct {
  type __pyx_atomic_int_type (line 1058) | typedef volatile __pyx_atomic_int_type __pyx_atomic_int;
  type __Pyx_StructField_ (line 1085) | struct __Pyx_StructField_
  type __Pyx_TypeInfo (line 1087) | typedef struct {
  type __Pyx_StructField (line 1097) | typedef struct __Pyx_StructField_ {
  type __Pyx_BufFmt_StackElem (line 1102) | typedef struct {
  type __Pyx_BufFmt_Context (line 1106) | typedef struct {
  type npy_int8 (line 1127) | typedef npy_int8 __pyx_t_5numpy_int8_t;
  type npy_int16 (line 1136) | typedef npy_int16 __pyx_t_5numpy_int16_t;
  type npy_int32 (line 1145) | typedef npy_int32 __pyx_t_5numpy_int32_t;
  type npy_int64 (line 1154) | typedef npy_int64 __pyx_t_5numpy_int64_t;
  type npy_uint8 (line 1163) | typedef npy_uint8 __pyx_t_5numpy_uint8_t;
  type npy_uint16 (line 1172) | typedef npy_uint16 __pyx_t_5numpy_uint16_t;
  type npy_uint32 (line 1181) | typedef npy_uint32 __pyx_t_5numpy_uint32_t;
  type npy_uint64 (line 1190) | typedef npy_uint64 __pyx_t_5numpy_uint64_t;
  type npy_float32 (line 1199) | typedef npy_float32 __pyx_t_5numpy_float32_t;
  type npy_float64 (line 1208) | typedef npy_float64 __pyx_t_5numpy_float64_t;
  type npy_long (line 1217) | typedef npy_long __pyx_t_5numpy_int_t;
  type npy_longlong (line 1226) | typedef npy_longlong __pyx_t_5numpy_long_t;
  type npy_longlong (line 1235) | typedef npy_longlong __pyx_t_5numpy_longlong_t;
  type npy_ulong (line 1244) | typedef npy_ulong __pyx_t_5numpy_uint_t;
  type npy_ulonglong (line 1253) | typedef npy_ulonglong __pyx_t_5numpy_ulong_t;
  type npy_ulonglong (line 1262) | typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;
  type npy_intp (line 1271) | typedef npy_intp __pyx_t_5numpy_intp_t;
  type npy_uintp (line 1280) | typedef npy_uintp __pyx_t_5numpy_uintp_t;
  type npy_double (line 1289) | typedef npy_double __pyx_t_5numpy_float_t;
  type npy_double (line 1298) | typedef npy_double __pyx_t_5numpy_double_t;
  type npy_longdouble (line 1307) | typedef npy_longdouble __pyx_t_5numpy_longdouble_t;
  type std (line 1311) | typedef ::std::complex< float > __pyx_t_float_complex;
  type __pyx_t_float_complex (line 1313) | typedef float _Complex __pyx_t_float_complex;
  type __pyx_t_float_complex (line 1316) | typedef struct { float real, imag; } __pyx_t_float_complex;
  type std (line 1323) | typedef ::std::complex< double > __pyx_t_double_complex;
  type __pyx_t_double_complex (line 1325) | typedef double _Complex __pyx_t_double_complex;
  type __pyx_t_double_complex (line 1328) | typedef struct { double real, imag; } __pyx_t_double_complex;
  type __pyx_array_obj (line 1334) | struct __pyx_array_obj
  type __pyx_MemviewEnum_obj (line 1335) | struct __pyx_MemviewEnum_obj
  type __pyx_memoryview_obj (line 1336) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 1337) | struct __pyx_memoryviewslice_obj
  type npy_cfloat (line 1346) | typedef npy_cfloat __pyx_t_5numpy_cfloat_t;
  type npy_cdouble (line 1355) | typedef npy_cdouble __pyx_t_5numpy_cdouble_t;
  type npy_clongdouble (line 1364) | typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;
  type npy_cdouble (line 1373) | typedef npy_cdouble __pyx_t_5numpy_complex_t;
  type __pyx_opt_args_7rank_cy_evaluate_cy (line 1374) | struct __pyx_opt_args_7rank_cy_evaluate_cy
  type __pyx_opt_args_7rank_cy_evaluate_cy (line 1383) | struct __pyx_opt_args_7rank_cy_evaluate_cy {
  type __pyx_array_obj (line 1395) | struct __pyx_array_obj {
  type __pyx_MemviewEnum_obj (line 1420) | struct __pyx_MemviewEnum_obj {
  type __pyx_memoryview_obj (line 1433) | struct __pyx_memoryview_obj {
  type __pyx_memoryviewslice_obj (line 1456) | struct __pyx_memoryviewslice_obj {
  type __pyx_vtabstruct_array (line 1474) | struct __pyx_vtabstruct_array {
  type __pyx_vtabstruct_array (line 1477) | struct __pyx_vtabstruct_array
  type __pyx_vtabstruct_memoryview (line 1488) | struct __pyx_vtabstruct_memoryview {
  type __pyx_vtabstruct_memoryview (line 1497) | struct __pyx_vtabstruct_memoryview
  type __pyx_vtabstruct__memoryviewslice (line 1508) | struct __pyx_vtabstruct__memoryviewslice {
  type __pyx_vtabstruct__memoryviewslice (line 1511) | struct __pyx_vtabstruct__memoryviewslice
  type __Pyx_RefNannyAPIStruct (line 1519) | typedef struct {
  type __pyx_memoryview_obj (line 1652) | struct __pyx_memoryview_obj
  function CYTHON_INLINE (line 1759) | static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {
  type __pyx_array_obj (line 1903) | struct __pyx_array_obj
  function CYTHON_INLINE (line 1908) | static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s...
  function CYTHON_INLINE (line 1912) | static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char ...
  function CYTHON_INLINE (line 1916) | static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char ...
  function CYTHON_INLINE (line 1963) | static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject*...
  function CYTHON_INLINE (line 1987) | static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) {
  function CYTHON_INLINE (line 2000) | static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyO...
  type __Pyx_ImportType_CheckSize (line 2037) | enum __Pyx_ImportType_CheckSize {
  type __Pyx_ImportType_CheckSize (line 2042) | enum __Pyx_ImportType_CheckSize
  type __Pyx_CodeObjectCacheEntry (line 2053) | typedef struct {
  type __Pyx_CodeObjectCache (line 2057) | struct __Pyx_CodeObjectCache {
  type __Pyx_CodeObjectCache (line 2062) | struct __Pyx_CodeObjectCache
  type __Pyx_Buf_DimInfo (line 2081) | typedef struct {
  type __Pyx_Buffer (line 2084) | typedef struct {
  type __Pyx_LocalBuf_ND (line 2088) | typedef struct {
  type __pyx_array_obj (line 2289) | struct __pyx_array_obj
  type __pyx_memoryview_obj (line 2290) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2291) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2292) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2293) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2294) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2295) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2296) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 2297) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryviewslice_obj (line 2298) | struct __pyx_memoryviewslice_obj
  type __pyx_opt_args_7rank_cy_evaluate_cy (line 2354) | struct __pyx_opt_args_7rank_cy_evaluate_cy
  type __pyx_array_obj (line 2358) | struct __pyx_array_obj
  type __pyx_memoryview_obj (line 2364) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2369) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2370) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2371) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2372) | struct __pyx_memoryview_obj
  type __pyx_MemviewEnum_obj (line 2390) | struct __pyx_MemviewEnum_obj
  type __pyx_array_obj (line 2644) | struct __pyx_array_obj
  type __pyx_array_obj (line 2645) | struct __pyx_array_obj
  type __pyx_array_obj (line 2646) | struct __pyx_array_obj
  type __pyx_array_obj (line 2647) | struct __pyx_array_obj
  type __pyx_array_obj (line 2648) | struct __pyx_array_obj
  type __pyx_array_obj (line 2649) | struct __pyx_array_obj
  type __pyx_array_obj (line 2650) | struct __pyx_array_obj
  type __pyx_array_obj (line 2651) | struct __pyx_array_obj
  type __pyx_MemviewEnum_obj (line 2654) | struct __pyx_MemviewEnum_obj
  type __pyx_MemviewEnum_obj (line 2655) | struct __pyx_MemviewEnum_obj
  type __pyx_MemviewEnum_obj (line 2656) | struct __pyx_MemviewEnum_obj
  type __pyx_MemviewEnum_obj (line 2657) | struct __pyx_MemviewEnum_obj
  type __pyx_memoryview_obj (line 2658) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2659) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2660) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2661) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2662) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2663) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2664) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2665) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2666) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2667) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2668) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2669) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2670) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2671) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2672) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2673) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2674) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2675) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2676) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2677) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2678) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 2681) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryviewslice_obj (line 2682) | struct __pyx_memoryviewslice_obj
  function PyObject (line 2735) | static PyObject *__pyx_f_7rank_cy_evaluate_cy(PyObject *__pyx_v_distmat,...
  function PyObject (line 3060) | static PyObject *__pyx_pw_7rank_cy_1evaluate_cy(PyObject *__pyx_self, Py...
  function PyObject (line 3181) | static PyObject *__pyx_pf_7rank_cy_evaluate_cy(CYTHON_UNUSED PyObject *_...
  function PyObject (line 3219) | static PyObject *__pyx_f_7rank_cy_eval_cuhk03_cy(__Pyx_memviewslice __py...
  function PyObject (line 4953) | static PyObject *__pyx_pw_7rank_cy_3eval_cuhk03_cy(PyObject *__pyx_self,...
  function PyObject (line 5059) | static PyObject *__pyx_pf_7rank_cy_2eval_cuhk03_cy(CYTHON_UNUSED PyObjec...
  function PyObject (line 5104) | static PyObject *__pyx_f_7rank_cy_eval_market1501_cy(__Pyx_memviewslice ...
  function PyObject (line 6421) | static PyObject *__pyx_pw_7rank_cy_5eval_market1501_cy(PyObject *__pyx_s...
  function PyObject (line 6527) | static PyObject *__pyx_pf_7rank_cy_4eval_market1501_cy(CYTHON_UNUSED PyO...
  function __pyx_fuse_3__pyx_f_7rank_cy_function_cumsum (line 6571) | static void __pyx_fuse_3__pyx_f_7rank_cy_function_cumsum(__Pyx_memviewsl...
  function CYTHON_INLINE (line 6635) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyOb...
  function CYTHON_INLINE (line 6685) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyOb...
  function CYTHON_INLINE (line 6735) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyOb...
  function CYTHON_INLINE (line 6785) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyOb...
  function CYTHON_INLINE (line 6835) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyOb...
  function CYTHON_INLINE (line 6885) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyDataType_SHAPE(PyArray_D...
  function CYTHON_INLINE (line 6959) | static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *_...
  function CYTHON_INLINE (line 7001) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObje...
  function CYTHON_INLINE (line 7082) | static CYTHON_INLINE int __pyx_f_5numpy_import_array(void) {
  function CYTHON_INLINE (line 7214) | static CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) {
  function CYTHON_INLINE (line 7346) | static CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) {
  function CYTHON_INLINE (line 7478) | static CYTHON_INLINE int __pyx_f_5numpy_is_timedelta64_object(PyObject *...
  function CYTHON_INLINE (line 7515) | static CYTHON_INLINE int __pyx_f_5numpy_is_datetime64_object(PyObject *_...
  function CYTHON_INLINE (line 7552) | static CYTHON_INLINE npy_datetime __pyx_f_5numpy_get_datetime64_value(Py...
  function CYTHON_INLINE (line 7586) | static CYTHON_INLINE npy_timedelta __pyx_f_5numpy_get_timedelta64_value(...
  function CYTHON_INLINE (line 7620) | static CYTHON_INLINE NPY_DATETIMEUNIT __pyx_f_5numpy_get_datetime64_unit...
  function __pyx_array___cinit__ (line 7654) | static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx...
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__ (line 7782) | static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(s...
  function CYTHON_UNUSED (line 8405) | static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, P...
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__ (line 8416) | static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffe...
  function __pyx_array___dealloc__ (line 8712) | static void __pyx_array___dealloc__(PyObject *__pyx_v_self) {
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__ (line 8721) | static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc...
  function PyObject (line 8843) | static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__...
  function PyObject (line 8854) | static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(...
  function PyObject (line 8904) | static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v...
  function Py_ssize_t (line 8986) | static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) {
  function Py_ssize_t (line 8997) | static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__l...
  function PyObject (line 9036) | static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObjec...
  function PyObject (line 9047) | static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__ge...
  function PyObject (line 9104) | static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObjec...
  function PyObject (line 9115) | static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__g...
  function __pyx_array___setitem__ (line 9172) | static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__p...
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__ (line 9183) | static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem...
  function PyObject (line 9232) | static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx...
  function PyObject (line 9243) | static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED st...
  function PyObject (line 9289) | static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__p...
  function PyObject (line 9300) | static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED...
  type __pyx_array_obj (line 9345) | struct __pyx_array_obj
  type __pyx_array_obj (line 9346) | struct __pyx_array_obj
  type __pyx_array_obj (line 9347) | struct __pyx_array_obj
  type __pyx_array_obj (line 9399) | struct __pyx_array_obj
  type __pyx_array_obj (line 9463) | struct __pyx_array_obj
  function __pyx_MemviewEnum___init__ (line 9522) | static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *...
  function __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__ (line 9573) | static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init...
  function PyObject (line 9615) | static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) {
  function PyObject (line 9626) | static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_...
  function PyObject (line 9666) | static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject ...
  function PyObject (line 9677) | static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __p...
  function PyObject (line 9901) | static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObjec...
  function PyObject (line 9912) | static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct ...
  function __pyx_memoryview___cinit__ (line 10044) | static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *...
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__ (line 10124) | static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_...
  function __pyx_memoryview___dealloc__ (line 10442) | static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) {
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__ (line 10451) | static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview...
  type __pyx_memoryview_obj (line 10671) | struct __pyx_memoryview_obj
  function PyObject (line 10811) | static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, Py...
  function PyObject (line 10822) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function __pyx_memoryview___setitem__ (line 11000) | static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject...
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__ (line 11011) | static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_...
  function PyObject (line 11226) | static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *...
  function PyObject (line 11436) | static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_...
  function PyObject (line 11526) | static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __p...
  function PyObject (line 11816) | static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryvie...
  function PyObject (line 11877) | static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_me...
  function PyObject (line 12154) | static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_m...
  function CYTHON_UNUSED (line 12395) | static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_se...
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__ (line 12406) | static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_...
  function PyObject (line 12739) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__...
  function PyObject (line 12750) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(...
  function PyObject (line 12825) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__ge...
  function PyObject (line 12836) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get...
  function PyObject (line 12878) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__g...
  function PyObject (line 12889) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___ge...
  function PyObject (line 12959) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1_...
  function PyObject (line 12970) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___...
  function PyObject (line 13073) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffset...
  function PyObject (line 13084) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffset...
  function PyObject (line 13191) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__ge...
  function PyObject (line 13202) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get...
  function PyObject (line 13254) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1...
  function PyObject (line 13265) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize__...
  function PyObject (line 13317) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__...
  function PyObject (line 13328) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___g...
  function PyObject (line 13390) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__ge...
  function PyObject (line 13401) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get...
  function Py_ssize_t (line 13531) | static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) {
  function Py_ssize_t (line 13542) | static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memo...
  function PyObject (line 13611) | static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) {
  function PyObject (line 13622) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 13713) | static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) {
  function PyObject (line 13724) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 13792) | static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CY...
  function PyObject (line 13803) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 13868) | static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CY...
  function PyObject (line 13879) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 13944) | static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UN...
  function PyObject (line 13955) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 14038) | static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, C...
  function PyObject (line 14049) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 14131) | static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *...
  function PyObject (line 14142) | static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUS...
  function PyObject (line 14188) | static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject...
  function PyObject (line 14199) | static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_U...
  function PyObject (line 14244) | static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_f...
  function CYTHON_INLINE (line 14335) | static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) {
  function PyObject (line 14374) | static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) {
  function PyObject (line 14831) | static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets...
  type __pyx_memoryview_obj (line 14919) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 14926) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryview_obj (line 14936) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 14941) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 15011) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryview_obj (line 15423) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 15464) | struct __pyx_memoryview_obj
  function __pyx_memoryview_slice_memviewslice (line 15499) | static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx...
  function __pyx_memslice_transpose (line 16592) | static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) {
  function __pyx_memoryviewslice___dealloc__ (line 16768) | static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) {
  function __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__ (line 16777) | static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memo...
  function PyObject (line 16810) | static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __p...
  function PyObject (line 16896) | static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __...
  function PyObject (line 16981) | static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base...
  function PyObject (line 16992) | static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base...
  function PyObject (line 17032) | static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObj...
  function PyObject (line 17043) | static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON...
  function PyObject (line 17089) | static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyO...
  function PyObject (line 17100) | static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYT...
  function PyObject (line 17145) | static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_m...
  function __Pyx_memviewslice (line 17531) | static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(st...
  function __pyx_memoryview_slice_copy (line 17634) | static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__p...
  function PyObject (line 17760) | static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_ob...
  function PyObject (line 17820) | static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_me...
  function Py_ssize_t (line 17946) | static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) {
  function __pyx_get_best_slice_order (line 18012) | static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslic...
  function _copy_strided_to_strided (line 18202) | static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t ...
  function copy_strided_to_strided (line 18439) | static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __P...
  function Py_ssize_t (line 18469) | static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__...
  function Py_ssize_t (line 18541) | static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_sh...
  type __pyx_memoryview_obj (line 18672) | struct __pyx_memoryview_obj
  function __pyx_memoryview_err_extents (line 18918) | static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_...
  function __pyx_memoryview_err_dim (line 19006) | static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx...
  function __pyx_memoryview_err (line 19090) | static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_m...
  function __pyx_memoryview_copy_contents (line 19200) | static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src...
  function __pyx_memoryview_broadcast_leading (line 19779) | static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx...
  function __pyx_memoryview_refcount_copying (line 19892) | static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_...
  function __pyx_memoryview_refcount_objects_in_slice_with_gil (line 19942) | static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__...
  function __pyx_memoryview_refcount_objects_in_slice (line 19981) | static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_dat...
  function __pyx_memoryview_slice_assign_scalar (line 20113) | static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__p...
  function __pyx_memoryview__slice_assign_scalar (line 20161) | static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py...
  function PyObject (line 20293) | static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyO...
  function PyObject (line 20366) | static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTH...
  function PyObject (line 20561) | static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum...
  type __pyx_vtabstruct_array (line 20684) | struct __pyx_vtabstruct_array
  function PyObject (line 20686) | static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObje...
  function __pyx_tp_dealloc_array (line 20706) | static void __pyx_tp_dealloc_array(PyObject *o) {
  function PyObject (line 20725) | static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) {
  function __pyx_mp_ass_subscript_array (line 20733) | static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObje...
  function PyObject (line 20744) | static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) {
  function PyObject (line 20753) | static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_U...
  type PyGetSetDef (line 20764) | struct PyGetSetDef
  type __pyx_array_obj (line 20808) | struct __pyx_array_obj
  function PyObject (line 20877) | static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObje...
  function __pyx_tp_dealloc_Enum (line 20891) | static void __pyx_tp_dealloc_Enum(PyObject *o) {
  function __pyx_tp_traverse_Enum (line 20903) | static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) {
  function __pyx_tp_clear_Enum (line 20912) | static int __pyx_tp_clear_Enum(PyObject *o) {
  type __pyx_MemviewEnum_obj (line 20930) | struct __pyx_MemviewEnum_obj
  type __pyx_vtabstruct_memoryview (line 20998) | struct __pyx_vtabstruct_memoryview
  function PyObject (line 21000) | static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, P...
  function __pyx_tp_dealloc_memoryview (line 21022) | static void __pyx_tp_dealloc_memoryview(PyObject *o) {
  function __pyx_tp_traverse_memoryview (line 21044) | static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *...
  function __pyx_tp_clear_memoryview (line 21062) | static int __pyx_tp_clear_memoryview(PyObject *o) {
  function PyObject (line 21077) | static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) {
  function __pyx_mp_ass_subscript_memoryview (line 21085) | static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, P...
  function PyObject (line 21096) | static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UN...
  function PyObject (line 21100) | static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON...
  function PyObject (line 21104) | static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHO...
  function PyObject (line 21108) | static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYT...
  function PyObject (line 21112) | static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, ...
  function PyObject (line 21116) | static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON...
  function PyObject (line 21120) | static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CY...
  function PyObject (line 21124) | static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTH...
  function PyObject (line 21128) | static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON...
  type PyGetSetDef (line 21142) | struct PyGetSetDef
  type __pyx_memoryview_obj (line 21194) | struct __pyx_memoryview_obj
  type __pyx_vtabstruct__memoryviewslice (line 21262) | struct __pyx_vtabstruct__memoryviewslice
  function PyObject (line 21264) | static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject...
  function __pyx_tp_dealloc__memoryviewslice (line 21275) | static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) {
  function __pyx_tp_traverse__memoryviewslice (line 21296) | static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, ...
  function __pyx_tp_clear__memoryviewslice (line 21306) | static int __pyx_tp_clear__memoryviewslice(PyObject *o) {
  function PyObject (line 21317) | static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, C...
  type PyGetSetDef (line 21327) | struct PyGetSetDef
  type __pyx_memoryviewslice_obj (line 21335) | struct __pyx_memoryviewslice_obj
  type PyModuleDef (line 21430) | struct PyModuleDef
  function CYTHON_SMALL_CODE (line 21581) | static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {
  function CYTHON_SMALL_CODE (line 21596) | static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {
  function CYTHON_SMALL_CODE (line 21888) | static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {
  function __Pyx_modinit_global_init_code (line 21909) | static int __Pyx_modinit_global_init_code(void) {
  function __Pyx_modinit_variable_export_code (line 21922) | static int __Pyx_modinit_variable_export_code(void) {
  function __Pyx_modinit_function_export_code (line 21930) | static int __Pyx_modinit_function_export_code(void) {
  function __Pyx_modinit_type_init_code (line 21938) | static int __Pyx_modinit_type_init_code(void) {
  function __Pyx_modinit_type_import_code (line 22003) | static int __Pyx_modinit_type_import_code(void) {
  function __Pyx_modinit_variable_import_code (line 22063) | static int __Pyx_modinit_variable_import_code(void) {
  function __Pyx_modinit_function_import_code (line 22071) | static int __Pyx_modinit_function_import_code(void) {
  function __Pyx_PyMODINIT_FUNC (line 22102) | __Pyx_PyMODINIT_FUNC PyInit_rank_cy(void)
  function CYTHON_SMALL_CODE (line 22107) | static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {
  function CYTHON_SMALL_CODE (line 22130) | static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, P...
  function CYTHON_SMALL_CODE (line 22145) | static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CY...
  function __Pyx_RefNannyAPIStruct (line 22505) | static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modn...
  function CYTHON_INLINE (line 22522) | static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, ...
  function PyObject (line 22535) | static PyObject *__Pyx_GetBuiltinName(PyObject *name) {
  function CYTHON_INLINE (line 22550) | static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {
  function CYTHON_INLINE (line 22554) | static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject ...
  function CYTHON_INLINE (line 22566) | static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj...
  function CYTHON_INLINE (line 22578) | static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)
  function CYTHON_INLINE (line 22611) | static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObj...
  function __Pyx_init_memviewslice (line 22630) | static int
  function __pyx_fatalerror (line 22682) | static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN {
  function CYTHON_INLINE (line 22694) | static CYTHON_INLINE int
  function CYTHON_INLINE (line 22704) | static CYTHON_INLINE int
  function CYTHON_INLINE (line 22714) | static CYTHON_INLINE void
  function CYTHON_INLINE (line 22735) | static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice,
  function __Pyx_RaiseArgtupleInvalid (line 22762) | static void __Pyx_RaiseArgtupleInvalid(
  function __Pyx_RaiseDoubleKeywordsError (line 22788) | static void __Pyx_RaiseDoubleKeywordsError(
  function __Pyx_ParseOptionalKeywords (line 22802) | static int __Pyx_ParseOptionalKeywords(
  function CYTHON_INLINE (line 22905) | static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *fun...
  function PyObject (line 22928) | static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObjec...
  function CYTHON_UNUSED (line 23046) | static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* functi...
  function CYTHON_INLINE (line 23076) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, ...
  function PyObject (line 23096) | static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *ar...
  function CYTHON_INLINE (line 23106) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func,...
  function CYTHON_INLINE (line 23124) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func,...
  function PyObject (line 23135) | static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {
  function CYTHON_INLINE (line 23142) | static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, P...
  function CYTHON_INLINE (line 23160) | static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, ...
  function CYTHON_INLINE (line 23178) | static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssi...
  function PyObject (line 23223) | static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) {
  function PyObject (line 23241) | static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) {
  function __Pyx_PyObject_GetMethod (line 23251) | static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObj...
  function PyObject (line 23347) | static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject*...
  function PyObject (line 23352) | static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* met...
  function CYTHON_INLINE (line 23365) | static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x) {
  function CYTHON_INLINE (line 23379) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) {
  function CYTHON_INLINE (line 23400) | static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expec...
  function CYTHON_INLINE (line 23406) | static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t inde...
  function CYTHON_INLINE (line 23413) | static CYTHON_INLINE int __Pyx_IterFinish(void) {
  function __Pyx_IternextUnpackEndCheck (line 23448) | static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t exp...
  function CYTHON_INLINE (line 23460) | static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varna...
  function _PyErr_StackItem (line 23466) | static _PyErr_StackItem *
  function CYTHON_INLINE (line 23481) | static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, Py...
  function CYTHON_INLINE (line 23496) | static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, P...
  function __Pyx_PyErr_ExceptionMatchesTuple (line 23522) | static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObjec...
  function CYTHON_INLINE (line 23535) | static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadSta...
  function __Pyx_GetException (line 23549) | static int __Pyx_GetException(PyObject **type, PyObject **value, PyObjec...
  function CYTHON_INLINE (line 23621) | static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate,...
  function CYTHON_INLINE (line 23633) | static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, P...
  function __Pyx_Raise (line 23645) | static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,
  function __Pyx_Raise (line 23696) | static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, P...
  function __Pyx__ArgTypeTest (line 23803) | static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const c...
  function CYTHON_INLINE (line 23824) | static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2...
  function CYTHON_INLINE (line 23871) | static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* ...
  function CYTHON_INLINE (line 23973) | static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) {
  function CYTHON_INLINE (line 23986) | static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
  function PyObject (line 24019) | static PyObject *__Pyx_GetAttr3Default(PyObject *d) {
  function CYTHON_INLINE (line 24028) | static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, ...
  function CYTHON_INLINE (line 24034) | static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {
  function CYTHON_INLINE (line 24039) | static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *typ...
  function CYTHON_INLINE (line 24053) | static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, Py...
  function CYTHON_INLINE (line 24076) | static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject ...
  function PyObject (line 24087) | static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int l...
  function __Pyx_InBases (line 24153) | static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {
  function CYTHON_INLINE (line 24161) | static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *...
  function __Pyx_inner_PyErr_GivenExceptionMatches2 (line 24177) | static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObj...
  function CYTHON_INLINE (line 24199) | static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObje...
  function __Pyx_PyErr_GivenExceptionMatchesTuple (line 24207) | static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, Py...
  function CYTHON_INLINE (line 24228) | static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err...
  function CYTHON_INLINE (line 24240) | static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *er...
  function PyObject (line 24253) | static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHO...
  function PyObject (line 24376) | static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) {
  function CYTHON_INLINE (line 24390) | static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) {
  function PyObject (line 24409) | static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, P...
  function CYTHON_INLINE (line 24420) | static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObj...
  function PyObject (line 24449) | static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* ...
  function __Pyx_SetVtable (line 24458) | static int __Pyx_SetVtable(PyObject *dict, void *vtable) {
  function __Pyx_PyObject_GetAttrStr_ClearAttributeError (line 24476) | static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) {
  function CYTHON_INLINE (line 24482) | static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject...
  function __Pyx_setup_reduce_is_named (line 24498) | static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) {
  function __Pyx_setup_reduce (line 24514) | static int __Pyx_setup_reduce(PyObject* type_obj) {
  function PyTypeObject (line 24604) | static PyTypeObject *__Pyx_ImportType(PyObject *module, const char *modu...
  function __Pyx_CLineForTraceback (line 24664) | static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tsta...
  function __pyx_bisect_code_objects (line 24705) | static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries...
  function PyCodeObject (line 24726) | static PyCodeObject *__pyx_find_code_object(int code_line) {
  function __pyx_insert_code_object (line 24740) | static void __pyx_insert_code_object(int code_line, PyCodeObject* code_o...
  function PyCodeObject (line 24794) | static PyCodeObject* __Pyx_CreateCodeObjectForTraceback(
  function __Pyx_AddTraceback (line 24852) | static void __Pyx_AddTraceback(const char *funcname, int c_line,
  function __Pyx_GetBuffer (line 24892) | static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {
  function __Pyx_ReleaseBuffer (line 24899) | static void __Pyx_ReleaseBuffer(Py_buffer *view) {
  function __pyx_memviewslice_is_contig (line 24914) | static int
  function __pyx_get_array_memory_extents (line 24936) | static void
  function __pyx_slices_overlap (line 24960) | static int
  function CYTHON_INLINE (line 24972) | static CYTHON_INLINE PyObject *
  function CYTHON_INLINE (line 24985) | static CYTHON_INLINE int __Pyx_Is_Little_Endian(void)
  function __Pyx_BufFmt_Init (line 24996) | static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
  function __Pyx_BufFmt_ParseNumber (line 25023) | static int __Pyx_BufFmt_ParseNumber(const char** ts) {
  function __Pyx_BufFmt_ExpectNumber (line 25038) | static int __Pyx_BufFmt_ExpectNumber(const char **ts) {
  function __Pyx_BufFmt_RaiseUnexpectedChar (line 25045) | static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {
  function __Pyx_BufFmt_TypeCharToStandardSize (line 25074) | static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_comple...
  function __Pyx_BufFmt_TypeCharToNativeSize (line 25092) | static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {
  type __Pyx_st_short (line 25111) | typedef struct { char c; short x; } __Pyx_st_short;
  type __Pyx_st_int (line 25112) | typedef struct { char c; int x; } __Pyx_st_int;
  type __Pyx_st_long (line 25113) | typedef struct { char c; long x; } __Pyx_st_long;
  type __Pyx_st_float (line 25114) | typedef struct { char c; float x; } __Pyx_st_float;
  type __Pyx_st_double (line 25115) | typedef struct { char c; double x; } __Pyx_st_double;
  type __Pyx_st_longdouble (line 25116) | typedef struct { char c; long double x; } __Pyx_st_longdouble;
  type __Pyx_st_void_p (line 25117) | typedef struct { char c; void *x; } __Pyx_st_void_p;
  type __Pyx_st_longlong (line 25119) | typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;
  function __Pyx_BufFmt_TypeCharToAlignment (line 25121) | static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED in...
  type __Pyx_pad_short (line 25143) | typedef struct { short x; char c; } __Pyx_pad_short;
  type __Pyx_pad_int (line 25144) | typedef struct { int x; char c; } __Pyx_pad_int;
  type __Pyx_pad_long (line 25145) | typedef struct { long x; char c; } __Pyx_pad_long;
  type __Pyx_pad_float (line 25146) | typedef struct { float x; char c; } __Pyx_pad_float;
  type __Pyx_pad_double (line 25147) | typedef struct { double x; char c; } __Pyx_pad_double;
  type __Pyx_pad_longdouble (line 25148) | typedef struct { long double x; char c; } __Pyx_pad_longdouble;
  type __Pyx_pad_void_p (line 25149) | typedef struct { void *x; char c; } __Pyx_pad_void_p;
  type __Pyx_pad_longlong (line 25151) | typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;
  function __Pyx_BufFmt_TypeCharToPadding (line 25153) | static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int ...
  function __Pyx_BufFmt_TypeCharToGroup (line 25171) | static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {
  function __Pyx_BufFmt_RaiseExpected (line 25192) | static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {
  function __Pyx_BufFmt_ProcessTypeChunk (line 25216) | static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {
  function PyObject (line 25318) | static PyObject *
  function __pyx_typeinfo_cmp (line 25498) | static int
  function __pyx_check_strides (line 25539) | static int
  function __pyx_check_suboffsets (line 25592) | static int
  function __pyx_verify_contig (line 25615) | static int
  function __Pyx_ValidateAndInit_memviewslice (line 25644) | static int __Pyx_ValidateAndInit_memviewslice(
  function CYTHON_INLINE (line 25720) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function CYTHON_INLINE (line 25743) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function CYTHON_INLINE (line 25790) | static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_pa...
  function CYTHON_INLINE (line 25794) | static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_pa...
  function CYTHON_INLINE (line 25799) | static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_pa...
  function CYTHON_INLINE (line 25810) | static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx...
  function CYTHON_INLINE (line 25813) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_flo...
  function CYTHON_INLINE (line 25819) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_fl...
  function CYTHON_INLINE (line 25825) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_fl...
  function CYTHON_INLINE (line 25832) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_fl...
  function CYTHON_INLINE (line 25852) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_fl...
  function CYTHON_INLINE (line 25863) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_flo...
  function CYTHON_INLINE (line 25869) | static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) {
  function CYTHON_INLINE (line 25872) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_fl...
  function CYTHON_INLINE (line 25879) | static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) {
  function CYTHON_INLINE (line 25886) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_flo...
  function CYTHON_INLINE (line 25944) | static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_...
  function CYTHON_INLINE (line 25948) | static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_...
  function CYTHON_INLINE (line 25953) | static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_...
  function CYTHON_INLINE (line 25964) | static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __p...
  function CYTHON_INLINE (line 25967) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_d...
  function CYTHON_INLINE (line 25973) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_...
  function CYTHON_INLINE (line 25979) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_...
  function CYTHON_INLINE (line 25986) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_...
  function CYTHON_INLINE (line 26006) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_...
  function CYTHON_INLINE (line 26017) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_d...
  function CYTHON_INLINE (line 26023) | static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) {
  function CYTHON_INLINE (line 26026) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_...
  function CYTHON_INLINE (line 26033) | static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) {
  function CYTHON_INLINE (line 26040) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_d...
  function PyObject (line 26097) | static PyObject *__Pyx_GetStdout(void) {
  function __Pyx_Print (line 26104) | static int __Pyx_Print(PyObject* f, PyObject *arg_tuple, int newline) {
  function __Pyx_Print (line 26146) | static int __Pyx_Print(PyObject* stream, PyObject *arg_tuple, int newlin...
  function CYTHON_INLINE (line 26202) | static CYTHON_INLINE PyObject *__pyx_memview_get_float(const char *itemp) {
  function CYTHON_INLINE (line 26205) | static CYTHON_INLINE int __pyx_memview_set_float(const char *itemp, PyOb...
  function CYTHON_INLINE (line 26214) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function CYTHON_INLINE (line 26237) | static CYTHON_INLINE PyObject *__pyx_memview_get_long(const char *itemp) {
  function CYTHON_INLINE (line 26240) | static CYTHON_INLINE int __pyx_memview_set_long(const char *itemp, PyObj...
  function CYTHON_INLINE (line 26249) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function __Pyx_memviewslice (line 26272) | static __Pyx_memviewslice
  function CYTHON_INLINE (line 26535) | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {
  function __Pyx_PrintOne (line 26574) | static int __Pyx_PrintOne(PyObject* f, PyObject *o) {
  function __Pyx_PrintOne (line 26598) | static int __Pyx_PrintOne(PyObject* stream, PyObject *o) {
  function CYTHON_INLINE (line 26806) | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {
  function __Pyx_check_binary_version (line 27040) | static int __Pyx_check_binary_version(void) {
  function __Pyx_InitStrings (line 27078) | static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {
  function CYTHON_INLINE (line 27110) | static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_...
  function CYTHON_INLINE (line 27113) | static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {
  function CYTHON_INLINE (line 27140) | static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObjec...
  function CYTHON_INLINE (line 27182) | static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
  function CYTHON_INLINE (line 27187) | static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {
  function PyObject (line 27194) | static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* resul...
  function CYTHON_INLINE (line 27263) | static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {
  function CYTHON_INLINE (line 27325) | static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) {
  function CYTHON_INLINE (line 27342) | static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {
  function CYTHON_INLINE (line 27345) | static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {

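Almost everything in the rank_cy.c listing above is Cython-generated boilerplate (memoryview machinery, refcount helpers, module init); the Python-visible entry points are evaluate_cy, eval_cuhk03_cy and eval_market1501_cy. A common import pattern for such a compiled module is sketched below; the module paths are assumptions that depend on how this directory sits on sys.path:

    try:
        from rank_cy import evaluate_cy               # compiled extension, if built
        IS_CYTHON_AVAILABLE = True
    except ImportError:
        from rank import evaluate_py as evaluate_cy   # pure-Python fallback in rank.py
        IS_CYTHON_AVAILABLE = False

    print("using Cython ranking:", IS_CYTHON_AVAILABLE)
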
FILE: fast_reid/fastreid/evaluation/rank_cylib/roc_cy.c
  function CYTHON_INLINE (line 404) | static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l,...
  type PyObject (line 489) | typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *co...
  type PyObject (line 490) | typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, ...
  type Py_tss_t (line 531) | typedef int Py_tss_t;
  function CYTHON_INLINE (line 532) | static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
  function CYTHON_INLINE (line 536) | static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
  function CYTHON_INLINE (line 541) | static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
  function CYTHON_INLINE (line 544) | static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
  function CYTHON_INLINE (line 547) | static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
  function CYTHON_INLINE (line 551) | static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
  function CYTHON_INLINE (line 554) | static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
  type Py_hash_t (line 699) | typedef long Py_hash_t;
  type __Pyx_PyAsyncMethodsStruct (line 722) | typedef struct {
  function CYTHON_INLINE (line 738) | static CYTHON_INLINE float __PYX_NAN() {
  type __Pyx_StringTabEntry (line 787) | typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const c...
  function CYTHON_INLINE (line 808) | static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t l...
  function CYTHON_INLINE (line 857) | static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
  function __Pyx_init_sys_getdefaultencoding_params (line 890) | static int __Pyx_init_sys_getdefaultencoding_params(void) {
  function __Pyx_init_sys_getdefaultencoding_params (line 940) | static int __Pyx_init_sys_getdefaultencoding_params(void) {
  function CYTHON_INLINE (line 972) | static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void...
  type __pyx_memoryview_obj (line 1016) | struct __pyx_memoryview_obj
  type __Pyx_memviewslice (line 1017) | typedef struct {
  type __pyx_atomic_int_type (line 1058) | typedef volatile __pyx_atomic_int_type __pyx_atomic_int;
  type __Pyx_StructField_ (line 1085) | struct __Pyx_StructField_
  type __Pyx_TypeInfo (line 1087) | typedef struct {
  type __Pyx_StructField (line 1097) | typedef struct __Pyx_StructField_ {
  type __Pyx_BufFmt_StackElem (line 1102) | typedef struct {
  type __Pyx_BufFmt_Context (line 1106) | typedef struct {
  type npy_int8 (line 1127) | typedef npy_int8 __pyx_t_5numpy_int8_t;
  type npy_int16 (line 1136) | typedef npy_int16 __pyx_t_5numpy_int16_t;
  type npy_int32 (line 1145) | typedef npy_int32 __pyx_t_5numpy_int32_t;
  type npy_int64 (line 1154) | typedef npy_int64 __pyx_t_5numpy_int64_t;
  type npy_uint8 (line 1163) | typedef npy_uint8 __pyx_t_5numpy_uint8_t;
  type npy_uint16 (line 1172) | typedef npy_uint16 __pyx_t_5numpy_uint16_t;
  type npy_uint32 (line 1181) | typedef npy_uint32 __pyx_t_5numpy_uint32_t;
  type npy_uint64 (line 1190) | typedef npy_uint64 __pyx_t_5numpy_uint64_t;
  type npy_float32 (line 1199) | typedef npy_float32 __pyx_t_5numpy_float32_t;
  type npy_float64 (line 1208) | typedef npy_float64 __pyx_t_5numpy_float64_t;
  type npy_long (line 1217) | typedef npy_long __pyx_t_5numpy_int_t;
  type npy_longlong (line 1226) | typedef npy_longlong __pyx_t_5numpy_long_t;
  type npy_longlong (line 1235) | typedef npy_longlong __pyx_t_5numpy_longlong_t;
  type npy_ulong (line 1244) | typedef npy_ulong __pyx_t_5numpy_uint_t;
  type npy_ulonglong (line 1253) | typedef npy_ulonglong __pyx_t_5numpy_ulong_t;
  type npy_ulonglong (line 1262) | typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;
  type npy_intp (line 1271) | typedef npy_intp __pyx_t_5numpy_intp_t;
  type npy_uintp (line 1280) | typedef npy_uintp __pyx_t_5numpy_uintp_t;
  type npy_double (line 1289) | typedef npy_double __pyx_t_5numpy_float_t;
  type npy_double (line 1298) | typedef npy_double __pyx_t_5numpy_double_t;
  type npy_longdouble (line 1307) | typedef npy_longdouble __pyx_t_5numpy_longdouble_t;
  type std (line 1311) | typedef ::std::complex< float > __pyx_t_float_complex;
  type __pyx_t_float_complex (line 1313) | typedef float _Complex __pyx_t_float_complex;
  type __pyx_t_float_complex (line 1316) | typedef struct { float real, imag; } __pyx_t_float_complex;
  type std (line 1323) | typedef ::std::complex< double > __pyx_t_double_complex;
  type __pyx_t_double_complex (line 1325) | typedef double _Complex __pyx_t_double_complex;
  type __pyx_t_double_complex (line 1328) | typedef struct { double real, imag; } __pyx_t_double_complex;
  type __pyx_array_obj (line 1334) | struct __pyx_array_obj
  type __pyx_MemviewEnum_obj (line 1335) | struct __pyx_MemviewEnum_obj
  type __pyx_memoryview_obj (line 1336) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 1337) | struct __pyx_memoryviewslice_obj
  type npy_cfloat (line 1346) | typedef npy_cfloat __pyx_t_5numpy_cfloat_t;
  type npy_cdouble (line 1355) | typedef npy_cdouble __pyx_t_5numpy_cdouble_t;
  type npy_clongdouble (line 1364) | typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;
  type npy_cdouble (line 1373) | typedef npy_cdouble __pyx_t_5numpy_complex_t;
  type __pyx_array_obj (line 1382) | struct __pyx_array_obj {
  type __pyx_MemviewEnum_obj (line 1407) | struct __pyx_MemviewEnum_obj {
  type __pyx_memoryview_obj (line 1420) | struct __pyx_memoryview_obj {
  type __pyx_memoryviewslice_obj (line 1443) | struct __pyx_memoryviewslice_obj {
  type __pyx_vtabstruct_array (line 1461) | struct __pyx_vtabstruct_array {
  type __pyx_vtabstruct_array (line 1464) | struct __pyx_vtabstruct_array
  type __pyx_vtabstruct_memoryview (line 1475) | struct __pyx_vtabstruct_memoryview {
  type __pyx_vtabstruct_memoryview (line 1484) | struct __pyx_vtabstruct_memoryview
  type __pyx_vtabstruct__memoryviewslice (line 1495) | struct __pyx_vtabstruct__memoryviewslice {
  type __pyx_vtabstruct__memoryviewslice (line 1498) | struct __pyx_vtabstruct__memoryviewslice
  type __Pyx_RefNannyAPIStruct (line 1506) | typedef struct {
  type __pyx_memoryview_obj (line 1585) | struct __pyx_memoryview_obj
  type __pyx_array_obj (line 1853) | struct __pyx_array_obj
  function CYTHON_INLINE (line 1858) | static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s...
  function CYTHON_INLINE (line 1862) | static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char ...
  function CYTHON_INLINE (line 1866) | static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char ...
  function CYTHON_INLINE (line 1919) | static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject*...
  function CYTHON_INLINE (line 1935) | static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) {
  function CYTHON_INLINE (line 1949) | static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {
  function CYTHON_INLINE (line 1965) | static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyO...
  type __Pyx_ImportType_CheckSize (line 2002) | enum __Pyx_ImportType_CheckSize {
  type __Pyx_ImportType_CheckSize (line 2007) | enum __Pyx_ImportType_CheckSize
  type __Pyx_CodeObjectCacheEntry (line 2018) | typedef struct {
  type __Pyx_CodeObjectCache (line 2022) | struct __Pyx_CodeObjectCache {
  type __Pyx_CodeObjectCache (line 2027) | struct __Pyx_CodeObjectCache
  type __Pyx_Buf_DimInfo (line 2046) | typedef struct {
  type __Pyx_Buffer (line 2049) | typedef struct {
  type __Pyx_LocalBuf_ND (line 2053) | typedef struct {
  type __pyx_array_obj (line 2244) | struct __pyx_array_obj
  type __pyx_memoryview_obj (line 2245) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2246) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2247) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2248) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2249) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2250) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2251) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 2252) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryviewslice_obj (line 2253) | struct __pyx_memoryviewslice_obj
  type __pyx_array_obj (line 2310) | struct __pyx_array_obj
  type __pyx_memoryview_obj (line 2316) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2321) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2322) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2323) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2324) | struct __pyx_memoryview_obj
  type __pyx_MemviewEnum_obj (line 2342) | struct __pyx_MemviewEnum_obj
  type __pyx_array_obj (line 2574) | struct __pyx_array_obj
  type __pyx_array_obj (line 2575) | struct __pyx_array_obj
  type __pyx_array_obj (line 2576) | struct __pyx_array_obj
  type __pyx_array_obj (line 2577) | struct __pyx_array_obj
  type __pyx_array_obj (line 2578) | struct __pyx_array_obj
  type __pyx_array_obj (line 2579) | struct __pyx_array_obj
  type __pyx_array_obj (line 2580) | struct __pyx_array_obj
  type __pyx_array_obj (line 2581) | struct __pyx_array_obj
  type __pyx_MemviewEnum_obj (line 2584) | struct __pyx_MemviewEnum_obj
  type __pyx_MemviewEnum_obj (line 2585) | struct __pyx_MemviewEnum_obj
  type __pyx_MemviewEnum_obj (line 2586) | struct __pyx_MemviewEnum_obj
  type __pyx_MemviewEnum_obj (line 2587) | struct __pyx_MemviewEnum_obj
  type __pyx_memoryview_obj (line 2588) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2589) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2590) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2591) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2592) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2593) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2594) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2595) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2596) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2597) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2598) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2599) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2600) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2601) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2602) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2603) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2604) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2605) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2606) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2607) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 2608) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 2611) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryviewslice_obj (line 2612) | struct __pyx_memoryviewslice_obj
  function PyObject (line 2665) | static PyObject *__pyx_f_6roc_cy_evaluate_roc_cy(__Pyx_memviewslice __py...
  function PyObject (line 3974) | static PyObject *__pyx_pw_6roc_cy_1evaluate_roc_cy(PyObject *__pyx_self,...
  function PyObject (line 4069) | static PyObject *__pyx_pf_6roc_cy_evaluate_roc_cy(CYTHON_UNUSED PyObject...
  function CYTHON_INLINE (line 4113) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyOb...
  function CYTHON_INLINE (line 4163) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyOb...
  function CYTHON_INLINE (line 4213) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyOb...
  function CYTHON_INLINE (line 4263) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyOb...
  function CYTHON_INLINE (line 4313) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyOb...
  function CYTHON_INLINE (line 4363) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyDataType_SHAPE(PyArray_D...
  function CYTHON_INLINE (line 4437) | static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *_...
  function CYTHON_INLINE (line 4479) | static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObje...
  function CYTHON_INLINE (line 4560) | static CYTHON_INLINE int __pyx_f_5numpy_import_array(void) {
  function CYTHON_INLINE (line 4692) | static CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) {
  function CYTHON_INLINE (line 4824) | static CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) {
  function CYTHON_INLINE (line 4956) | static CYTHON_INLINE int __pyx_f_5numpy_is_timedelta64_object(PyObject *...
  function CYTHON_INLINE (line 4993) | static CYTHON_INLINE int __pyx_f_5numpy_is_datetime64_object(PyObject *_...
  function CYTHON_INLINE (line 5030) | static CYTHON_INLINE npy_datetime __pyx_f_5numpy_get_datetime64_value(Py...
  function CYTHON_INLINE (line 5064) | static CYTHON_INLINE npy_timedelta __pyx_f_5numpy_get_timedelta64_value(...
  function CYTHON_INLINE (line 5098) | static CYTHON_INLINE NPY_DATETIMEUNIT __pyx_f_5numpy_get_datetime64_unit...
  function __pyx_array___cinit__ (line 5132) | static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx...
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__ (line 5260) | static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(s...
  function CYTHON_UNUSED (line 5883) | static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, P...
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__ (line 5894) | static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffe...
  function __pyx_array___dealloc__ (line 6190) | static void __pyx_array___dealloc__(PyObject *__pyx_v_self) {
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__ (line 6199) | static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc...
  function PyObject (line 6321) | static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__...
  function PyObject (line 6332) | static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(...
  function PyObject (line 6382) | static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v...
  function Py_ssize_t (line 6464) | static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) {
  function Py_ssize_t (line 6475) | static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__l...
  function PyObject (line 6514) | static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObjec...
  function PyObject (line 6525) | static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__ge...
  function PyObject (line 6582) | static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObjec...
  function PyObject (line 6593) | static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__g...
  function __pyx_array___setitem__ (line 6650) | static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__p...
  function __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__ (line 6661) | static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem...
  function PyObject (line 6710) | static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx...
  function PyObject (line 6721) | static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED st...
  function PyObject (line 6767) | static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__p...
  function PyObject (line 6778) | static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED...
  type __pyx_array_obj (line 6823) | struct __pyx_array_obj
  type __pyx_array_obj (line 6824) | struct __pyx_array_obj
  type __pyx_array_obj (line 6825) | struct __pyx_array_obj
  type __pyx_array_obj (line 6877) | struct __pyx_array_obj
  type __pyx_array_obj (line 6941) | struct __pyx_array_obj
  function __pyx_MemviewEnum___init__ (line 7000) | static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *...
  function __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__ (line 7051) | static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init...
  function PyObject (line 7093) | static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) {
  function PyObject (line 7104) | static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_...
  function PyObject (line 7144) | static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject ...
  function PyObject (line 7155) | static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __p...
  function PyObject (line 7379) | static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObjec...
  function PyObject (line 7390) | static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct ...
  function __pyx_memoryview___cinit__ (line 7522) | static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *...
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__ (line 7602) | static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_...
  function __pyx_memoryview___dealloc__ (line 7920) | static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) {
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__ (line 7929) | static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview...
  type __pyx_memoryview_obj (line 8149) | struct __pyx_memoryview_obj
  function PyObject (line 8289) | static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, Py...
  function PyObject (line 8300) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function __pyx_memoryview___setitem__ (line 8478) | static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject...
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__ (line 8489) | static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_...
  function PyObject (line 8704) | static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *...
  function PyObject (line 8914) | static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_...
  function PyObject (line 9004) | static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __p...
  function PyObject (line 9294) | static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryvie...
  function PyObject (line 9355) | static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_me...
  function PyObject (line 9632) | static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_m...
  function CYTHON_UNUSED (line 9873) | static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_se...
  function __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__ (line 9884) | static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_...
  function PyObject (line 10217) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__...
  function PyObject (line 10228) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(...
  function PyObject (line 10303) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__ge...
  function PyObject (line 10314) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get...
  function PyObject (line 10356) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__g...
  function PyObject (line 10367) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___ge...
  function PyObject (line 10437) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1_...
  function PyObject (line 10448) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___...
  function PyObject (line 10551) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffset...
  function PyObject (line 10562) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffset...
  function PyObject (line 10669) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__ge...
  function PyObject (line 10680) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get...
  function PyObject (line 10732) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1...
  function PyObject (line 10743) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize__...
  function PyObject (line 10795) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__...
  function PyObject (line 10806) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___g...
  function PyObject (line 10868) | static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__ge...
  function PyObject (line 10879) | static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get...
  function Py_ssize_t (line 11009) | static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) {
  function Py_ssize_t (line 11020) | static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memo...
  function PyObject (line 11089) | static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) {
  function PyObject (line 11100) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 11191) | static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) {
  function PyObject (line 11202) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 11270) | static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CY...
  function PyObject (line 11281) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 11346) | static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CY...
  function PyObject (line 11357) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 11422) | static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UN...
  function PyObject (line 11433) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 11516) | static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, C...
  function PyObject (line 11527) | static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memor...
  function PyObject (line 11609) | static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *...
  function PyObject (line 11620) | static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUS...
  function PyObject (line 11666) | static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject...
  function PyObject (line 11677) | static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_U...
  function PyObject (line 11722) | static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_f...
  function CYTHON_INLINE (line 11813) | static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) {
  function PyObject (line 11852) | static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) {
  function PyObject (line 12309) | static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets...
  type __pyx_memoryview_obj (line 12397) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 12404) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryview_obj (line 12414) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 12419) | struct __pyx_memoryview_obj
  type __pyx_memoryviewslice_obj (line 12489) | struct __pyx_memoryviewslice_obj
  type __pyx_memoryview_obj (line 12901) | struct __pyx_memoryview_obj
  type __pyx_memoryview_obj (line 12942) | struct __pyx_memoryview_obj
  function __pyx_memoryview_slice_memviewslice (line 12977) | static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx...
  function __pyx_memslice_transpose (line 14070) | static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) {
  function __pyx_memoryviewslice___dealloc__ (line 14246) | static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) {
  function __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__ (line 14255) | static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memo...
  function PyObject (line 14288) | static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __p...
  function PyObject (line 14374) | static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __...
  function PyObject (line 14459) | static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base...
  function PyObject (line 14470) | static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base...
  function PyObject (line 14510) | static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObj...
  function PyObject (line 14521) | static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON...
  function PyObject (line 14567) | static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyO...
  function PyObject (line 14578) | static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYT...
  function PyObject (line 14623) | static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_m...
  function __Pyx_memviewslice (line 15009) | static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(st...
  function __pyx_memoryview_slice_copy (line 15112) | static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__p...
  function PyObject (line 15238) | static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_ob...
  function PyObject (line 15298) | static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_me...
  function Py_ssize_t (line 15424) | static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) {
  function __pyx_get_best_slice_order (line 15490) | static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslic...
  function _copy_strided_to_strided (line 15680) | static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t ...
  function copy_strided_to_strided (line 15917) | static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __P...
  function Py_ssize_t (line 15947) | static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__...
  function Py_ssize_t (line 16019) | static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_sh...
  type __pyx_memoryview_obj (line 16150) | struct __pyx_memoryview_obj
  function __pyx_memoryview_err_extents (line 16396) | static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_...
  function __pyx_memoryview_err_dim (line 16484) | static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx...
  function __pyx_memoryview_err (line 16568) | static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_m...
  function __pyx_memoryview_copy_contents (line 16678) | static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src...
  function __pyx_memoryview_broadcast_leading (line 17257) | static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx...
  function __pyx_memoryview_refcount_copying (line 17370) | static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_...
  function __pyx_memoryview_refcount_objects_in_slice_with_gil (line 17420) | static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__...
  function __pyx_memoryview_refcount_objects_in_slice (line 17459) | static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_dat...
  function __pyx_memoryview_slice_assign_scalar (line 17591) | static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__p...
  function __pyx_memoryview__slice_assign_scalar (line 17639) | static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py...
  function PyObject (line 17771) | static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyO...
  function PyObject (line 17844) | static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTH...
  function PyObject (line 18039) | static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum...
  type __pyx_vtabstruct_array (line 18162) | struct __pyx_vtabstruct_array
  function PyObject (line 18164) | static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObje...
  function __pyx_tp_dealloc_array (line 18184) | static void __pyx_tp_dealloc_array(PyObject *o) {
  function PyObject (line 18203) | static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) {
  function __pyx_mp_ass_subscript_array (line 18211) | static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObje...
  function PyObject (line 18222) | static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) {
  function PyObject (line 18231) | static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_U...
  type PyGetSetDef (line 18242) | struct PyGetSetDef
  type __pyx_array_obj (line 18286) | struct __pyx_array_obj
  function PyObject (line 18355) | static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObje...
  function __pyx_tp_dealloc_Enum (line 18369) | static void __pyx_tp_dealloc_Enum(PyObject *o) {
  function __pyx_tp_traverse_Enum (line 18381) | static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) {
  function __pyx_tp_clear_Enum (line 18390) | static int __pyx_tp_clear_Enum(PyObject *o) {
  type __pyx_MemviewEnum_obj (line 18408) | struct __pyx_MemviewEnum_obj
  type __pyx_vtabstruct_memoryview (line 18476) | struct __pyx_vtabstruct_memoryview
  function PyObject (line 18478) | static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, P...
  function __pyx_tp_dealloc_memoryview (line 18500) | static void __pyx_tp_dealloc_memoryview(PyObject *o) {
  function __pyx_tp_traverse_memoryview (line 18522) | static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *...
  function __pyx_tp_clear_memoryview (line 18540) | static int __pyx_tp_clear_memoryview(PyObject *o) {
  function PyObject (line 18555) | static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) {
  function __pyx_mp_ass_subscript_memoryview (line 18563) | static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, P...
  function PyObject (line 18574) | static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UN...
  function PyObject (line 18578) | static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON...
  function PyObject (line 18582) | static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHO...
  function PyObject (line 18586) | static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYT...
  function PyObject (line 18590) | static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, ...
  function PyObject (line 18594) | static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON...
  function PyObject (line 18598) | static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CY...
  function PyObject (line 18602) | static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTH...
  function PyObject (line 18606) | static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON...
  type PyGetSetDef (line 18620) | struct PyGetSetDef
  type __pyx_memoryview_obj (line 18672) | struct __pyx_memoryview_obj
  type __pyx_vtabstruct__memoryviewslice (line 18740) | struct __pyx_vtabstruct__memoryviewslice
  function PyObject (line 18742) | static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject...
  function __pyx_tp_dealloc__memoryviewslice (line 18753) | static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) {
  function __pyx_tp_traverse__memoryviewslice (line 18774) | static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, ...
  function __pyx_tp_clear__memoryviewslice (line 18784) | static int __pyx_tp_clear__memoryviewslice(PyObject *o) {
  function PyObject (line 18795) | static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, C...
  type PyGetSetDef (line 18805) | struct PyGetSetDef
  type __pyx_memoryviewslice_obj (line 18813) | struct __pyx_memoryviewslice_obj
  type PyModuleDef (line 18906) | struct PyModuleDef
  function CYTHON_SMALL_CODE (line 19047) | static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {
  function CYTHON_SMALL_CODE (line 19062) | static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {
  function CYTHON_SMALL_CODE (line 19354) | static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {
  function __Pyx_modinit_global_init_code (line 19375) | static int __Pyx_modinit_global_init_code(void) {
  function __Pyx_modinit_variable_export_code (line 19388) | static int __Pyx_modinit_variable_export_code(void) {
  function __Pyx_modinit_function_export_code (line 19396) | static int __Pyx_modinit_function_export_code(void) {
  function __Pyx_modinit_type_init_code (line 19404) | static int __Pyx_modinit_type_init_code(void) {
  function __Pyx_modinit_type_import_code (line 19469) | static int __Pyx_modinit_type_import_code(void) {
  function __Pyx_modinit_variable_import_code (line 19529) | static int __Pyx_modinit_variable_import_code(void) {
  function __Pyx_modinit_function_import_code (line 19537) | static int __Pyx_modinit_function_import_code(void) {
  function __Pyx_PyMODINIT_FUNC (line 19568) | __Pyx_PyMODINIT_FUNC PyInit_roc_cy(void)
  function CYTHON_SMALL_CODE (line 19573) | static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {
  function CYTHON_SMALL_CODE (line 19596) | static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, P...
  function CYTHON_SMALL_CODE (line 19611) | static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CY...
  function __Pyx_RefNannyAPIStruct (line 19960) | static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modn...
  function CYTHON_INLINE (line 19977) | static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, ...
  function PyObject (line 19990) | static PyObject *__Pyx_GetBuiltinName(PyObject *name) {
  function __Pyx_init_memviewslice (line 20004) | static int
  function __pyx_fatalerror (line 20056) | static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN {
  function CYTHON_INLINE (line 20068) | static CYTHON_INLINE int
  function CYTHON_INLINE (line 20078) | static CYTHON_INLINE int
  function CYTHON_INLINE (line 20088) | static CYTHON_INLINE void
  function CYTHON_INLINE (line 20109) | static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice,
  function CYTHON_INLINE (line 20137) | static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {
  function CYTHON_INLINE (line 20141) | static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject ...
  function CYTHON_INLINE (line 20153) | static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj...
  function CYTHON_INLINE (line 20165) | static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)
  function CYTHON_INLINE (line 20198) | static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObj...
  function CYTHON_INLINE (line 20218) | static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *fun...
  function PyObject (line 20241) | static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObjec...
  function CYTHON_UNUSED (line 20359) | static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* functi...
  function CYTHON_INLINE (line 20389) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, ...
  function PyObject (line 20409) | static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *ar...
  function CYTHON_INLINE (line 20419) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func,...
  function CYTHON_INLINE (line 20437) | static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func,...
  function PyObject (line 20448) | static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {
  function CYTHON_INLINE (line 20455) | static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, P...
  function CYTHON_INLINE (line 20473) | static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, ...
  function CYTHON_INLINE (line 20491) | static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssi...
  function PyObject (line 20536) | static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) {
  function PyObject (line 20554) | static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) {
  function PyObject (line 20565) | static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHO...
  function __Pyx_RaiseArgtupleInvalid (line 20688) | static void __Pyx_RaiseArgtupleInvalid(
  function __Pyx_RaiseDoubleKeywordsError (line 20714) | static void __Pyx_RaiseDoubleKeywordsError(
  function __Pyx_ParseOptionalKeywords (line 20728) | static int __Pyx_ParseOptionalKeywords(
  function CYTHON_INLINE (line 20830) | static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varna...
  function _PyErr_StackItem (line 20836) | static _PyErr_StackItem *
  function CYTHON_INLINE (line 20851) | static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, Py...
  function CYTHON_INLINE (line 20866) | static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, P...
  function __Pyx_PyErr_ExceptionMatchesTuple (line 20892) | static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObjec...
  function CYTHON_INLINE (line 20905) | static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadSta...
  function __Pyx_GetException (line 20919) | static int __Pyx_GetException(PyObject **type, PyObject **value, PyObjec...
  function CYTHON_INLINE (line 20991) | static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate,...
  function CYTHON_INLINE (line 21003) | static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, P...
  function __Pyx_Raise (line 21015) | static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,
  function __Pyx_Raise (line 21066) | static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, P...
  function __Pyx__ArgTypeTest (line 21173) | static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const c...
  function CYTHON_INLINE (line 21194) | static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2...
  function CYTHON_INLINE (line 21241) | static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* ...
  function CYTHON_INLINE (line 21343) | static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) {
  function CYTHON_INLINE (line 21356) | static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
  function PyObject (line 21389) | static PyObject *__Pyx_GetAttr3Default(PyObject *d) {
  function CYTHON_INLINE (line 21398) | static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, ...
  function CYTHON_INLINE (line 21404) | static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expec...
  function CYTHON_INLINE (line 21410) | static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t inde...
  function CYTHON_INLINE (line 21417) | static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {
  function CYTHON_INLINE (line 21422) | static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *typ...
  function CYTHON_INLINE (line 21436) | static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, Py...
  function CYTHON_INLINE (line 21459) | static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject ...
  function PyObject (line 21470) | static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int l...
  function __Pyx_InBases (line 21536) | static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {
  function CYTHON_INLINE (line 21544) | static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *...
  function __Pyx_inner_PyErr_GivenExceptionMatches2 (line 21560) | static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObj...
  function CYTHON_INLINE (line 21582) | static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObje...
  function __Pyx_PyErr_GivenExceptionMatchesTuple (line 21590) | static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, Py...
  function CYTHON_INLINE (line 21611) | static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err...
  function CYTHON_INLINE (line 21623) | static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *er...
  function PyObject (line 21635) | static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) {
  function CYTHON_INLINE (line 21649) | static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) {
  function PyObject (line 21668) | static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, P...
  function CYTHON_INLINE (line 21679) | static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObj...
  function PyObject (line 21708) | static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* ...
  function __Pyx_SetVtable (line 21717) | static int __Pyx_SetVtable(PyObject *dict, void *vtable) {
  function __Pyx_PyObject_GetAttrStr_ClearAttributeError (line 21735) | static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) {
  function CYTHON_INLINE (line 21741) | static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject...
  function __Pyx_setup_reduce_is_named (line 21757) | static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) {
  function __Pyx_setup_reduce (line 21773) | static int __Pyx_setup_reduce(PyObject* type_obj) {
  function PyTypeObject (line 21863) | static PyTypeObject *__Pyx_ImportType(PyObject *module, const char *modu...
  function __Pyx_CLineForTraceback (line 21923) | static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tsta...
  function __pyx_bisect_code_objects (line 21964) | static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries...
  function PyCodeObject (line 21985) | static PyCodeObject *__pyx_find_code_object(int code_line) {
  function __pyx_insert_code_object (line 21999) | static void __pyx_insert_code_object(int code_line, PyCodeObject* code_o...
  function PyCodeObject (line 22053) | static PyCodeObject* __Pyx_CreateCodeObjectForTraceback(
  function __Pyx_AddTraceback (line 22111) | static void __Pyx_AddTraceback(const char *funcname, int c_line,
  function __Pyx_GetBuffer (line 22151) | static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {
  function __Pyx_ReleaseBuffer (line 22158) | static void __Pyx_ReleaseBuffer(Py_buffer *view) {
  function __pyx_memviewslice_is_contig (line 22173) | static int
  function __pyx_get_array_memory_extents (line 22195) | static void
  function __pyx_slices_overlap (line 22219) | static int
  function CYTHON_INLINE (line 22231) | static CYTHON_INLINE PyObject *
  function CYTHON_INLINE (line 22244) | static CYTHON_INLINE int __Pyx_Is_Little_Endian(void)
  function __Pyx_BufFmt_Init (line 22255) | static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
  function __Pyx_BufFmt_ParseNumber (line 22282) | static int __Pyx_BufFmt_ParseNumber(const char** ts) {
  function __Pyx_BufFmt_ExpectNumber (line 22297) | static int __Pyx_BufFmt_ExpectNumber(const char **ts) {
  function __Pyx_BufFmt_RaiseUnexpectedChar (line 22304) | static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {
  function __Pyx_BufFmt_TypeCharToStandardSize (line 22333) | static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_comple...
  function __Pyx_BufFmt_TypeCharToNativeSize (line 22351) | static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {
  type __Pyx_st_short (line 22370) | typedef struct { char c; short x; } __Pyx_st_short;
  type __Pyx_st_int (line 22371) | typedef struct { char c; int x; } __Pyx_st_int;
  type __Pyx_st_long (line 22372) | typedef struct { char c; long x; } __Pyx_st_long;
  type __Pyx_st_float (line 22373) | typedef struct { char c; float x; } __Pyx_st_float;
  type __Pyx_st_double (line 22374) | typedef struct { char c; double x; } __Pyx_st_double;
  type __Pyx_st_longdouble (line 22375) | typedef struct { char c; long double x; } __Pyx_st_longdouble;
  type __Pyx_st_void_p (line 22376) | typedef struct { char c; void *x; } __Pyx_st_void_p;
  type __Pyx_st_longlong (line 22378) | typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;
  function __Pyx_BufFmt_TypeCharToAlignment (line 22380) | static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED in...
  type __Pyx_pad_short (line 22402) | typedef struct { short x; char c; } __Pyx_pad_short;
  type __Pyx_pad_int (line 22403) | typedef struct { int x; char c; } __Pyx_pad_int;
  type __Pyx_pad_long (line 22404) | typedef struct { long x; char c; } __Pyx_pad_long;
  type __Pyx_pad_float (line 22405) | typedef struct { float x; char c; } __Pyx_pad_float;
  type __Pyx_pad_double (line 22406) | typedef struct { double x; char c; } __Pyx_pad_double;
  type __Pyx_pad_longdouble (line 22407) | typedef struct { long double x; char c; } __Pyx_pad_longdouble;
  type __Pyx_pad_void_p (line 22408) | typedef struct { void *x; char c; } __Pyx_pad_void_p;
  type __Pyx_pad_longlong (line 22410) | typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;
  function __Pyx_BufFmt_TypeCharToPadding (line 22412) | static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int ...
  function __Pyx_BufFmt_TypeCharToGroup (line 22430) | static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {
  function __Pyx_BufFmt_RaiseExpected (line 22451) | static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {
  function __Pyx_BufFmt_ProcessTypeChunk (line 22475) | static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {
  function PyObject (line 22577) | static PyObject *
  function __pyx_typeinfo_cmp (line 22757) | static int
  function __pyx_check_strides (line 22798) | static int
  function __pyx_check_suboffsets (line 22851) | static int
  function __pyx_verify_contig (line 22874) | static int
  function __Pyx_ValidateAndInit_memviewslice (line 22903) | static int __Pyx_ValidateAndInit_memviewslice(
  function CYTHON_INLINE (line 22979) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function CYTHON_INLINE (line 23002) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function CYTHON_INLINE (line 23027) | static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_pa...
  function CYTHON_INLINE (line 23031) | static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_pa...
  function CYTHON_INLINE (line 23036) | static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_pa...
  function CYTHON_INLINE (line 23047) | static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx...
  function CYTHON_INLINE (line 23050) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_flo...
  function CYTHON_INLINE (line 23056) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_fl...
  function CYTHON_INLINE (line 23062) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_fl...
  function CYTHON_INLINE (line 23069) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_fl...
  function CYTHON_INLINE (line 23089) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_fl...
  function CYTHON_INLINE (line 23100) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_flo...
  function CYTHON_INLINE (line 23106) | static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) {
  function CYTHON_INLINE (line 23109) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_fl...
  function CYTHON_INLINE (line 23116) | static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) {
  function CYTHON_INLINE (line 23123) | static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_flo...
  function CYTHON_INLINE (line 23181) | static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_...
  function CYTHON_INLINE (line 23185) | static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_...
  function CYTHON_INLINE (line 23190) | static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_...
  function CYTHON_INLINE (line 23201) | static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __p...
  function CYTHON_INLINE (line 23204) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_d...
  function CYTHON_INLINE (line 23210) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_...
  function CYTHON_INLINE (line 23216) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_...
  function CYTHON_INLINE (line 23223) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_...
  function CYTHON_INLINE (line 23243) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_...
  function CYTHON_INLINE (line 23254) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_d...
  function CYTHON_INLINE (line 23260) | static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) {
  function CYTHON_INLINE (line 23263) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_...
  function CYTHON_INLINE (line 23270) | static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) {
  function CYTHON_INLINE (line 23277) | static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_d...
  function CYTHON_INLINE (line 23333) | static CYTHON_INLINE PyObject *__pyx_memview_get_float(const char *itemp) {
  function CYTHON_INLINE (line 23336) | static CYTHON_INLINE int __pyx_memview_set_float(const char *itemp, PyOb...
  function CYTHON_INLINE (line 23367) | static CYTHON_INLINE PyObject *__pyx_memview_get_long(const char *itemp) {
  function CYTHON_INLINE (line 23370) | static CYTHON_INLINE int __pyx_memview_set_long(const char *itemp, PyObj...
  function CYTHON_INLINE (line 23379) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function CYTHON_INLINE (line 23402) | static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlic...
  function __Pyx_memviewslice (line 23425) | static __Pyx_memviewslice
  function CYTHON_INLINE (line 23492) | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {
  function CYTHON_INLINE (line 23922) | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {
  function __Pyx_check_binary_version (line 24156) | static int __Pyx_check_binary_version(void) {
  function __Pyx_InitStrings (line 24194) | static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {
  function CYTHON_INLINE (line 24226) | static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_...
  function CYTHON_INLINE (line 24229) | static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {
  function CYTHON_INLINE (line 24256) | static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObjec...
  function CYTHON_INLINE (line 24298) | static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
  function CYTHON_INLINE (line 24303) | static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {
  function PyObject (line 24310) | static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* resul...
  function CYTHON_INLINE (line 24379) | static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {
  function CYTHON_INLINE (line 24441) | static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) {
  function CYTHON_INLINE (line 24458) | static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {
  function CYTHON_INLINE (line 24461) | static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {

FILE: fast_reid/fastreid/evaluation/rank_cylib/setup.py
  function numpy_include (line 8) | def numpy_include():
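
  Note: a helper named numpy_include in a Cython build script typically just resolves NumPy's C header directory so it can be passed to the Extension objects. A minimal sketch of that assumed behavior (illustrative, not copied from the file):

    import numpy as np

    def numpy_include():
        # Location of NumPy's C headers, passed as include_dirs when compiling the .pyx extensions.
        return np.get_include()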

FILE: fast_reid/fastreid/evaluation/reid_evaluation.py
  class ReidEvaluator (line 26) | class ReidEvaluator(DatasetEvaluator):
    method __init__ (line 27) | def __init__(self, cfg, num_query, output_dir=None):
    method reset (line 37) | def reset(self):
    method process (line 40) | def process(self, inputs, outputs):
    method evaluate (line 49) | def evaluate(self):
    method _compile_dependencies (line 128) | def _compile_dependencies(self):
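
  Note: the class follows the DatasetEvaluator lifecycle (reset once, process every batch, evaluate at the end). The toy class below only illustrates that calling pattern; it is a stand-in, not the repository's ReidEvaluator, and its metric is invented for the example:

    class ToyEvaluator:
        # Stand-in with the same three-method lifecycle used by DatasetEvaluator subclasses.
        def reset(self):
            self._count = 0

        def process(self, inputs, outputs):
            # Accumulate per-batch statistics; here we simply count processed items.
            self._count += len(outputs)

        def evaluate(self):
            # Return a dict of metrics, mirroring how evaluate() results are consumed.
            return {"num_items": self._count}

    evaluator = ToyEvaluator()
    evaluator.reset()
    for batch in ([1, 2, 3], [4, 5]):       # placeholder "data loader"
        evaluator.process(batch, batch)     # placeholder "model outputs"
    print(evaluator.evaluate())             # {'num_items': 5}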

FILE: fast_reid/fastreid/evaluation/rerank.py
  function re_ranking (line 11) | def re_ranking(q_g_dist, q_q_dist, g_g_dist, k1: int = 20, k2: int = 6, ...
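
  Note: the signature matches the standard k-reciprocal re-ranking interface (Zhong et al.): it takes query-gallery, query-query and gallery-gallery distance matrices and returns a re-ranked query-gallery distance matrix. A call-shape sketch with random data, assuming the fast_reid package root is importable; the values of k1/k2 here are illustrative only:

    import numpy as np
    from fast_reid.fastreid.evaluation.rerank import re_ranking

    m, n = 4, 10                                # num queries, num gallery images
    q_g = np.random.rand(m, n).astype(np.float32)
    q_q = np.random.rand(m, m).astype(np.float32)
    g_g = np.random.rand(n, n).astype(np.float32)

    # The listing shows defaults k1=20, k2=6; small values are used here to fit the toy sizes.
    dist = re_ranking(q_g, q_q, g_g, k1=3, k2=2)
    print(dist.shape)                           # expected: (4, 10)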

FILE: fast_reid/fastreid/evaluation/roc.py
  function evaluate_roc_py (line 24) | def evaluate_roc_py(distmat, q_pids, g_pids, q_camids, g_camids):
  function evaluate_roc (line 64) | def evaluate_roc(

FILE: fast_reid/fastreid/evaluation/testing.py
  function print_csv_format (line 12) | def print_csv_format(results):
  function verify_results (line 39) | def verify_results(cfg, results):
  function flatten_results_dict (line 71) | def flatten_results_dict(results):

FILE: fast_reid/fastreid/layers/activation.py
  class Mish (line 20) | class Mish(nn.Module):
    method __init__ (line 21) | def __init__(self):
    method forward (line 24) | def forward(self, x):
  class Swish (line 29) | class Swish(nn.Module):
    method forward (line 30) | def forward(self, x):
  class SwishImplementation (line 34) | class SwishImplementation(torch.autograd.Function):
    method forward (line 36) | def forward(ctx, i):
    method backward (line 42) | def backward(ctx, grad_output):
  class MemoryEfficientSwish (line 48) | class MemoryEfficientSwish(nn.Module):
    method forward (line 49) | def forward(self, x):
  class GELU (line 53) | class GELU(nn.Module):
    method forward (line 58) | def forward(self, x):
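
  Note: Mish and Swish are element-wise activations; the modules above presumably reduce to the textbook formulas mish(x) = x * tanh(softplus(x)) and swish(x) = x * sigmoid(x). A self-contained sketch of those formulas (an illustration, not the file's exact code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MishSketch(nn.Module):
        def forward(self, x):
            # mish(x) = x * tanh(softplus(x))
            return x * torch.tanh(F.softplus(x))

    class SwishSketch(nn.Module):
        def forward(self, x):
            # swish(x) = x * sigmoid(x)  (a.k.a. SiLU)
            return x * torch.sigmoid(x)

    x = torch.randn(2, 3)
    print(MishSketch()(x).shape, SwishSketch()(x).shape)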

FILE: fast_reid/fastreid/layers/any_softmax.py
  class Linear (line 18) | class Linear(nn.Module):
    method __init__ (line 19) | def __init__(self, num_classes, scale, margin):
    method forward (line 25) | def forward(self, logits, targets):
    method extra_repr (line 28) | def extra_repr(self):
  class CosSoftmax (line 32) | class CosSoftmax(Linear):
    method forward (line 36) | def forward(self, logits, targets):
  class ArcSoftmax (line 45) | class ArcSoftmax(Linear):
    method forward (line 47) | def forward(self, logits, targets):
  class CircleSoftmax (line 57) | class CircleSoftmax(Linear):
    method forward (line 59) | def forward(self, logits, targets):
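
  Note: these heads implement margin-based softmax logits for ReID training: the base Linear leaves logits unchanged, while CosSoftmax/ArcSoftmax/CircleSoftmax modify the target-class logit before scaling. A minimal sketch of the CosFace-style variant (subtract a margin m from the true-class cosine, then scale by s); the margin and scale values used by the repository's classes are assumptions:

    import torch
    import torch.nn.functional as F

    def cos_margin_logits(cos_logits, targets, s=30.0, m=0.35):
        # cos_logits: (N, C) cosine similarities; targets: (N,) ground-truth class indices.
        out = cos_logits.clone()
        idx = torch.arange(cos_logits.size(0))
        out[idx, targets] = out[idx, targets] - m   # margin on the true-class logit
        return s * out                              # scale before cross-entropy

    feats = F.normalize(torch.randn(4, 128), dim=1)
    weights = F.normalize(torch.randn(10, 128), dim=1)
    logits = feats @ weights.t()                    # cosine logits, shape (4, 10)
    print(cos_margin_logits(logits, torch.tensor([1, 2, 3, 4])).shape)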

FILE: fast_reid/fastreid/layers/batch_norm.py
  class BatchNorm (line 16) | class BatchNorm(nn.BatchNorm2d):
    method __init__ (line 17) | def __init__(self, num_features, eps=1e-05, momentum=0.1, weight_freez...
  class SyncBatchNorm (line 26) | class SyncBatchNorm(nn.SyncBatchNorm):
    method __init__ (line 27) | def __init__(self, num_features, eps=1e-05, momentum=0.1, weight_freez...
  class IBN (line 36) | class IBN(nn.Module):
    method __init__ (line 37) | def __init__(self, planes, bn_norm, **kwargs):
    method forward (line 45) | def forward(self, x):
  class GhostBatchNorm (line 53) | class GhostBatchNorm(BatchNorm):
    method __init__ (line 54) | def __init__(self, num_features, num_splits=1, **kwargs):
    method forward (line 60) | def forward(self, input):
  class FrozenBatchNorm (line 78) | class FrozenBatchNorm(nn.Module):
    method __init__ (line 96) | def __init__(self, num_features, eps=1e-5, **kwargs):
    method forward (line 105) | def forward(self, x):
    method _load_from_state_dict (line 127) | def _load_from_state_dict(
    method __repr__ (line 150) | def __repr__(self):
    method convert_frozen_batchnorm (line 154) | def convert_frozen_batchnorm(cls, module):
  function get_norm (line 184) | def get_norm(norm, out_channels, **kwargs):
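
  Note: get_norm acts as a factory that maps a norm name to one of the classes above, so backbones can stay norm-agnostic. A sketch of that dispatch pattern using stock PyTorch layers; the exact names and kwargs accepted by the repository's get_norm are assumptions:

    import torch.nn as nn

    def get_norm_sketch(norm: str, out_channels: int, **kwargs) -> nn.Module:
        # Dispatch a norm-layer name to a constructor; unknown names raise.
        table = {
            "BN": lambda c: nn.BatchNorm2d(c, **kwargs),
            "syncBN": lambda c: nn.SyncBatchNorm(c, **kwargs),
            "GN": lambda c: nn.GroupNorm(32, c),
        }
        if norm not in table:
            raise KeyError(f"Unknown norm type: {norm}")
        return table[norm](out_channels)

    print(get_norm_sketch("BN", 64))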

FILE: fast_reid/fastreid/layers/context_block.py
  function last_zero_init (line 9) | def last_zero_init(m):
  class ContextBlock (line 20) | class ContextBlock(nn.Module):
    method __init__ (line 22) | def __init__(self,
    method reset_parameters (line 61) | def reset_parameters(self):
    method spatial_pool (line 73) | def spatial_pool(self, x):
    method forward (line 99) | def forward(self, x):

FILE: fast_reid/fastreid/layers/drop.py
  function drop_block_2d (line 17) | def drop_block_2d(
  function drop_block_fast_2d (line 64) | def drop_block_fast_2d(
  class DropBlock2d (line 102) | class DropBlock2d(nn.Module):
    method __init__ (line 106) | def __init__(self,
    method forward (line 123) | def forward(self, x):
  function drop_path (line 134) | def drop_path(x, drop_prob: float = 0., training: bool = False):
  class DropPath (line 152) | class DropPath(nn.Module):
    method __init__ (line 156) | def __init__(self, drop_prob=None):
    method forward (line 160) | def forward(self, x):
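
  Note: drop_path implements stochastic depth: during training, each sample's residual branch is zeroed with probability drop_prob and the survivors are rescaled by 1/keep_prob so the expectation is unchanged. A sketch of that standard formulation (the repository's function is expected to be equivalent, but this is an illustration):

    import torch

    def drop_path_sketch(x, drop_prob: float = 0.0, training: bool = False):
        if drop_prob == 0.0 or not training:
            return x
        keep_prob = 1.0 - drop_prob
        # One Bernoulli draw per sample, broadcast over the remaining dimensions.
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        mask = torch.rand(shape, dtype=x.dtype, device=x.device) < keep_prob
        return x * mask / keep_prob

    x = torch.ones(8, 4)
    print(drop_path_sketch(x, drop_prob=0.5, training=True))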

FILE: fast_reid/fastreid/layers/frn.py
  class TLU (line 14) | class TLU(nn.Module):
    method __init__ (line 15) | def __init__(self, num_features):
    method reset_parameters (line 22) | def reset_parameters(self):
    method extra_repr (line 25) | def extra_repr(self):
    method forward (line 28) | def forward(self, x):
  class FRN (line 32) | class FRN(nn.Module):
    method __init__ (line 33) | def __init__(self, num_features, eps=1e-6, is_eps_leanable=False):
    method reset_parameters (line 55) | def reset_parameters(self):
    method extra_repr (line 61) | def extra_repr(self):
    method forward (line 64) | def forward(self, x):
  function bnrelu_to_frn (line 86) | def bnrelu_to_frn(module):
  function convert (line 115) | def convert(module, flag_name):
  function remove_flags (line 131) | def remove_flags(module, flag_name):
  function bnrelu_to_frn2 (line 142) | def bnrelu_to_frn2(model, input_size=(3, 128, 128), batch_size=2, flag_n...

FILE: fast_reid/fastreid/layers/gather_layer.py
  class GatherLayer (line 13) | class GatherLayer(torch.autograd.Function):
    method forward (line 18) | def forward(ctx, input):
    method backward (line 26) | def backward(ctx, *grads):
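
GatherLayer's forward/backward pair suggests an all_gather that still propagates gradients back to the local shard, which plain torch.distributed.all_gather does not. A sketch of the common recipe, assuming torch.distributed has been initialised (illustrative, not necessarily this file's exact code):

import torch
import torch.distributed as dist

class GatherLayerSketch(torch.autograd.Function):
    """all_gather across ranks in forward; route gradients to the local shard in backward."""

    @staticmethod
    def forward(ctx, input):
        output = [torch.zeros_like(input) for _ in range(dist.get_world_size())]
        dist.all_gather(output, input)        # plain all_gather carries no gradient
        return tuple(output)

    @staticmethod
    def backward(ctx, *grads):
        all_grads = torch.stack(grads)
        dist.all_reduce(all_grads)            # sum gradient contributions from every rank
        return all_grads[dist.get_rank()]     # keep only this rank's slice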

FILE: fast_reid/fastreid/layers/helpers.py
  function _ntuple (line 9) | def _ntuple(n):
  function make_divisible (line 25) | def make_divisible(v, divisor=8, min_value=None):

FILE: fast_reid/fastreid/layers/non_local.py
  class Non_local (line 9) | class Non_local(nn.Module):
    method __init__ (line 10) | def __init__(self, in_channels, bn_norm, reduc_ratio=2):
    method forward (line 33) | def forward(self, x):

FILE: fast_reid/fastreid/layers/pooling.py
  class Identity (line 24) | class Identity(nn.Module):
    method __init__ (line 25) | def __init__(self, *args, **kwargs):
    method forward (line 28) | def forward(self, input):
  class Flatten (line 32) | class Flatten(nn.Module):
    method __init__ (line 33) | def __init__(self, *args, **kwargs):
    method forward (line 36) | def forward(self, input):
  class GlobalAvgPool (line 40) | class GlobalAvgPool(nn.AdaptiveAvgPool2d):
    method __init__ (line 41) | def __init__(self, output_size=1, *args, **kwargs):
  class GlobalMaxPool (line 45) | class GlobalMaxPool(nn.AdaptiveMaxPool2d):
    method __init__ (line 46) | def __init__(self, output_size=1, *args, **kwargs):
  class GeneralizedMeanPooling (line 50) | class GeneralizedMeanPooling(nn.Module):
    method __init__ (line 64) | def __init__(self, norm=3, output_size=(1, 1), eps=1e-6, *args, **kwar...
    method forward (line 71) | def forward(self, x):
    method __repr__ (line 75) | def __repr__(self):
  class GeneralizedMeanPoolingP (line 81) | class GeneralizedMeanPoolingP(GeneralizedMeanPooling):
    method __init__ (line 85) | def __init__(self, norm=3, output_size=(1, 1), eps=1e-6, *args, **kwar...
  class AdaptiveAvgMaxPool (line 90) | class AdaptiveAvgMaxPool(nn.Module):
    method __init__ (line 91) | def __init__(self, output_size=1, *args, **kwargs):
    method forward (line 96) | def forward(self, x):
  class FastGlobalAvgPool (line 103) | class FastGlobalAvgPool(nn.Module):
    method __init__ (line 104) | def __init__(self, flatten=False, *args, **kwargs):
    method forward (line 108) | def forward(self, x):
  class ClipGlobalAvgPool (line 116) | class ClipGlobalAvgPool(nn.Module):
    method __init__ (line 117) | def __init__(self, *args, **kwargs):
    method forward (line 121) | def forward(self, x):
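
GeneralizedMeanPooling with (norm, output_size, eps) above is the usual GeM pooling: clamp activations, raise them to the power p, average-pool, then take the 1/p root; p = 1 recovers average pooling and large p approaches max pooling. A minimal sketch:

import torch.nn as nn
import torch.nn.functional as F

class GeMPoolSketch(nn.Module):
    def __init__(self, norm=3.0, output_size=(1, 1), eps=1e-6):
        super().__init__()
        self.p = norm
        self.output_size = output_size
        self.eps = eps            # clamp floor so the power is well defined

    def forward(self, x):
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, self.output_size).pow(1.0 / self.p)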

FILE: fast_reid/fastreid/layers/se_layer.py
  class SELayer (line 10) | class SELayer(nn.Module):
    method __init__ (line 11) | def __init__(self, channel, reduction=16):
    method forward (line 21) | def forward(self, x):
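
SELayer(channel, reduction=16) above matches the standard squeeze-and-excitation block: global average pooling produces one value per channel, a small bottleneck MLP turns it into per-channel gates. A compact sketch of that design:

import torch.nn as nn

class SELayerSketch(nn.Module):
    def __init__(self, channel, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.avg_pool(x).view(b, c)       # squeeze: one value per channel
        y = self.fc(y).view(b, c, 1, 1)       # excite: per-channel gates in (0, 1)
        return x * y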

FILE: fast_reid/fastreid/layers/splat.py
  class SplAtConv2d (line 15) | class SplAtConv2d(nn.Module):
    method __init__ (line 19) | def __init__(self, in_channels, channels, kernel_size, stride=(1, 1), ...
    method forward (line 52) | def forward(self, x):
  class rSoftMax (line 90) | class rSoftMax(nn.Module):
    method __init__ (line 91) | def __init__(self, radix, cardinality):
    method forward (line 96) | def forward(self, x):
  class DropBlock2D (line 107) | class DropBlock2D(object):
    method __init__ (line 108) | def __init__(self, *args, **kwargs):

FILE: fast_reid/fastreid/layers/weight_init.py
  function weights_init_kaiming (line 14) | def weights_init_kaiming(m):
  function weights_init_classifier (line 30) | def weights_init_classifier(m):
  function _no_grad_trunc_normal_ (line 41) | def _no_grad_trunc_normal_(tensor, mean, std, a, b):
  function trunc_normal_ (line 77) | def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
  function variance_scaling_ (line 98) | def variance_scaling_(tensor, scale=1.0, mode='fan_in', distribution='no...
  function lecun_normal_ (line 121) | def lecun_normal_(tensor):
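
weights_init_kaiming(m) above looks like a per-module callback meant for nn.Module.apply. A hedged sketch with commonly used defaults (the exact modes, gains, and handled layer types in the file may differ):

import torch.nn as nn

def weights_init_kaiming_sketch(m):
    # intended usage: model.apply(weights_init_kaiming_sketch)
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
        if m.weight is not None:      # skip non-affine norm layers
            nn.init.ones_(m.weight)
            nn.init.zeros_(m.bias)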

FILE: fast_reid/fastreid/modeling/backbones/build.py
  function build_backbone (line 18) | def build_backbone(cfg):

FILE: fast_reid/fastreid/modeling/backbones/mobilenet.py
  function _make_divisible (line 21) | def _make_divisible(v, divisor, min_value=None):
  function conv_3x3_bn (line 41) | def conv_3x3_bn(inp, oup, stride, bn_norm):
  function conv_1x1_bn (line 49) | def conv_1x1_bn(inp, oup, bn_norm):
  class InvertedResidual (line 57) | class InvertedResidual(nn.Module):
    method __init__ (line 58) | def __init__(self, inp, oup, bn_norm, stride, expand_ratio):
    method forward (line 90) | def forward(self, x):
  class MobileNetV2 (line 97) | class MobileNetV2(nn.Module):
    method __init__ (line 98) | def __init__(self, bn_norm, width_mult=1.):
    method forward (line 129) | def forward(self, x):
    method _initialize_weights (line 134) | def _initialize_weights(self):
  function build_mobilenetv2_backbone (line 150) | def build_mobilenetv2_backbone(cfg):
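
InvertedResidual(inp, oup, bn_norm, stride, expand_ratio) above is the MobileNetV2 building block: a 1x1 pointwise expansion, a 3x3 depthwise convolution, and a linear 1x1 projection, with a residual connection when the shapes allow it. A sketch with bn_norm simplified to plain BatchNorm2d (an assumption for illustration):

import torch.nn as nn

class InvertedResidualSketch(nn.Module):
    def __init__(self, inp, oup, stride, expand_ratio):
        super().__init__()
        hidden = int(round(inp * expand_ratio))
        self.use_res = stride == 1 and inp == oup
        layers = []
        if expand_ratio != 1:
            # 1x1 pointwise expansion
            layers += [nn.Conv2d(inp, hidden, 1, bias=False),
                       nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True)]
        layers += [
            # 3x3 depthwise convolution
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # 1x1 linear projection (no activation)
            nn.Conv2d(hidden, oup, 1, bias=False),
            nn.BatchNorm2d(oup),
        ]
        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        return x + out if self.use_res else out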

FILE: fast_reid/fastreid/modeling/backbones/mobilenetv3.py
  function conv_1x1_bn (line 29) | def conv_1x1_bn(inp, oup, bn_norm):
  class ConvBNActivation (line 37) | class ConvBNActivation(nn.Sequential):
    method __init__ (line 38) | def __init__(
  class SqueezeExcitation (line 61) | class SqueezeExcitation(nn.Module):
    method __init__ (line 62) | def __init__(self, input_channels: int, squeeze_factor: int = 4):
    method _scale (line 69) | def _scale(self, input: Tensor, inplace: bool) -> Tensor:
    method forward (line 76) | def forward(self, input: Tensor) -> Tensor:
  class InvertedResidualConfig (line 81) | class InvertedResidualConfig:
    method __init__ (line 82) | def __init__(self, input_channels: int, kernel: int, expanded_channels...
    method adjust_channels (line 94) | def adjust_channels(channels: int, width_mult: float):
  class InvertedResidual (line 98) | class InvertedResidual(nn.Module):
    method __init__ (line 99) | def __init__(self, cnf: InvertedResidualConfig, bn_norm,
    method forward (line 131) | def forward(self, input: Tensor) -> Tensor:
  class MobileNetV3 (line 138) | class MobileNetV3(nn
Condensed preview — 878 files, each showing path, character count, and a content snippet. Download the .json file for the full structured content (20,221K chars).
[
  {
    "path": ".gitignore",
    "chars": 1980,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\nevaldata/\n# C extensions\n*.so\n# Distribution "
  },
  {
    "path": "Dockerfile",
    "chars": 2367,
    "preview": "FROM nvcr.io/nvidia/tensorrt:21.09-py3\n\nENV DEBIAN_FRONTEND=noninteractive\nARG USERNAME=user\nARG WORKDIR=/workspace/OC_S"
  },
  {
    "path": "LICENSE",
    "chars": 1067,
    "preview": "MIT License\n\nCopyright (c) 2021 Yifu Zhang\n\nPermission is hereby granted, free of charge, to any person obtaining a copy"
  },
  {
    "path": "README.md",
    "chars": 18347,
    "preview": "# Hybrid-SORT\n\n [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/M"
  },
  {
    "path": "TrackEval/.gitignore",
    "chars": 141,
    "preview": "gt_data/*\n!gt_data/Readme.md\ntracker_output/*\n!tracker_output/Readme.md\noutput/*\ndata/*\n!goutput/Readme.md\n**/__pycache_"
  },
  {
    "path": "TrackEval/LICENSE",
    "chars": 1072,
    "preview": "MIT License\n\nCopyright (c) 2020 Jonathon Luiten\n\nPermission is hereby granted, free of charge, to any person obtaining a"
  },
  {
    "path": "TrackEval/Readme.md",
    "chars": 12606,
    "preview": "\n# TrackEval\n*Code for evaluating object tracking.*\n\nThis codebase provides code for a number of different tracking eval"
  },
  {
    "path": "TrackEval/docs/BDD100k-format.txt",
    "chars": 4999,
    "preview": "Taken from: https://bdd-data.berkeley.edu/wad-2020.html\n\nBDD100K MOT Dataset\n\nTo advance the study on multiple object tr"
  },
  {
    "path": "TrackEval/docs/DAVIS-format.txt",
    "chars": 1088,
    "preview": "Annotation Format:\n\n\nThe annotations in each frame are stored in png format.\nThis png is stored indexed i.e. it has a si"
  },
  {
    "path": "TrackEval/docs/How_To/Add_a_new_metric.md",
    "chars": 1596,
    "preview": "# How to add a new or custom family of evaluation metrics to TrackEval\n\n - Create your metrics code in ```trackeval/metr"
  },
  {
    "path": "TrackEval/docs/KITTI-format.txt",
    "chars": 8792,
    "preview": "Taken from download link found at: http://www.cvlibs.net/datasets/kitti/eval_tracking.php\n\n#############################"
  },
  {
    "path": "TrackEval/docs/MOTChallenge-Official/Readme.md",
    "chars": 7232,
    "preview": "![Test Image 4](https://motchallenge.net/img/header-bg/mot_bannerthin.png)\n![MOT_PIC](https://motchallenge.net/sequenceV"
  },
  {
    "path": "TrackEval/docs/MOTChallenge-format.txt",
    "chars": 2102,
    "preview": "Taken from: https://motchallenge.net/instructions/\n\nFile Format\n\nPlease submit your results as a single .zip file. The r"
  },
  {
    "path": "TrackEval/docs/MOTS-format.txt",
    "chars": 1884,
    "preview": "Taken from: https://www.vision.rwth-aachen.de/page/mots\n\n\nAnnotation Format\nWe provide two alternative and equivalent fo"
  },
  {
    "path": "TrackEval/docs/OpenWorldTracking-Official/Readme.md",
    "chars": 2031,
    "preview": "![owt](https://user-images.githubusercontent.com/23000532/160293694-6fc0a3da-c177-4776-8472-49ff6ff375a3.jpg)\n# Opening "
  },
  {
    "path": "TrackEval/docs/RobMOTS-Official/Readme.md",
    "chars": 13589,
    "preview": "[![image](https://user-images.githubusercontent.com/23000532/118353602-607d1080-b567-11eb-8744-3e346a438583.png)](https:"
  },
  {
    "path": "TrackEval/docs/TAO-format.txt",
    "chars": 1203,
    "preview": "Taken from: https://github.com/TAO-Dataset/tao/blob/master/tao/toolkit/tao/tao.py\n\nAnnotation file format:\n{\n    \"info\" "
  },
  {
    "path": "TrackEval/docs/YouTube-VIS-format.txt",
    "chars": 1500,
    "preview": "Taken from: https://competitions.codalab.org/competitions/20128#participate-get-data\n\nThe label file follows MSCOCO's st"
  },
  {
    "path": "TrackEval/minimum_requirements.txt",
    "chars": 27,
    "preview": "scipy==1.4.1\nnumpy==1.18.1\n"
  },
  {
    "path": "TrackEval/pyproject.toml",
    "chars": 104,
    "preview": "[build-system]\nrequires = [\n    \"setuptools>=42\",\n    \"wheel\"\n]\nbuild-backend = \"setuptools.build_meta\"\n"
  },
  {
    "path": "TrackEval/requirements.txt",
    "chars": 137,
    "preview": "numpy==1.18.1\nscipy==1.4.1\npycocotools==2.0.2\nmatplotlib==3.2.1\nopencv_python==4.4.0.46\nscikit_image==0.16.2\npytest==6.0"
  },
  {
    "path": "TrackEval/scripts/comparison_plots.py",
    "chars": 732,
    "preview": "import sys\nimport os\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))\nimport trackeva"
  },
  {
    "path": "TrackEval/scripts/run_bdd.py",
    "chars": 4153,
    "preview": "\n\"\"\" run_bdd.py\n\nRun example:\nrun_bdd.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL qdtrack\n\nCommand Line Ar"
  },
  {
    "path": "TrackEval/scripts/run_davis.py",
    "chars": 4332,
    "preview": "\"\"\" run_davis.py\n\nRun example:\nrun_davis.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL ags\n\nCommand Line Arg"
  },
  {
    "path": "TrackEval/scripts/run_headtracking_challenge.py",
    "chars": 4270,
    "preview": "\n\"\"\" run_mot_challenge.py\n\nRun example:\nrun_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_"
  },
  {
    "path": "TrackEval/scripts/run_kitti.py",
    "chars": 3937,
    "preview": "\n\"\"\" run_kitti.py\n\nRun example:\nrun_kitti.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL CIWT\n\nCommand Line A"
  },
  {
    "path": "TrackEval/scripts/run_kitti_mots.py",
    "chars": 4419,
    "preview": "\n\"\"\" run_kitti_mots.py\n\nRun example:\nrun_kitti_mots.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL trackrcnn\n"
  },
  {
    "path": "TrackEval/scripts/run_mot_challenge.py",
    "chars": 4818,
    "preview": "\n\"\"\" run_mot_challenge.py\n\nRun example:\nrun_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_"
  },
  {
    "path": "TrackEval/scripts/run_mots_challenge.py",
    "chars": 4743,
    "preview": "\"\"\" run_mots.py\n\nRun example:\nrun_mots.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL TrackRCNN\n\nCommand Line"
  },
  {
    "path": "TrackEval/scripts/run_rob_mots.py",
    "chars": 7172,
    "preview": "# python3 scripts/run_rob_mots.py --ROBMOTS_SPLIT train --TRACKERS_TO_EVAL STP --USE_PARALLEL True --NUM_PARALLEL_CORES "
  },
  {
    "path": "TrackEval/scripts/run_tao.py",
    "chars": 4198,
    "preview": "\"\"\" run_tao.py\n\nRun example:\nrun_tao.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL Tracktor++\n\nCommand Line "
  },
  {
    "path": "TrackEval/scripts/run_tao_ow.py",
    "chars": 4360,
    "preview": "\"\"\" run_tao.py\n\nRun example:\nrun_tao.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL Tracktor++\n\nCommand Line "
  },
  {
    "path": "TrackEval/scripts/run_youtube_vis.py",
    "chars": 5153,
    "preview": "\n\"\"\" run_youtube_vis.py\nRun example:\nrun_youtube_vis.py --USE_PARALLEL False --METRICS HOTA --TRACKERS_TO_EVAL STEm_Seg\n"
  },
  {
    "path": "TrackEval/setup.cfg",
    "chars": 745,
    "preview": "[metadata]\nname = trackeval\nversion = 1.0.dev1\nauthor = Jonathon Luiten, Arne Hoffhues\nauthor_email = jonoluiten@gmail.c"
  },
  {
    "path": "TrackEval/setup.py",
    "chars": 38,
    "preview": "from setuptools import setup\n\nsetup()\n"
  },
  {
    "path": "TrackEval/tests/test_all_quick.py",
    "chars": 3495,
    "preview": "\"\"\" Test to ensure that the code is working correctly.\nShould test ALL metrics across all datasets and splits currently "
  },
  {
    "path": "TrackEval/tests/test_davis.py",
    "chars": 2749,
    "preview": "import sys\nimport os\nimport numpy as np\nfrom multiprocessing import freeze_support\n\nsys.path.insert(0, os.path.abspath(o"
  },
  {
    "path": "TrackEval/tests/test_metrics.py",
    "chars": 6734,
    "preview": "import numpy as np\nimport pytest\n\nimport trackeval\n\n\ndef no_confusion():\n    num_timesteps = 5\n    num_gt_ids = 2\n    nu"
  },
  {
    "path": "TrackEval/tests/test_mot17.py",
    "chars": 2725,
    "preview": "\"\"\" Test to ensure that the code is working correctly.\nRuns all metrics on 14 trackers for the MOT Challenge MOT17 bench"
  },
  {
    "path": "TrackEval/tests/test_mots.py",
    "chars": 2860,
    "preview": "import sys\nimport os\nimport numpy as np\nfrom multiprocessing import freeze_support\n\nsys.path.insert(0, os.path.abspath(o"
  },
  {
    "path": "TrackEval/trackeval/__init__.py",
    "chars": 116,
    "preview": "from .eval import Evaluator\nfrom . import datasets\nfrom . import metrics\nfrom . import plotting\nfrom . import utils\n"
  },
  {
    "path": "TrackEval/trackeval/_timing.py",
    "chars": 2323,
    "preview": "from functools import wraps\nfrom time import perf_counter\nimport inspect\n\nDO_TIMING = False\nDISPLAY_LESS_PROGRESS = Fals"
  },
  {
    "path": "TrackEval/trackeval/baselines/__init__.py",
    "chars": 110,
    "preview": "import baseline_utils\nimport stp\nimport non_overlap\nimport pascal_colormap\nimport thresholder\nimport vizualize"
  },
  {
    "path": "TrackEval/trackeval/baselines/baseline_utils.py",
    "chars": 12727,
    "preview": "\nimport os\nimport csv\nimport numpy as np\nfrom copy import deepcopy\nfrom PIL import Image\nfrom pycocotools import mask as"
  },
  {
    "path": "TrackEval/trackeval/baselines/non_overlap.py",
    "chars": 3330,
    "preview": "\"\"\"\nNon-Overlap: Code to take in a set of raw detections and produce a set of non-overlapping detections from it.\n\nAutho"
  },
  {
    "path": "TrackEval/trackeval/baselines/pascal_colormap.py",
    "chars": 8723,
    "preview": "pascal_colormap = [\n    0     ,         0,         0,\n    0.5020,         0,         0,\n         0,    0.5020,         0"
  },
  {
    "path": "TrackEval/trackeval/baselines/stp.py",
    "chars": 5982,
    "preview": "\"\"\"\nSTP: Simplest Tracker Possible\n\nAuthor: Jonathon Luiten\n\nThis simple tracker, simply assigns track IDs which maximis"
  },
  {
    "path": "TrackEval/trackeval/baselines/thresholder.py",
    "chars": 3111,
    "preview": "\"\"\"\nThresholder\n\nAuthor: Jonathon Luiten\n\nSimply reads in a set of detection, thresholds them at a certain score thresho"
  },
  {
    "path": "TrackEval/trackeval/baselines/vizualize.py",
    "chars": 3417,
    "preview": "\"\"\"\nVizualize: Code which converts .txt rle tracking results into a visual .png format.\n\nAuthor: Jonathon Luiten\n\"\"\"\n\nim"
  },
  {
    "path": "TrackEval/trackeval/datasets/__init__.py",
    "chars": 392,
    "preview": "from .kitti_2d_box import Kitti2DBox\nfrom .kitti_mots import KittiMOTS\nfrom .mot_challenge_2d_box import MotChallenge2DB"
  },
  {
    "path": "TrackEval/trackeval/datasets/_base_dataset.py",
    "chars": 17008,
    "preview": "import csv\nimport io\nimport zipfile\nimport os\nimport traceback\nimport numpy as np\nfrom copy import deepcopy\nfrom abc imp"
  },
  {
    "path": "TrackEval/trackeval/datasets/bdd100k.py",
    "chars": 16589,
    "preview": "\nimport os\nimport json\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ..utils import TrackEval"
  },
  {
    "path": "TrackEval/trackeval/datasets/davis.py",
    "chars": 14167,
    "preview": "import os\nimport csv\nimport numpy as np\nfrom ._base_dataset import _BaseDataset\nfrom ..utils import TrackEvalException\nf"
  },
  {
    "path": "TrackEval/trackeval/datasets/head_tracking_challenge.py",
    "chars": 24685,
    "preview": "import os\nimport csv\nimport configparser\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base"
  },
  {
    "path": "TrackEval/trackeval/datasets/kitti_2d_box.py",
    "chars": 21551,
    "preview": "\nimport os\nimport csv\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _Ba"
  },
  {
    "path": "TrackEval/trackeval/datasets/kitti_mots.py",
    "chars": 22380,
    "preview": "import os\nimport csv\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _Bas"
  },
  {
    "path": "TrackEval/trackeval/datasets/mot_challenge_2d_box.py",
    "chars": 24118,
    "preview": "import os\nimport csv\nimport configparser\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base"
  },
  {
    "path": "TrackEval/trackeval/datasets/mots_challenge.py",
    "chars": 23613,
    "preview": "import os\nimport csv\nimport configparser\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base"
  },
  {
    "path": "TrackEval/trackeval/datasets/rob_mots.py",
    "chars": 28123,
    "preview": "\nimport os\nimport csv\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _Ba"
  },
  {
    "path": "TrackEval/trackeval/datasets/rob_mots_classmap.py",
    "chars": 1270,
    "preview": "cls_id_to_name = {\n 1: 'person',\n 2: 'bicycle',\n 3: 'car',\n 4: 'motorcycle',\n 5: 'airplane',\n 6: 'bus',\n 7: 'train',\n 8:"
  },
  {
    "path": "TrackEval/trackeval/datasets/run_rob_mots.py",
    "chars": 5817,
    "preview": "\n# python3 scripts\\run_rob_mots.py --ROBMOTS_SPLIT val --TRACKERS_TO_EVAL tracker_name (e.g. STP) --USE_PARALLEL True --"
  },
  {
    "path": "TrackEval/trackeval/datasets/tao.py",
    "chars": 29842,
    "preview": "import os\nimport numpy as np\nimport json\nimport itertools\nfrom collections import defaultdict\nfrom scipy.optimize import"
  },
  {
    "path": "TrackEval/trackeval/datasets/tao_ow.py",
    "chars": 33927,
    "preview": "import os\nimport numpy as np\nimport json\nimport itertools\nfrom collections import defaultdict\nfrom scipy.optimize import"
  },
  {
    "path": "TrackEval/trackeval/datasets/youtube_vis.py",
    "chars": 19626,
    "preview": "import os\nimport numpy as np\nimport json\nfrom ._base_dataset import _BaseDataset\nfrom ..utils import TrackEvalException\n"
  },
  {
    "path": "TrackEval/trackeval/eval.py",
    "chars": 11175,
    "preview": "import time\nimport traceback\nfrom multiprocessing.pool import Pool\nfrom functools import partial\nimport os\nfrom . import"
  },
  {
    "path": "TrackEval/trackeval/metrics/__init__.py",
    "chars": 212,
    "preview": "from .hota import HOTA\nfrom .clear import CLEAR\nfrom .identity import Identity\nfrom .count import Count\nfrom .j_and_f im"
  },
  {
    "path": "TrackEval/trackeval/metrics/_base_metric.py",
    "chars": 5689,
    "preview": "\nimport numpy as np\nfrom abc import ABC, abstractmethod\nfrom .. import _timing\nfrom ..utils import TrackEvalException\nim"
  },
  {
    "path": "TrackEval/trackeval/metrics/clear.py",
    "chars": 9166,
    "preview": "\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. impor"
  },
  {
    "path": "TrackEval/trackeval/metrics/count.py",
    "chars": 1584,
    "preview": "\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass Count(_BaseMetric):\n    \"\"\"Class which simply coun"
  },
  {
    "path": "TrackEval/trackeval/metrics/hota.py",
    "chars": 10506,
    "preview": "\nimport os\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfro"
  },
  {
    "path": "TrackEval/trackeval/metrics/identity.py",
    "chars": 6214,
    "preview": "import numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import"
  },
  {
    "path": "TrackEval/trackeval/metrics/ideucl.py",
    "chars": 5514,
    "preview": "import numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import"
  },
  {
    "path": "TrackEval/trackeval/metrics/j_and_f.py",
    "chars": 13021,
    "preview": "\nimport numpy as np\nimport math\nfrom scipy.optimize import linear_sum_assignment\nfrom ..utils import TrackEvalException\n"
  },
  {
    "path": "TrackEval/trackeval/metrics/track_map.py",
    "chars": 20859,
    "preview": "import numpy as np\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\nfrom functools import partial\nfrom .. im"
  },
  {
    "path": "TrackEval/trackeval/metrics/vace.py",
    "chars": 5881,
    "preview": "import numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import"
  },
  {
    "path": "TrackEval/trackeval/plotting.py",
    "chars": 8400,
    "preview": "\nimport os\nimport numpy as np\nfrom .utils import TrackEvalException\n\n\ndef plot_compare_trackers(tracker_folder, tracker_"
  },
  {
    "path": "TrackEval/trackeval/utils.py",
    "chars": 5807,
    "preview": "\nimport os\nimport csv\nimport argparse\nfrom collections import OrderedDict\n\n\ndef init_config(config, default_config, name"
  },
  {
    "path": "deploy/ONNXRuntime/README.md",
    "chars": 564,
    "preview": "## ByteTrack-ONNXRuntime in Python\n\nThis doc introduces how to convert your pytorch model into onnx, and how to run an o"
  },
  {
    "path": "deploy/ONNXRuntime/onnx_inference.py",
    "chars": 5849,
    "preview": "import argparse\nimport os\n\nimport cv2\nimport numpy as np\nfrom loguru import logger\n\nimport onnxruntime\n\nfrom yolox.data."
  },
  {
    "path": "deploy/TensorRT/cpp/CMakeLists.txt",
    "chars": 1237,
    "preview": "cmake_minimum_required(VERSION 2.6)\n\nproject(bytetrack)\n\nadd_definitions(-std=c++11)\n\noption(CUDA_USE_STATIC_CUDA_RUNTIM"
  },
  {
    "path": "deploy/TensorRT/cpp/README.md",
    "chars": 1543,
    "preview": "# ByteTrack-TensorRT in C++\n\n## Installation\n\nInstall opencv with ```sudo apt-get install libopencv-dev``` (we don't nee"
  },
  {
    "path": "deploy/TensorRT/cpp/include/BYTETracker.h",
    "chars": 1669,
    "preview": "#pragma once\r\n\r\n#include \"STrack.h\"\r\n\r\nstruct Object\r\n{\r\n    cv::Rect_<float> rect;\r\n    int label;\r\n    float prob;\r\n};"
  },
  {
    "path": "deploy/TensorRT/cpp/include/STrack.h",
    "chars": 1118,
    "preview": "#pragma once\r\n\r\n#include <opencv2/opencv.hpp>\r\n#include \"kalmanFilter.h\"\r\n\r\nusing namespace cv;\r\nusing namespace std;\r\n\r"
  },
  {
    "path": "deploy/TensorRT/cpp/include/dataType.h",
    "chars": 1261,
    "preview": "#pragma once\r\n\r\n#include <cstddef>\r\n#include <vector>\r\n\r\n#include <Eigen/Core>\r\n#include <Eigen/Dense>\r\ntypedef Eigen::M"
  },
  {
    "path": "deploy/TensorRT/cpp/include/kalmanFilter.h",
    "chars": 836,
    "preview": "#pragma once\r\n\r\n#include \"dataType.h\"\r\n\r\nnamespace byte_kalman\r\n{\r\n\tclass KalmanFilter\r\n\t{\r\n\tpublic:\r\n\t\tstatic const dou"
  },
  {
    "path": "deploy/TensorRT/cpp/include/lapjv.h",
    "chars": 1522,
    "preview": "#ifndef LAPJV_H\r\n#define LAPJV_H\r\n\r\n#define LARGE 1000000\r\n\r\n#if !defined TRUE\r\n#define TRUE 1\r\n#endif\r\n#if !defined FAL"
  },
  {
    "path": "deploy/TensorRT/cpp/include/logging.h",
    "chars": 16559,
    "preview": "/*\n * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 "
  },
  {
    "path": "deploy/TensorRT/cpp/src/BYTETracker.cpp",
    "chars": 6891,
    "preview": "#include \"BYTETracker.h\"\r\n#include <fstream>\r\n\r\nBYTETracker::BYTETracker(int frame_rate, int track_buffer)\r\n{\r\n\ttrack_th"
  },
  {
    "path": "deploy/TensorRT/cpp/src/STrack.cpp",
    "chars": 3953,
    "preview": "#include \"STrack.h\"\r\n\r\nSTrack::STrack(vector<float> tlwh_, float score)\r\n{\r\n\t_tlwh.resize(4);\r\n\t_tlwh.assign(tlwh_.begin"
  },
  {
    "path": "deploy/TensorRT/cpp/src/bytetrack.cpp",
    "chars": 15841,
    "preview": "#include <fstream>\n#include <iostream>\n#include <sstream>\n#include <numeric>\n#include <chrono>\n#include <vector>\n#includ"
  },
  {
    "path": "deploy/TensorRT/cpp/src/kalmanFilter.cpp",
    "chars": 4695,
    "preview": "#include \"kalmanFilter.h\"\r\n#include <Eigen/Cholesky>\r\n\r\nnamespace byte_kalman\r\n{\r\n\tconst double KalmanFilter::chi2inv95["
  },
  {
    "path": "deploy/TensorRT/cpp/src/lapjv.cpp",
    "chars": 7217,
    "preview": "#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <string.h>\r\n\r\n#include \"lapjv.h\"\r\n\r\n/** Column-reduction and reduction"
  },
  {
    "path": "deploy/TensorRT/cpp/src/utils.cpp",
    "chars": 9498,
    "preview": "#include \"BYTETracker.h\"\r\n#include \"lapjv.h\"\r\n\r\nvector<STrack*> BYTETracker::joint_stracks(vector<STrack*> &tlista, vect"
  },
  {
    "path": "deploy/TensorRT/python/README.md",
    "chars": 773,
    "preview": "# ByteTrack-TensorRT in Python\n\n## Install TensorRT Toolkit\nPlease follow the [TensorRT Installation Guide](https://docs"
  },
  {
    "path": "deploy/ncnn/cpp/CMakeLists.txt",
    "chars": 3240,
    "preview": "macro(ncnn_add_example name)\n    add_executable(${name} ${name}.cpp)\n    if(OpenCV_FOUND)\n        target_include_directo"
  },
  {
    "path": "deploy/ncnn/cpp/README.md",
    "chars": 4027,
    "preview": "# ByteTrack-CPP-ncnn\n\n## Installation\n\nClone [ncnn](https://github.com/Tencent/ncnn) first, then please following [build"
  },
  {
    "path": "deploy/ncnn/cpp/include/BYTETracker.h",
    "chars": 1669,
    "preview": "#pragma once\r\n\r\n#include \"STrack.h\"\r\n\r\nstruct Object\r\n{\r\n    cv::Rect_<float> rect;\r\n    int label;\r\n    float prob;\r\n};"
  },
  {
    "path": "deploy/ncnn/cpp/include/STrack.h",
    "chars": 1118,
    "preview": "#pragma once\r\n\r\n#include <opencv2/opencv.hpp>\r\n#include \"kalmanFilter.h\"\r\n\r\nusing namespace cv;\r\nusing namespace std;\r\n\r"
  },
  {
    "path": "deploy/ncnn/cpp/include/dataType.h",
    "chars": 1261,
    "preview": "#pragma once\r\n\r\n#include <cstddef>\r\n#include <vector>\r\n\r\n#include <Eigen/Core>\r\n#include <Eigen/Dense>\r\ntypedef Eigen::M"
  },
  {
    "path": "deploy/ncnn/cpp/include/kalmanFilter.h",
    "chars": 836,
    "preview": "#pragma once\r\n\r\n#include \"dataType.h\"\r\n\r\nnamespace byte_kalman\r\n{\r\n\tclass KalmanFilter\r\n\t{\r\n\tpublic:\r\n\t\tstatic const dou"
  },
  {
    "path": "deploy/ncnn/cpp/include/lapjv.h",
    "chars": 1522,
    "preview": "#ifndef LAPJV_H\r\n#define LAPJV_H\r\n\r\n#define LARGE 1000000\r\n\r\n#if !defined TRUE\r\n#define TRUE 1\r\n#endif\r\n#if !defined FAL"
  },
  {
    "path": "deploy/ncnn/cpp/src/BYTETracker.cpp",
    "chars": 6891,
    "preview": "#include \"BYTETracker.h\"\r\n#include <fstream>\r\n\r\nBYTETracker::BYTETracker(int frame_rate, int track_buffer)\r\n{\r\n\ttrack_th"
  },
  {
    "path": "deploy/ncnn/cpp/src/STrack.cpp",
    "chars": 3953,
    "preview": "#include \"STrack.h\"\r\n\r\nSTrack::STrack(vector<float> tlwh_, float score)\r\n{\r\n\t_tlwh.resize(4);\r\n\t_tlwh.assign(tlwh_.begin"
  },
  {
    "path": "deploy/ncnn/cpp/src/bytetrack.cpp",
    "chars": 11908,
    "preview": "#include \"layer.h\"\n#include \"net.h\"\n\n#if defined(USE_NCNN_SIMPLEOCV)\n#include \"simpleocv.h\"\n#include <opencv2/opencv.hpp"
  },
  {
    "path": "deploy/ncnn/cpp/src/kalmanFilter.cpp",
    "chars": 4695,
    "preview": "#include \"kalmanFilter.h\"\r\n#include <Eigen/Cholesky>\r\n\r\nnamespace byte_kalman\r\n{\r\n\tconst double KalmanFilter::chi2inv95["
  },
  {
    "path": "deploy/ncnn/cpp/src/lapjv.cpp",
    "chars": 7217,
    "preview": "#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <string.h>\r\n\r\n#include \"lapjv.h\"\r\n\r\n/** Column-reduction and reduction"
  },
  {
    "path": "deploy/ncnn/cpp/src/utils.cpp",
    "chars": 9498,
    "preview": "#include \"BYTETracker.h\"\r\n#include \"lapjv.h\"\r\n\r\nvector<STrack*> BYTETracker::joint_stracks(vector<STrack*> &tlista, vect"
  },
  {
    "path": "deploy/scripts/export_onnx.py",
    "chars": 3043,
    "preview": "from loguru import logger\n\nimport torch\nfrom torch import nn\n\nfrom yolox.exp import get_exp\nfrom yolox.models.network_bl"
  },
  {
    "path": "deploy/scripts/trt.py",
    "chars": 2143,
    "preview": "from loguru import logger\n\nimport tensorrt as trt\nimport torch\nfrom torch2trt import torch2trt\n\nfrom yolox.exp import ge"
  },
  {
    "path": "docs/DEPLOY.md",
    "chars": 1748,
    "preview": "# Deployment \n\nWe provide support to some popular deployment tools. This part is built upon the implementation of [YOLOX"
  },
  {
    "path": "exps/default/nano.py",
    "chars": 1320,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport torch.n"
  },
  {
    "path": "exps/default/yolov3.py",
    "chars": 3006,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport torch\ni"
  },
  {
    "path": "exps/default/yolox_l.py",
    "chars": 355,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.ex"
  },
  {
    "path": "exps/default/yolox_m.py",
    "chars": 357,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.ex"
  },
  {
    "path": "exps/default/yolox_s.py",
    "chars": 357,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.ex"
  },
  {
    "path": "exps/default/yolox_tiny.py",
    "chars": 496,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.ex"
  },
  {
    "path": "exps/default/yolox_x.py",
    "chars": 357,
    "preview": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.ex"
  },
  {
    "path": "exps/example/mot/yolox_dancetrack_test.py",
    "chars": 5073,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_dancetrack_test_hybrid_sort.py",
    "chars": 5539,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_dancetrack_test_hybrid_sort_reid.py",
    "chars": 5963,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_dancetrack_val.py",
    "chars": 5072,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_dancetrack_val_hybrid_sort.py",
    "chars": 5539,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_dancetrack_val_hybrid_sort_reid.py",
    "chars": 5963,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_l_mix_det.py",
    "chars": 4352,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_m_mix_det.py",
    "chars": 4354,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_nano_mix_det.py",
    "chars": 5189,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_s_mix_det.py",
    "chars": 4354,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_tiny_mix_det.py",
    "chars": 4387,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_ablation.py",
    "chars": 4505,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_ablation_hybrid_sort.py",
    "chars": 4966,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_ablation_hybrid_sort_reid.py",
    "chars": 5370,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_ch.py",
    "chars": 4354,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_det.py",
    "chars": 4603,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_det_hybrid_sort.py",
    "chars": 5057,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_det_hybrid_sort_reid.py",
    "chars": 5461,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_det_train.py",
    "chars": 4605,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_mot20_ch.py",
    "chars": 4645,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_mot20_ch_hybrid_sort.py",
    "chars": 5131,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_mot20_ch_hybrid_sort_reid.py",
    "chars": 5536,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_mot20_ch_train.py",
    "chars": 4647,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mix_mot20_ch_valhalf.py",
    "chars": 4653,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mot17_ablation_half_train.py",
    "chars": 4360,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mot17_half.py",
    "chars": 4356,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/example/mot/yolox_x_mot17_train.py",
    "chars": 4353,
    "preview": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolo"
  },
  {
    "path": "exps/permatrack_kitti_test/0000.txt",
    "chars": 354147,
    "preview": "0 1 Car -1 -1 -1 720.30 170.64 914.77 310.69 -1 -1 -1 -1000 -1000 -1000 -10 0.98\n0 2 Car -1 -1 -1 676.65 173.45 793.68 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0001.txt",
    "chars": 15303,
    "preview": "0 1 Car -1 -1 -1 483.81 173.31 658.93 242.23 -1 -1 -1 -1000 -1000 -1000 -10 0.93\n0 2 Car -1 -1 -1 1194.45 161.21 1236.85"
  },
  {
    "path": "exps/permatrack_kitti_test/0002.txt",
    "chars": 97279,
    "preview": "0 1 Car -1 -1 -1 436.32 183.45 498.72 231.80 -1 -1 -1 -1000 -1000 -1000 -10 0.92\n0 2 Car -1 -1 -1 555.20 179.74 591.20 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0003.txt",
    "chars": 63704,
    "preview": "0 1 Car -1 -1 -1 -2.54 209.60 356.33 369.89 -1 -1 -1 -1000 -1000 -1000 -10 0.98\n0 2 Car -1 -1 -1 323.26 197.18 449.98 28"
  },
  {
    "path": "exps/permatrack_kitti_test/0004.txt",
    "chars": 41732,
    "preview": "0 1 Car -1 -1 -1 193.61 191.06 437.96 342.77 -1 -1 -1 -1000 -1000 -1000 -10 0.97\n0 2 Car -1 -1 -1 628.99 172.36 658.62 1"
  },
  {
    "path": "exps/permatrack_kitti_test/0005.txt",
    "chars": 73206,
    "preview": "0 1 Car -1 -1 -1 223.46 196.73 331.60 245.55 -1 -1 -1 -1000 -1000 -1000 -10 0.72\n0 2 Car -1 -1 -1 258.53 191.84 320.19 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0006.txt",
    "chars": 21981,
    "preview": "0 1 Car -1 -1 -1 370.69 168.99 489.39 255.71 -1 -1 -1 -1000 -1000 -1000 -10 0.94\n1 1 Car -1 -1 -1 391.62 170.00 497.52 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0007.txt",
    "chars": 63299,
    "preview": "0 1 Car -1 -1 -1 644.01 186.16 701.50 231.62 -1 -1 -1 -1000 -1000 -1000 -10 0.95\n0 2 Car -1 -1 -1 854.72 182.94 890.33 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0008.txt",
    "chars": 83975,
    "preview": "0 1 Car -1 -1 -1 842.19 176.14 1038.64 334.22 -1 -1 -1 -1000 -1000 -1000 -10 0.97\n0 2 Car -1 -1 -1 716.49 178.83 846.64 "
  },
  {
    "path": "exps/permatrack_kitti_test/0009.txt",
    "chars": 54988,
    "preview": "0 1 Car -1 -1 -1 487.70 168.95 509.64 180.98 -1 -1 -1 -1000 -1000 -1000 -10 0.60\n1 1 Car -1 -1 -1 487.33 168.65 509.76 1"
  },
  {
    "path": "exps/permatrack_kitti_test/0010.txt",
    "chars": 281847,
    "preview": "0 1 Car -1 -1 -1 530.56 179.79 731.32 254.72 -1 -1 -1 -1000 -1000 -1000 -10 0.97\n0 2 Car -1 -1 -1 73.74 184.36 257.33 23"
  },
  {
    "path": "exps/permatrack_kitti_test/0011.txt",
    "chars": 369492,
    "preview": "0 1 Car -1 -1 -1 847.09 187.81 1235.58 369.99 -1 -1 -1 -1000 -1000 -1000 -10 0.98\n0 2 Car -1 -1 -1 718.03 189.50 900.05 "
  },
  {
    "path": "exps/permatrack_kitti_test/0012.txt",
    "chars": 372686,
    "preview": "0 1 Car -1 -1 -1 532.35 192.59 715.14 357.48 -1 -1 -1 -1000 -1000 -1000 -10 0.99\n0 2 Car -1 -1 -1 408.03 173.87 537.04 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0013.txt",
    "chars": 130016,
    "preview": "0 1 Car -1 -1 -1 773.43 182.57 868.92 240.18 -1 -1 -1 -1000 -1000 -1000 -10 0.95\n0 2 Van -1 -1 -1 398.45 168.18 450.33 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0014.txt",
    "chars": 593916,
    "preview": "0 1 Car -1 -1 -1 2.16 197.46 383.05 369.37 -1 -1 -1 -1000 -1000 -1000 -10 0.98\n0 2 Car -1 -1 -1 525.07 173.60 649.34 285"
  },
  {
    "path": "exps/permatrack_kitti_test/0015.txt",
    "chars": 336210,
    "preview": "0 1 Car -1 -1 -1 604.56 197.13 982.57 361.01 -1 -1 -1 -1000 -1000 -1000 -10 0.98\n0 2 Car -1 -1 -1 1066.70 175.05 1235.53"
  },
  {
    "path": "exps/permatrack_kitti_test/0016.txt",
    "chars": 251791,
    "preview": "0 1 Cyclist -1 -1 -1 871.26 155.94 972.71 246.42 -1 -1 -1 -1000 -1000 -1000 -10 0.89\n0 2 Pedestrian -1 -1 -1 334.65 163."
  },
  {
    "path": "exps/permatrack_kitti_test/0017.txt",
    "chars": 108627,
    "preview": "0 1 Car -1 -1 -1 682.96 172.39 903.26 246.13 -1 -1 -1 -1000 -1000 -1000 -10 0.92\n0 2 Car -1 -1 -1 998.47 163.51 1139.47 "
  },
  {
    "path": "exps/permatrack_kitti_test/0018.txt",
    "chars": 173181,
    "preview": "0 1 Car -1 -1 -1 246.26 185.91 386.63 259.15 -1 -1 -1 -1000 -1000 -1000 -10 0.96\n0 2 Car -1 -1 -1 381.09 172.92 457.72 2"
  },
  {
    "path": "exps/permatrack_kitti_test/0019.txt",
    "chars": 538646,
    "preview": "0 1 Cyclist -1 -1 -1 459.29 160.61 547.91 336.61 -1 -1 -1 -1000 -1000 -1000 -10 0.93\n0 2 Cyclist -1 -1 -1 311.16 158.54 "
  },
  {
    "path": "exps/permatrack_kitti_test/0020.txt",
    "chars": 207470,
    "preview": "0 1 Cyclist -1 -1 -1 851.42 167.11 987.90 307.31 -1 -1 -1 -1000 -1000 -1000 -10 0.94\n0 2 Car -1 -1 -1 1094.47 184.10 122"
  },
  {
    "path": "exps/permatrack_kitti_test/0021.txt",
    "chars": 235557,
    "preview": "0 1 Car -1 -1 -1 1095.11 184.42 1220.57 235.96 -1 -1 -1 -1000 -1000 -1000 -10 0.92\n0 2 Car -1 -1 -1 953.84 182.53 1067.9"
  },
  {
    "path": "exps/permatrack_kitti_test/0022.txt",
    "chars": 494379,
    "preview": "0 1 Cyclist -1 -1 -1 389.76 158.78 547.61 361.72 -1 -1 -1 -1000 -1000 -1000 -10 0.93\n0 2 Car -1 -1 -1 954.05 184.03 1067"
  },
  {
    "path": "exps/permatrack_kitti_test/0023.txt",
    "chars": 443507,
    "preview": "0 1 Car -1 -1 -1 953.80 183.77 1068.13 233.42 -1 -1 -1 -1000 -1000 -1000 -10 0.92\n0 2 Car -1 -1 -1 1095.17 185.12 1220.5"
  },
  {
    "path": "exps/permatrack_kitti_test/0024.txt",
    "chars": 445069,
    "preview": "0 1 Car -1 -1 -1 1095.16 185.54 1220.62 235.63 -1 -1 -1 -1000 -1000 -1000 -10 0.91\n0 2 Car -1 -1 -1 953.48 183.67 1068.6"
  },
  {
    "path": "exps/permatrack_kitti_test/0025.txt",
    "chars": 249669,
    "preview": "0 1 Car -1 -1 -1 955.03 183.02 1067.18 234.16 -1 -1 -1 -1000 -1000 -1000 -10 0.91\n0 2 Pedestrian -1 -1 -1 144.54 150.71 "
  },
  {
    "path": "exps/permatrack_kitti_test/0026.txt",
    "chars": 137184,
    "preview": "0 1 Pedestrian -1 -1 -1 729.08 163.15 752.05 226.15 -1 -1 -1 -1000 -1000 -1000 -10 0.75\n0 2 Pedestrian -1 -1 -1 569.54 1"
  },
  {
    "path": "exps/permatrack_kitti_test/0027.txt",
    "chars": 86019,
    "preview": "0 1 Car -1 -1 -1 555.45 194.16 680.29 301.62 -1 -1 -1 -1000 -1000 -1000 -10 0.97\n0 2 Van -1 -1 -1 998.96 119.68 1220.17 "
  },
  {
    "path": "exps/permatrack_kitti_test/0028.txt",
    "chars": 212900,
    "preview": "0 1 Car -1 -1 -1 560.18 164.07 599.56 201.78 -1 -1 -1 -1000 -1000 -1000 -10 0.87\n0 2 Pedestrian -1 -1 -1 665.64 162.72 6"
  },
  {
    "path": "fast_reid/CHANGELOG.md",
    "chars": 832,
    "preview": "# Changelog\n\n### v1.3\n\n#### New Features\n- Vision Transformer backbone, see config in `configs/Market1501/bagtricks_vit."
  },
  {
    "path": "fast_reid/GETTING_STARTED.md",
    "chars": 2219,
    "preview": "# Getting Started with Fastreid\n\n## Prepare pretrained model\n\nIf you use backbones supported by fastreid, you do not nee"
  },
  {
    "path": "fast_reid/INSTALL.md",
    "chars": 794,
    "preview": "# Installation\n\n## Requirements\n\n- Linux or macOS with python ≥ 3.6\n- PyTorch ≥ 1.6\n- torchvision that matches the Pytor"
  },
  {
    "path": "fast_reid/LICENSE",
    "chars": 11347,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "fast_reid/MODEL_ZOO.md",
    "chars": 14758,
    "preview": "# FastReID Model Zoo and Baselines\n\n## Introduction\n\nThis file documents collection of baselines trained with fastreid. "
  },
  {
    "path": "fast_reid/README.md",
    "chars": 4257,
    "preview": "<img src=\".github/FastReID-Logo.png\" width=\"300\" >\n\n[![Gitter](https://badges.gitter.im/fast-reid/community.svg)](https:"
  },
  {
    "path": "fast_reid/__init__.py",
    "chars": 10,
    "preview": "# hgx0914\n"
  },
  {
    "path": "fast_reid/configs/Base-AGW.yml",
    "chars": 285,
    "preview": "_BASE_: Base-bagtricks.yml\n\nMODEL:\n  BACKBONE:\n    WITH_NL: True\n\n  HEADS:\n    POOL_LAYER: GeneralizedMeanPooling\n\n  LOS"
  },
  {
    "path": "fast_reid/configs/Base-MGN.yml",
    "chars": 161,
    "preview": "_BASE_: Base-SBS.yml\n\nMODEL:\n  META_ARCHITECTURE: MGN\n\n  FREEZE_LAYERS: [backbone, b1, b2, b3,]\n\n  BACKBONE:\n    WITH_NL"
  },
  {
    "path": "fast_reid/configs/Base-SBS.yml",
    "chars": 944,
    "preview": "_BASE_: Base-bagtricks.yml\n\nMODEL:\n  FREEZE_LAYERS: [ backbone ]\n\n  BACKBONE:\n    WITH_NL: True\n\n  HEADS:\n    NECK_FEAT:"
  },
  {
    "path": "fast_reid/configs/Base-bagtricks.yml",
    "chars": 1112,
    "preview": "MODEL:\n  META_ARCHITECTURE: Baseline\n\n  BACKBONE:\n    NAME: build_resnet_backbone\n    NORM: BN\n    DEPTH: 50x\n    LAST_S"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/AGW_R101-ibn.yml",
    "chars": 186,
    "preview": "_BASE_: ../Base-AGW.yml\n\nMODEL:\n  BACKBONE:\n    DEPTH: 101x\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/AGW_R50-ibn.yml",
    "chars": 169,
    "preview": "_BASE_: ../Base-AGW.yml\n\nMODEL:\n  BACKBONE:\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSYSU\",)\n"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/AGW_R50.yml",
    "chars": 126,
    "preview": "_BASE_: ../Base-AGW.yml\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSYSU\",)\n\nOUTPUT_DIR: fast_reid/logs/CUHKSYSU/ag"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/AGW_S50.yml",
    "chars": 179,
    "preview": "_BASE_: ../Base-AGW.yml\n\nMODEL:\n  BACKBONE:\n    NAME: build_resnest_backbone\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: "
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/bagtricks_R101-ibn.yml",
    "chars": 198,
    "preview": "_BASE_: ../Base-bagtricks.yml\n\nMODEL:\n  BACKBONE:\n    DEPTH: 101x\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n "
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/bagtricks_R50-ibn.yml",
    "chars": 181,
    "preview": "_BASE_: ../Base-bagtricks.yml\n\nMODEL:\n  BACKBONE:\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSY"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/bagtricks_R50.yml",
    "chars": 138,
    "preview": "_BASE_: ../Base-bagtricks.yml\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSYSU\",)\n\nOUTPUT_DIR: fast_reid/logs/CUHKS"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/bagtricks_S50.yml",
    "chars": 191,
    "preview": "_BASE_: ../Base-bagtricks.yml\n\nMODEL:\n  BACKBONE:\n    NAME: build_resnest_backbone\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  T"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/mgn_R50-ibn.yml",
    "chars": 169,
    "preview": "_BASE_: ../Base-MGN.yml\n\nMODEL:\n  BACKBONE:\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSYSU\",)\n"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/sbs_R101-ibn.yml",
    "chars": 186,
    "preview": "_BASE_: ../Base-SBS.yml\n\nMODEL:\n  BACKBONE:\n    DEPTH: 101x\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/sbs_R50-ibn.yml",
    "chars": 169,
    "preview": "_BASE_: ../Base-SBS.yml\n\nMODEL:\n  BACKBONE:\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSYSU\",)\n"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/sbs_R50.yml",
    "chars": 126,
    "preview": "_BASE_: ../Base-SBS.yml\n\nDATASETS:\n  NAMES: (\"CUHKSYSU\",)\n  TESTS: (\"CUHKSYSU\",)\n\nOUTPUT_DIR: fast_reid/logs/CUHKSYSU/sb"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU/sbs_S50.yml",
    "chars": 225,
    "preview": "_BASE_: ../Base-SBS.yml\n\nMODEL:\n  BACKBONE:\n#    NORM: syncBN\n    NAME: build_resnest_backbone\n#  HEADS:\n#    NORM: sync"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU_DanceTrack/AGW_R101-ibn.yml",
    "chars": 219,
    "preview": "_BASE_: ../Base-AGW.yml\n\nMODEL:\n  BACKBONE:\n    DEPTH: 101x\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU_DanceTrack"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU_DanceTrack/AGW_R50-ibn.yml",
    "chars": 202,
    "preview": "_BASE_: ../Base-AGW.yml\n\nMODEL:\n  BACKBONE:\n    WITH_IBN: True\n\nDATASETS:\n  NAMES: (\"CUHKSYSU_DanceTrack\",)\n  TESTS: (\"C"
  },
  {
    "path": "fast_reid/configs/CUHKSYSU_DanceTrack/AGW_R50.yml",
    "chars": 159,
    "preview": "_BASE_: ../Base-AGW.yml\n\nDATASETS:\n  NAMES: (\"CUHKSYSU_DanceTrack\",)\n  TESTS: (\"CUHKSYSU_DanceTrack\",)\n\nOUTPUT_DIR: fast"
  }
]

// ... and 678 more files (download for full content)
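
Several baseline previews above are cut off by the condensed preview, for example TrackEval/trackeval/baselines/stp.py ("Simplest Tracker Possible ... assigns track IDs which maximis..."). The sketch below only illustrates the general idea of carrying track IDs forward by maximising frame-to-frame IoU; it is not the repository's implementation, and the threshold and box format are assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(prev_boxes, cur_boxes):
    """Pairwise IoU between two lists of [x1, y1, x2, y2] boxes."""
    ious = np.zeros((len(prev_boxes), len(cur_boxes)))
    for i, a in enumerate(prev_boxes):
        for j, b in enumerate(cur_boxes):
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            union = ((a[2] - a[0]) * (a[3] - a[1])
                     + (b[2] - b[0]) * (b[3] - b[1]) - inter)
            ious[i, j] = inter / (union + 1e-9)
    return ious

def assign_ids(prev_boxes, prev_ids, cur_boxes, next_id, iou_thresh=0.3):
    """Carry IDs forward by maximising total IoU; unmatched boxes get fresh IDs."""
    cur_ids = [-1] * len(cur_boxes)
    if len(prev_boxes) and len(cur_boxes):
        ious = iou_matrix(prev_boxes, cur_boxes)
        rows, cols = linear_sum_assignment(-ious)   # maximise the summed IoU
        for r, c in zip(rows, cols):
            if ious[r, c] >= iou_thresh:
                cur_ids[c] = prev_ids[r]
    for j, tid in enumerate(cur_ids):
        if tid == -1:
            cur_ids[j], next_id = next_id, next_id + 1
    return cur_ids, next_id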

About this extraction

This page contains the full source code of the ymzis69/HybridSORT GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 878 files (18.6 MB), approximately 4.9M tokens, and a symbol index with 4424 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
