[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n"
  },
  {
    "path": "README.md",
    "content": "# MTMC\nA paper list on Multi Target Multi Camera (MTMC) tracking and related topics <br/>\nincluding application cases in: vehicle tracking :red_car: , pedestrian tracking :frowning_person: , sports player tracking :soccer: . \n\n<details><summary>Click to show menu</summary>\n<p>\n\n1. <a href=\"#multi-target-single-camera-tracking-paper\">Multi Target Single Camera Tracking Paper </a> <br/>\n2. <a href=\"#multi-target-multi-camera-tracking-paper\">Multi Target Multi Camera Tracking Paper </a> <br/>\n3. <a href=\"#related-github-repo\">Related GitHub Repo</a> <br/>\n4. <a href=\"#related-competition\">Related Competition</a> <br/>\n<!--\n5. <a href=\"#related-group-or-researcher\">Related Group or Researcher</a>\n-->\n</p>\n</details>\n\n## Multi Target Single Camera Tracking Paper \n\n### 2022\n- Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking, Cao et al. [[paper]](https://arxiv.org/abs/2203.14360) [[code]](https://github.com/noahcao/OC_SORT)\n> interesting to see a variant of SORT (observation-centric) achieve decent results \n\n- PoserNet: Refining Relative Camera Poses Exploiting Object Detections, Taiana et al. :rainbow: [[paper]](https://arxiv.org/pdf/2207.09445.pdf) [[code]](https://github.com/IIT-PAVIS/PoserNet)\n> not tracking but seems applicable to MC tracking, detect bboxes from images and match roughly, use interesting GNN formulation to refine camera pose: image as node, edge as relative pose, bbox info added during message passing\n\n### 2021\n- ByteTrack: Multi-Object Tracking by Associating Every Detection Box, Zhang et al. [[paper]](https://arxiv.org/abs/2110.06864) [[code]](https://github.com/ifzhang/ByteTrack)\n> first associates boxes with high detection scores, then boxes with low scores, improving tracking of occluded objects\n\n- Quasi-Dense Similarity Learning for Multiple Object Tracking, Pang et al. 
:rainbow: [[paper]](https://arxiv.org/abs/2006.06664) [[code]](https://github.com/SysCV/qdtrack)\n> instance similarity learning based on region proposals, flexible, no external data required\n\n- TrackFormer: Multi-Object Tracking with Transformers, Meinhardt et al. [[paper]](https://arxiv.org/abs/2101.02702)\n> Transformer, detection and tracking simultaneously\n\n### 2020\n- How To Train Your Deep Multi-Object Tracker, Xu et al. :rainbow: [[paper]](https://arxiv.org/abs/1906.06618)\n> Deep Hungarian Net, differentiable approximation of MOTA/MOTP used directly as the loss function\n\n- Learning a Neural Solver for Multiple Object Tracking, Brasó & Leal-Taixé :rainbow: [[paper]](https://arxiv.org/abs/1912.07515)\n> appearance embedding (node) and geometry distance embedding (edge) for graph, edge classification with cross entropy loss \n\n- Deep learning in video multi-object tracking: A survey, Ciaparrone et al. [[paper]](https://arxiv.org/abs/1907.12740)\n> pipeline: detection, feature extraction, affinity, association\n\n- Chained-Tracker: Chaining Paired Attentive Regression Results for End-to-End Joint Multiple-Object Detection and Tracking, Peng et al. :rainbow: [[paper]](https://arxiv.org/abs/2007.14557) [[code]](https://github.com/pjl1995/CTracker) \n> end-to-end MOT, use adjacent frames (chained) to combine detection, feature extraction and tracking \n\n### 2019\n- Spatial-Temporal Relation Networks for Multi-Object Tracking, Xu et al. [[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Xu_Spatial-Temporal_Relation_Networks_for_Multi-Object_Tracking_ICCV_2019_paper.pdf)\n> use appearance, location and topology cues for similarity score, then graph solved by Hungarian algorithm\n\n- Graph convolutional tracking, Gao et al. [[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Gao_Graph_Convolutional_Tracking_CVPR_2019_paper.pdf)\n> GNN, Siamese network\n\n- Tracking without bells and whistles, Bergmann et al. 
[[paper]](https://arxiv.org/abs/1903.05625) [[code]](https://github.com/phil-bergmann/tracking_wo_bnw)\n> motion and appearance extension -> Tracktor++\n\n- Deep Learning for Visual Tracking: A Comprehensive Survey, Marvasti-Zadeh et al. [[paper]](https://arxiv.org/abs/1912.00535)\n> traditional and deep visual trackers \n\n- A Review of Visual Trackers and Analysis of its Application to Mobile Robot, You et al. [[paper]](https://arxiv.org/abs/1910.09761)\n> correlation filter, deep learning and convolutional features\n\n### 2018\n\n- Exploit the Connectivity: Multi-Object Tracking with TrackletNet, Wang et al. [[paper]](https://arxiv.org/abs/1811.07258)\n> use epipolar geometry, tracklet as node in graph\n\n- Real-time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-Identification, Chen et al. [[paper]](https://arxiv.org/abs/1809.04427) [[code]](https://github.com/longcw/MOTDT)\n> online MOT tracker\n\n### 2017\n- Multi-Object Tracking with Quadruplet Convolutional Neural Networks, Son et al. [[paper]](https://openaccess.thecvf.com/content_cvpr_2017/papers/Son_Multi-Object_Tracking_With_CVPR_2017_paper.pdf)\n> learn statistics to normalize effect of camera poses, temporal adjacency constraint for data association \n\n- Real-Time Multiple Object Tracking, Murray. [[paper]](https://www.diva-portal.org/smash/get/diva2:1146388/FULLTEXT01.pdf)\n> does not use appearance features, very fast but not accurate\n\n- High-Speed Tracking-by-Detection Without Using Image Information, Bochinski et al. [[paper]](http://elvera.nue.tu-berlin.de/files/1517Bochinski2017.pdf) [[code]](https://github.com/bochinski/iou-tracker)\n> IoU tracker, no visual cues used, fast \n\n- Online Multi-Target Tracking Using Recurrent Neural Networks, Milan et al. [[paper]](https://arxiv.org/abs/1604.03635)\n> RNN as tracker, LSTM for data association\n\n### 2016\n- Learning by tracking: Siamese CNN for robust target association, Leal-Taixé et al. 
[[paper]](https://arxiv.org/abs/1604.07866)\n> use Siamese CNN to learn similarity for data association, graph solved by Linear Programming \n\n### 2014\n- Learning an image-based motion context for multiple people tracking, Leal-Taixé et al. [[paper]](https://ieeexplore.ieee.org/document/6909848)\n> interaction between objects, relaxes the dependency of tracking on detections\n\n\n## Multi Target Multi Camera Tracking Paper\n\n### 2022\n- Graph Convolutional Network for Multi-Target Multi-Camera Vehicle Tracking, Luna et al. [[paper]](https://arxiv.org/pdf/2211.15538.pdf)\n> step 1: single camera tracking & generate appearance feature, step 2: multi camera association with GNN (single camera trajectories as node, averaged feature as node feature, cos(feature) as edge feature), weighted loss for imbalance\n\n### 2021\n- DyGLIP: A Dynamic Graph Model with Link Prediction for Accurate Multi-Camera Multiple Object Tracking, Quach et al. [[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Quach_DyGLIP_A_Dynamic_Graph_Model_With_Link_Prediction_for_Accurate_CVPR_2021_paper.pdf)\n> tracklet as node, link prediction for data association, works with or without overlapping views, uses large training data \n\n- Online Clustering-based Multi-Camera Vehicle Tracking in Scenarios with overlapping FOVs, Luna et al. [[paper]](https://arxiv.org/pdf/2102.04091.pdf)\n> detection -> feature extraction, homography -> cross-camera cluster -> incremental temporal association, small latency, not very accurate\n\n### 2020\n\n- Real-time 3D Deep Multi-Camera Tracking, You & Jiang [[paper]](https://arxiv.org/abs/2003.11753)\n> fuses all views into a ground-plane occupancy heatmap \n\n- City-Scale Multi-Camera Vehicle Tracking by Semantic Attribute Parsing and Cross-Camera Tracklet Matching, He et al. 
[[paper]](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w35/He_City-Scale_Multi-Camera_Vehicle_Tracking_by_Semantic_Attribute_Parsing_and_Cross-Camera_CVPRW_2020_paper.pdf)\n> tracklet representation with spatial-temporal attention, then tracklet-to-target assignment\n\n- Multi-Target Multi-Camera Tracking by Tracklet-to-Target Assignment, He et al. [[paper]](https://ieeexplore.ieee.org/document/9042858) [[code]](https://github.com/GehenHe/TRACTA)\n> tracklet-to-target assignment\n\n- AI City Challenge 2020 – Computer Vision for Smart Transportation Applications, Chang et al. [[paper]](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w35/Chang_AI_City_Challenge_2020_-_Computer_Vision_for_Smart_Transportation_CVPRW_2020_paper.pdf)\n> single camera tracklet -> multi-camera tracklet fusion with appearance and physical features\n\n- Multi-Camera Tracking of Vehicles based on Deep Features Re-ID and Trajectory-Based Camera Link Models, Hsu et al. [[paper]](https://openaccess.thecvf.com/content_CVPRW_2019/papers/AI%20City/Hsu_Multi-Camera_Tracking_of_Vehicles_based_on_Deep_Features_Re-ID_and_CVPRW_2019_paper.pdf)\n> use TrackletNet for single camera trajectory -> inter-camera tracking\n\n- ELECTRICITY: An Efficient Multi-camera Vehicle Tracking System for Intelligent City, Qian et al. [[paper]](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w35/Qian_ELECTRICITY_An_Efficient_Multi-Camera_Vehicle_Tracking_System_for_Intelligent_City_CVPRW_2020_paper.pdf)\n> single camera tracking -> match tracklets across camera views\n\n- Pose-Assisted Multi-Camera Collaboration for Active Object Tracking, Li et al. [[paper]](https://arxiv.org/abs/2001.05161) [[code]](https://github.com/LilJing/pose-assisted-collaboration)\n> Reinforcement learning, collaborative multi-camera\n\n- Reconstruction of 3D flight trajectories from ad-hoc camera networks, Li et al. 
[[paper]](https://arxiv.org/abs/2003.04784) [[code]](https://github.com/CenekAlbl/mvus)\n> camera synchronization, SfM, Bundle Adjustment, spline representation for drone trajectory \n\n- The MTA Dataset for Multi Target Multi Camera Pedestrian Tracking by Weighted Distance Aggregation, Kohl et al. [[paper]](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w70/Kohl_The_MTA_Dataset_for_Multi-Target_Multi-Camera_Pedestrian_Tracking_by_Weighted_CVPRW_2020_paper.pdf)\n> combine appearance and homography for hierarchical clustering, known camera pose\n\n- Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS, Chen et al. [[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Chen_Cross-View_Tracking_for_Multi-Human_3D_Pose_Estimation_at_Over_100_CVPR_2020_paper.pdf) \n\n### 2019\n- People tracking in multi-camera systems: a review, Iguernaissi et al. [[paper]](https://link.springer.com/article/10.1007/s11042-018-6638-5)\n> Centralized (combine cross-camera views before tracking, like Wen et al.) and Distributed methods (single-camera tracking before fusion)\n\n- CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification, Tang et al. [[paper]](https://arxiv.org/abs/1903.09254)\n\n- Real-Time Multi-Target Multi-Camera Tracking with Spatial-Temporal Information, Zhang & Izquierdo :rainbow: [[paper]](https://ieeexplore.ieee.org/document/8965845)\n> single camera detection -> create/match to track, with appearance, motion, spatial-temporal cues (cross-camera)\n\n### 2018\n- Features for Multi-Target Multi-Camera Tracking and Re-Identification, Ristani & Tomasi [[paper]](https://arxiv.org/abs/1803.10859) [[code]](https://github.com/SamvitJ/Duke-DeepCC)\n> tracklet -> single camera trajectory (correlation clustering) -> multi camera trajectory\n\n- Vehicle Re-Identification with the Space-Time Prior, Wu et al. 
[[paper]](https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w3/Wu_Vehicle_Re-Identification_With_CVPR_2018_paper.pdf) [[code]](https://github.com/cw1204772/AIC2018_iamai)\n> single camera tracking -> CNN feature extraction -> multi camera tracking (KMeans)\n\n### 2017\n- Multi-Camera Multi-Target Tracking with Space-Time-View Hyper-graph, Wen et al. :rainbow: [[paper]](https://link.springer.com/article/10.1007/s11263-016-0943-0)\n> 3D position for affinity computation, requires known camera parameters, cross-view coupling before trajectory\n\n### 2014\n- Persistent Tracking for Wide Area Aerial Surveillance, Prokaj & Medioni :rainbow: [[paper]](https://ieeexplore.ieee.org/document/6909551)\n> two trackers (detection and regression) in parallel, measure their correspondence\n\n### 2013\n- Hypergraphs for joint multi-view reconstruction and multi-object tracking, Hofmann et al. :rainbow: [[paper]](https://ieeexplore.ieee.org/document/6619312) [[code]](https://github.com/neohanju/HYPERGRAPH_TRACKING)\n> detection as node in hypergraph to find 3D reconstruction, which is node in a min-cost flow graph, solved by binary linear programming\n\n### 2012\n- Branch-and-price global optimization for multi-view multi-target tracking, Leal-Taixé et al. 
[[paper]](https://www.researchgate.net/publication/261200087_Branch-and-price_global_optimization_for_multi-view_multi-target_tracking)\n\n## Related GitHub Repo\n- [Multi-camera live object tracking](https://github.com/LeonLok/Multi-Camera-Live-Object-Tracking)\n\n- [Resource collection about multi camera networks](https://github.com/YanLu-nyu/Awesome-Multi-Camera-Network)\n\n- [Resource collection about multi object tracking](https://github.com/nightmaredimple/Multi-object-Tracking-paper-code-list)\n\n- [Multi Object Tracking Paper List](https://github.com/SpyderXu/multi-object-tracking-paper-list)\n\n- [UAV detection and tracking](https://github.com/tau-adl/Detection_Tracking_JetsonTX2)\n\n- [Resource collection about person reid datasets](https://github.com/NEU-Gou/awesome-reid-dataset)\n\n- [OpenMMLab: toolbox for SOT, MOT](https://github.com/open-mmlab/mmtracking)\n\n- [DeepOcclusion](https://github.com/pierrebaque/DeepOcclusion)\n\n- [MOT Metrics library (Python)](https://github.com/cheind/py-motmetrics)\n\n- [MOT Metrics library (Python) 2](https://github.com/Videmo/pymot)\n\n- [Multi camera person tracker for synthetic data](https://github.com/koehlp/wda_tracker)\n\n## Related Dataset\n- [Multi Track Auto (GTA)](https://github.com/schuar-iosb/mta-dataset) [[baseline provided](https://github.com/koehlp/wda_tracker)]\n\n- [BDD100K large driving dataset](https://github.com/bdd100k/bdd100k)\n\n- [Visual Tracker Benchmark](http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html)\n\n- [DJI Drone Images](https://github.com/chuanenlin/drone-net)\n\n## Related Competition\n- [AI City Challenge](https://www.aicitychallenge.org/)\n\n- [Anti-UAV Challenge](https://anti-uav.github.io/)\n\n- [Waymo Open Dataset Challenge](https://waymo.com/open/challenges)\n\n- [SoccerNet](https://www.soccer-net.org/home)\n\n<!--\n## Related Group or Researcher\n\n- [Dynamic Vision and Learning Group, TUM](https://dvl.in.tum.de/research/)\n> TrackFormer, Tracktor++, 
Siamese\n\n- [CVLab, EPFL](https://www.epfl.ch/labs/cvlab/research/research-surv/research-body-surv-index-php/)\n> Probabilistic Occupancy Map\n\n[DeepSORT](https://github.com/nwojke/deep_sort)\n-->\n"
  }
]